The rapid advancement of deepfake technology has raised significant concerns regarding misinformation, privacy breaches, and digital fraud. Existing deepfake detection models, particularly those based on deep learning, often require substantial computational resources, making them unsuitable for real-time applications on mobile devices. This study aims to develop a lightweight deepfake detection model that improves accuracy while maintaining computational efficiency. To achieve this, we propose a hybrid approach that integrates Fast Fourier Transform (FFT), MobileNet, and an Attention mechanism. The FFT component enables frequency-domain analysis to reveal subtle deepfake artifacts, MobileNet provides a lightweight convolutional backbone, and the Attention layer emphasizes the most discriminative features. The proposed model was evaluated on a benchmark deepfake dataset and outperformed the standard MobileNet model. Specifically, it achieved an accuracy of 94.2%, an F1-score of 93.8%, and a 27.5% improvement in computational efficiency over conventional CNN-based approaches. These findings indicate that integrating FFT and Attention mechanisms significantly enhances the model's ability to distinguish real from manipulated media while reducing computational overhead. The contribution of this study lies in presenting a deepfake detection model that balances accuracy and efficiency, making it suitable for deployment in mobile and resource-constrained environments. Future research should explore further optimization for energy efficiency, the adoption of lightweight Transformer architectures, and extensive testing on diverse datasets to improve robustness against real-world variations.
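To make the described hybrid more concrete, the following is a minimal PyTorch sketch of one plausible way to combine a frequency-domain branch (2-D FFT of the input), a MobileNet-V2 backbone, and a simple attention gate. The fusion strategy, layer sizes, branch design, and the choice of MobileNet-V2 are illustrative assumptions, not the exact architecture evaluated in this study.

```python
# A minimal sketch of the FFT + MobileNet + Attention pipeline described above.
# Layer names, feature dimensions, and the fusion strategy are illustrative
# assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class FFTMobileNetAttention(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Lightweight convolutional backbone (spatial-domain branch).
        self.backbone = mobilenet_v2(weights=None).features
        # Small convolutional stem for the frequency-domain branch,
        # operating on the log-magnitude spectrum of each image.
        self.freq_stem = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Channel attention gate over the fused feature vector (assumed design).
        fused_dim = 1280 + 32
        self.attention = nn.Sequential(
            nn.Linear(fused_dim, fused_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(fused_dim // 4, fused_dim),
            nn.Sigmoid(),
        )
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial branch: MobileNet features pooled to a vector.
        spatial = self.backbone(x).mean(dim=(2, 3))        # (B, 1280)
        # Frequency branch: 2-D FFT magnitude highlights periodic artifacts.
        spectrum = torch.fft.fft2(x, norm="ortho")
        magnitude = torch.log1p(torch.abs(spectrum))
        freq = self.freq_stem(magnitude).flatten(1)        # (B, 32)
        # Fuse the two branches and reweight features with the attention gate.
        fused = torch.cat([spatial, freq], dim=1)
        return self.classifier(fused * self.attention(fused))


if __name__ == "__main__":
    model = FFTMobileNetAttention()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])
```

In this sketch, the frequency branch is deliberately shallow so that the added cost over the plain MobileNet backbone stays small, which is consistent with the efficiency goal stated above; the actual model may fuse the branches or apply attention differently.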