Deepfake Face Detection Using LSTM and CNN
Keywords:
Python, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) classifiers, Anaconda Navigator – Spyder
Abstract
The rapid evolution of deepfake generation technology poses a serious threat to the trustworthiness of media content, and the consequences for targeted individuals and institutions can be severe. In this work, we study recent developments in deep learning architectures, especially CNNs and Transformers. We identified eight promising deep learning architectures, designed and developed our own deepfake detection models, and conducted experiments on well-established deepfake datasets, including current second- and third-generation deepfake datasets. We evaluated the effectiveness of our single-model detectors both in standard deepfake detection and in cross-dataset evaluations. This study introduces a comprehensive methodology for facial image analysis, starting with a carefully curated dataset of images in '.jpg' or '.png' formats. This standardization is essential for the subsequent feature extraction step, which focuses on capturing important characteristics of the faces through local features such as the mean, standard deviation, and variance. These statistical metrics provide a robust basis for modeling variations in facial attributes, enhancing the model's ability to distinguish between different identities. To further improve performance, a Transformer architecture is leveraged for face detection, coupled with advanced data augmentation techniques that enlarge the dataset and help the model generalize better to unseen images. The core of the analysis is a deep learning framework that integrates Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) classifiers. This hybrid approach capitalizes on the strengths of both architectures: CNNs excel at extracting spatial features from images, while LSTMs are adept at capturing temporal dependencies, making them well suited to sequential data. The dataset is split into training (90%) and testing (10%) subsets to facilitate effective model training and evaluation. Performance metrics, mainly accuracy and error rates, are employed to assess the model's effectiveness in face recognition tasks. By analyzing these metrics, the study provides valuable insights into the strengths and limitations of the proposed approach, laying the foundation for future improvements in facial image analysis and recognition technology.
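To make the statistical feature extraction step concrete, the following is a minimal Python sketch of computing the per-image mean, standard deviation, and variance descriptors mentioned above. It assumes OpenCV and NumPy are available; the function name, image size, and file path are illustrative placeholders, not the authors' actual code.

# Hypothetical sketch of the local statistical feature extraction described in the abstract.
import cv2
import numpy as np

def extract_local_features(image_path):
    """Return a simple statistical descriptor (mean, std, variance) for a face image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)  # load a '.jpg' or '.png' face crop
    if img is None:
        raise FileNotFoundError(image_path)
    img = cv2.resize(img, (128, 128)).astype(np.float32) / 255.0  # standardize size and scale
    return np.array([img.mean(), img.std(), img.var()], dtype=np.float32)

# Example usage (the path is a placeholder):
# features = extract_local_features("faces/sample_face.jpg")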
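The hybrid CNN + LSTM classifier, the 90/10 train/test split, and the accuracy/error evaluation can be sketched as follows. This is an assumed implementation using TensorFlow/Keras (a framework the abstract does not name explicitly); the layer sizes, sequence length, and placeholder data are illustrative only and do not reproduce the authors' configuration.

# Minimal sketch: a per-frame CNN feeds an LSTM that models temporal dependencies across frames.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 10, 64, 64, 3  # assumed frame-sequence shape

def build_cnn_lstm():
    # CNN extracts spatial features per frame; LSTM captures temporal cues across the sequence.
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
    ])
    model = models.Sequential([
        layers.TimeDistributed(cnn, input_shape=(SEQ_LEN, H, W, C)),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),  # real (0) vs. deepfake (1)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Placeholder data standing in for extracted face sequences; 90% training, 10% testing.
X = np.random.rand(20, SEQ_LEN, H, W, C).astype("float32")
y = np.random.randint(0, 2, size=(20,))
split = int(0.9 * len(X))

model = build_cnn_lstm()
model.fit(X[:split], y[:split], epochs=1, batch_size=4, verbose=0)
loss, acc = model.evaluate(X[split:], y[split:], verbose=0)
print(f"test accuracy: {acc:.3f}, error rate: {1 - acc:.3f}")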