Advancements in Video Deepfake Detection: Integration of ResNet50, EfficientNetB7, and EfficientNetAutoAttB4 Models
Keywords: Convolutional neural networks, Deepfakes, Deep learning, EfficientNet, GAN

Abstract
This study aims to foster responsible advances in facial manipulation techniques by developing reliable methods for detecting deepfakes. The proposed model averages the predictions of three separate frameworks; it was trained on a variety of datasets, with its false positives and false negatives analysed to guide refinement. It outperforms other deepfake detection techniques on both dynamic and static deepfakes. The architecture is robust, requires minimal computing power, and adapts to the distinctive artefacts of deepfake content, making it suitable for real-world deployment. The model also supports further research into optimising ensemble models and evaluating advanced training methods such as knowledge distillation. Its efficacy, efficiency, and scalability offer a feasible approach in the fight against deepfakes and a foundation for future research and development in deepfake detection technologies. The novelty lies in the model's potential for real-life implementation and its demonstrated effectiveness in addressing the challenges of deepfake video detection.
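The abstract describes a model "based on the average of three separate frameworks". A minimal sketch of such score-level ensembling is shown below, assuming each backbone (ResNet50, EfficientNetB7, EfficientNetAutoAttB4) emits a per-frame probability that a frame is fake and the video-level decision is the unweighted mean across models and frames; the function names, the frame scores, and the 0.5 decision threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def video_deepfake_score(per_model_frame_scores):
    """Average fake-probabilities over models and frames.

    per_model_frame_scores: list of 1-D sequences, one per backbone,
    each holding per-frame fake-probabilities in [0, 1].
    """
    stacked = np.stack([np.asarray(s, dtype=float) for s in per_model_frame_scores])
    return float(stacked.mean())  # unweighted mean over all models and frames

def is_fake(score, threshold=0.5):
    # Threshold is an assumption; a deployed system would tune it on validation data.
    return score >= threshold

# Toy example: three backbones scoring four sampled frames of one video.
resnet50_scores       = [0.91, 0.88, 0.95, 0.90]
effnet_b7_scores      = [0.84, 0.80, 0.89, 0.86]
effnet_autoatt_scores = [0.93, 0.90, 0.97, 0.94]

score = video_deepfake_score([resnet50_scores, effnet_b7_scores, effnet_autoatt_scores])
print(score, is_fake(score))
```

Averaging at the score level (rather than fusing features) keeps each backbone independently trainable and lets the ensemble degrade gracefully if one model is unavailable.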
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.