CNN-Driven Enhancement in Facial Emotion Recognition Systems

Authors

  • V. Ajitha, T. S. Suganya, S. Irin Sherly, P. Jose, C. Gnanaprakasam, M. Krishnaraj

Keywords

Facial Emotion Recognition, Convolutional Neural Networks, Data Augmentation, Transfer Learning, FER2013 Dataset.

Abstract

Facial emotion recognition is crucial for human-computer interaction, mental health diagnostics, and security applications, all of which depend on accurately detecting and classifying human emotions from facial expressions. This study develops a robust, high-accuracy model for recognizing and classifying emotions from facial expressions using convolutional neural networks (CNNs). It emphasizes transfer learning and data augmentation to improve performance on FER2013, a dataset of 35,887 grayscale 48x48-pixel images covering seven emotions (surprise, happiness, sadness, disgust, anger, fear, and neutral). The proposed model fine-tunes a pre-trained VGG16 CNN on the FER2013 data. Preprocessing includes resizing the images to 224x224 pixels to match the VGG16 input, normalization, and data augmentation techniques such as rotation, scaling, and shifting to improve robustness. The CNN extracts features from the input images and passes them through fully connected layers to produce a probability distribution over the seven emotion categories via a softmax activation. The model is trained with the categorical cross-entropy loss and the Adam optimizer, starting from a learning rate of 0.001 that decreases tenfold every ten epochs; the batch size is 32, and training runs for 50 epochs with early stopping. Transfer learning and data augmentation significantly improved the model's accuracy, which reached 83% on the test set. Precision, recall, and the F1-score were computed for each emotion category, with the 'happy' class showing the highest precision (0.92) and recall (0.93). The confusion matrix revealed a high true-positive rate and identified areas of misclassification, particularly between similar emotions such as sadness and neutral.
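As a hedged illustration (not the authors' pipeline), the preprocessing steps described above — upscaling the 48x48 inputs, normalization, and a simple shift augmentation — can be sketched in NumPy; the nearest-neighbour resize and the `random_shift` helper here are illustrative stand-ins for a real image library:

```python
import numpy as np

def resize_nearest(img, size=224):
    # Crude nearest-neighbour upscaling from 48x48 to 224x224
    # (illustrative only; a real pipeline would use an image library).
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def normalize(img):
    # Scale 8-bit pixel intensities into [0, 1].
    return img.astype(np.float32) / 255.0

def random_shift(img, max_shift=4, rng=None):
    # Shift augmentation: roll the image by a random offset in each axis.
    rng = rng or np.random.default_rng(0)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# A stand-in for one grayscale FER2013 image.
img = np.random.default_rng(42).integers(0, 256, size=(48, 48))
x = normalize(resize_nearest(img))
aug = random_shift(x)
```

Rotation and scaling would follow the same pattern, each producing a perturbed copy of the input so the network never sees exactly the same image twice.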
The training and validation loss curves converged closely, indicating a good fit without overfitting. Accuracy rose steadily over training, plateauing at approximately 83% on both the training and validation sets. The model shows potential for applications in mental health monitoring, customer feedback analysis, and personalized content delivery. Future work will focus on refining the model and applying it to real-world scenarios to enhance its robustness and accuracy in diverse environments.
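As a hedged sketch (not the authors' code), the classification head and training schedule described in the abstract — a softmax over seven classes, categorical cross-entropy, and a learning rate that drops tenfold every ten epochs — reduce to a few lines of NumPy:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: subtract the row max before exponentiating.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(probs, one_hot):
    # Mean negative log-likelihood of the true class.
    return float(-np.mean(np.sum(one_hot * np.log(probs + 1e-12), axis=-1)))

def step_decay_lr(epoch, base_lr=1e-3, drop=0.1, every=10):
    # Start at 0.001 and decrease tenfold every ten epochs, as in the abstract.
    return base_lr * (drop ** (epoch // every))

# Seven logits for one image, one per FER2013 emotion class (values illustrative).
logits = np.array([[2.0, 0.5, 0.1, -1.0, 0.0, 0.3, 1.2]])
probs = softmax(logits)
target = np.zeros((1, 7))
target[0, 0] = 1.0  # one-hot label; index 0 chosen arbitrarily for illustration
loss = categorical_cross_entropy(probs, target)
```

In a framework like Keras, the same schedule would typically be attached as a learning-rate callback alongside the early-stopping criterion mentioned above.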


References

K. Fujii, D. Sugimura and T. Hamamoto, "Hierarchical Group-level Emotion Recognition in the Wild," 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), Lille, France, 2019, pp. 1-5, doi: 10.1109/FG.2019.8756573.

Ranjan, Rakesh, and Bikash Chandra Sahana. "An efficient facial feature extraction method based supervised classification model for human facial emotion identification," In 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 1-6. IEEE, 2019.

Sharma, Garima, Latika Singh, and Sumanlata Gautam. "Facial Feature Extraction for Emotion Classification Using Fuzzy C-Mean Clustering," Recent Advances in Computer Science and Communications (Formerly: Recent Patents on Computer Science) 14, no. 7 (2021): 2210-2219.

Tribedi, Sabyasachi, and Ranjit Kumar Barai. "Generating Context-Free Group-Level Emotion Landscapes Using Image Processing and Shallow Convolutional Neural Networks," In Progress in Computing, Analytics and Networking: Proceedings of ICCAN 2019, pp. 313-325. Springer Singapore, 2020.

Korkmaz, Timuçin, and Hamza Erol. "Classification Of Human Facial Expressions For Emotion Recognition Using A Distributed Computer System," In 2020 5th International Conference on Computer Science and Engineering (UBMK), pp. 1-6. IEEE, 2020.

Kehri, Vikram, and R. N. Awale. "A facial EMG data analysis for emotion classification based on spectral kurtogram and CNN," International Journal of Digital Signals and Smart Systems 4, no. 1-3 (2020): 50-63.

Kim, Jun-Hwa, and Chee Sun Won. "Emotion enhancement for facial images using GAN," In 2020 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), pp. 1-4. IEEE, 2020.

Arya, R., and E. R. Vimina. "An evaluation of local binary descriptors for facial emotion classification," Innovations in Computer Science and Engineering: Proceedings of 7th ICICSE (2020): 195-205.

Abbassi, Nessrine, Rabie Helaly, Mohamed Ali Hajjaji, and Abdellatif Mtibaa. "A deep learning facial emotion classification system: a VGGNet-19 based approach," In 2020 20th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), pp. 271-276. IEEE, 2020.

Agastya, Wisnu, and Hanny Haryanto. "Sequential Model for Mapping Compound Emotions in Indonesian Sentences," Journal of Applied Intelligent System 5, no. 1 (2020): 32-46.

Karbauskaitė, Rasa, Leonidas Sakalauskas, and Gintautas Dzemyda. "Kriging predictor for facial emotion recognition using numerical proximities of human emotions," Informatica 31, no. 2 (2020): 249-275.

Ragupathy, P., and P. Vivekanandan. "A modified fuzzy histogram of optical flow for emotion classification," Journal of Ambient Intelligence and Humanized Computing 12, no. 3 (2021): 3601-3608.

Kim, Jaemyung, Jin-Ku Kang, and Yongwoo Kim. "A resource efficient integer-arithmetic-only FPGA-based CNN accelerator for real-time facial emotion recognition," IEEE Access 9 (2021): 104367-104381.

Fujii, Katsuya, Daisuke Sugimura, and Takayuki Hamamoto. "Hierarchical group-level emotion recognition," IEEE Transactions on Multimedia 23 (2020): 3892-3906.

Gund, Manasi, Abhiram Ravi Bharadwaj, and Ifeoma Nwogu. "Interpretable emotion classification using temporal convolutional models," In 2020 25th international conference on pattern recognition (ICPR), pp. 6367-6374. IEEE, 2021.

Fnaiech, Ahmed, Hanene Sahli, Mounir Sayadi, and Philippe Gorce. "Fear facial emotion recognition based on angular deviation," Electronics 10, no. 3 (2021): 358.

Poulose, Alwin, Jung Hwan Kim, and Dong Seog Han. "Feature vector extraction technique for facial emotion recognition using facial landmarks," In 2021 International Conference on Information and Communication Technology Convergence (ICTC), pp. 1072-1076. IEEE, 2021.

Wang, Shuai, Jingzi Qu, Yong Zhang, and Yidie Zhang. "Multimodal emotion recognition from EEG signals and facial expressions," IEEE Access 11 (2023): 33061-33068.

Wang, Jiawen, and Leah Kawka. "GiMeFive: Towards Interpretable Facial Emotion Classification," arXiv preprint arXiv:2402.15662 (2024).

M. S. Kavitha, R. Harikrishnan, S. Vijay Shankar, S. Krishnakumari, and S. Irin Sherly. "A Smart Agriculture Friendly Intelligent Flying Robot Design to Identify Plant Leaf Diseases by using Digital Image Processing with Learning Approach," In 2023 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), IEEE, 2023, doi: 10.1109/ICSES60034.2023.10465304.

T. Judgi, R. Anitha, S. Padma, G. Nishanthi, and S. Irin Sherly. "A Secured Framework Model to Design Electronic Voting System using Block Chain Methodology," In 2023 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), IEEE, 2023, doi: 10.1109/ICSES60034.2023.10465374.

Geetha, M. Gomathi, A. Jayalakshmi, K. Neela, and S. Irin Sherly. "A Novel Methodology to Identify Autism Disorder in Earlier Stages using Artificial Intelligence Assisted Hybrid Learning Scheme," In 2023 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), IEEE, 2023, doi: 10.1109/ICSES60034.2023.10465373.

Irin Sherly, P. M. Kavitha, S. Jayachandran, M. Robinson Joel, and R. Jeena. "Empowering the Visually Impaired: Arduino Mega Enhanced Accessibility Device," International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 8s (2023), doi: https://doi.org/10.17762/ijritcc.v11i8s.7223.

Prabu Sankar, N., et al. "Study of ECG Analysis based Cardiac Disease Prediction using Deep Learning Techniques," International Journal of Intelligent Systems and Applications in Engineering 11, no. 4 (2023): 431-438.

S. Irin Sherly and G. Mathivanan. "An Ensemble Based Heart Disease Prediction using Gradient Boosting Decision Tree," Turkish Journal of Computer and Mathematics Education 12, no. 10 (2021): 3648-3660.

S. Irin Sherly and G. Mathivanan. "An efficient honey badger based Faster region CNN for chronic heart failure prediction," Biomedical Signal Processing and Control 79 (2023): 104165.

S. Irin Sherly and G. Mathivanan. "ECG Signal Noises Versus Filters for Signal Quality Improvement," In 2021 International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), IEEE, 2021, doi: 10.1109/ICAECT49130.2021.9392621.


Published

12.06.2024

How to Cite

V. Ajitha. (2024). CNN-Driven Enhancement in Facial Emotion Recognition Systems. International Journal of Intelligent Systems and Applications in Engineering, 12(4), 2343 –. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/6620

Issue

Section

Research Article