Facial Emotion Recognition using Three-Layer ConvNet with Diversity in Data and Minimum Epochs
Keywords:
Convolutional Neural Network (CNN), Facial Expression, Emotions, Epochs, Accuracy
Abstract
Human emotions can be identified by recognizing facial expressions, a capability with applications in medical diagnostics, human emotional analysis, human-robot interaction, and related fields. This study presents a novel Convolutional Neural Network (CNN) model, “ConvNet-3”, for recognizing emotions from facial images. The model classifies seven emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. The main focus of the proposed research is achieving high training accuracy in fewer epochs. ConvNet-3 consists of three convolutional layers and two fully connected layers, and its performance is evaluated on the FER2013 dataset. As the experimental results illustrate, ConvNet-3 obtains a training accuracy of 88% and a validation accuracy of 61% on FER2013, which is better than existing models. In contrast, the model is observed to overfit on the CK+48 dataset.
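To make the three-conv-layer architecture concrete, the sketch below traces how the spatial size of a FER2013 input (48×48 grayscale) shrinks through three convolution-plus-pooling stages before reaching the two fully connected layers. The kernel size (3×3), same-padding, 2×2 max pooling, and the filter counts per stage are assumptions for illustration only; the paper specifies just the layer counts, not these hyperparameters.

```python
def conv2d_out(size, kernel=3, stride=1, pad=1):
    """Spatial size of a square feature map after a convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after max pooling."""
    return (size - kernel) // stride + 1

size = 48                  # FER2013 images are 48x48 grayscale
filters = [32, 64, 128]    # hypothetical filter counts for the 3 conv stages
for f in filters:
    size = pool_out(conv2d_out(size))  # conv (same padding) then 2x2 pool

flat = size * size * filters[-1]  # units flattened into the first FC layer
print(size, flat)  # 6 4608 -> two FC layers then map 4608 units to 7 classes
```

With these assumed settings, each pooling stage halves the map (48 → 24 → 12 → 6), so the first fully connected layer would receive 4608 features before the final 7-way emotion output.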
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.