Optimal Selection of Features for Human Emotion Identification from Face Images
Keywords: Facial Emotion Recognition, Optimization Algorithms, Feature Reduction, Particle Swarm Optimization, Genetic Algorithms

Abstract
Facial expressions play a powerful role in human communication, serving as a highly effective non-verbal means of conveying emotions during social interactions. While humans naturally excel at interpreting these emotions, teaching machines to recognize facial expressions is a daunting task. The objective of this study is to develop a system capable of replicating human visual perception by harnessing artificial intelligence techniques to analyze input images. Facial expression recognition systems are finding widespread application in domains such as gaming and the internet, and they significantly enhance the effectiveness of robots in sectors including the military, healthcare, and manufacturing. Nonetheless, the abundance of features produced by image descriptors poses a substantial challenge for facial emotion recognition systems. Despite numerous attempts to reduce feature complexity, the intricate and diverse nature of facial expressions makes the selection of discriminative features a complex undertaking. In this paper, we propose an effective feature selection method designed to identify and choose informative features from high-dimensional data, with the explicit goal of maximizing classification accuracy. Our approach leverages Particle Swarm Optimization (PSO) to discover valuable feature combinations for classification, using the accuracy obtained by K-Nearest Neighbour (KNN) and Linear Discriminant Analysis (LDA) classifiers to evaluate fitness within the PSO algorithm.
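The abstract describes wrapping a classifier's accuracy inside a PSO loop to score candidate feature subsets. The paper's exact implementation and parameters are not given here, so the following is a minimal illustrative sketch of the general technique: a binary PSO with a sigmoid transfer function, using leave-one-out KNN accuracy as the fitness, run on a small synthetic dataset in which only the first three features are informative. All dimensions, swarm parameters, and the synthetic data are assumptions for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 8 features, but only the first 3 carry class information.
n, d, d_inf = 120, 8, 3
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :d_inf] += y[:, None] * 2.0  # shift the informative features by class

def knn_accuracy(mask, X, y, k=5):
    """Fitness: leave-one-out KNN accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    D = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude each sample from its own vote
    idx = np.argsort(D, axis=1)[:, :k]   # k nearest neighbours per sample
    pred = (y[idx].mean(axis=1) > 0.5).astype(int)
    return (pred == y).mean()

# Binary PSO over feature masks (assumed swarm parameters).
n_particles, iters = 12, 20
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.random((n_particles, d)) > 0.5
vel = rng.normal(scale=0.1, size=(n_particles, d))
pbest = pos.copy()
pbest_fit = np.array([knn_accuracy(p, X, y) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
gbest_fit = pbest_fit.max()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, d))
    vel = (w * vel
           + c1 * r1 * (pbest.astype(float) - pos.astype(float))
           + c2 * r2 * (gbest.astype(float) - pos.astype(float)))
    prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer function
    pos = rng.random((n_particles, d)) < prob  # stochastic bit resampling
    fit = np.array([knn_accuracy(p, X, y) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    if fit.max() > gbest_fit:
        gbest, gbest_fit = pos[fit.argmax()].copy(), fit.max()

selected = np.flatnonzero(gbest)
print("selected features:", selected, "LOO accuracy:", round(gbest_fit, 3))
```

In this sketch the PSO's only coupling to the classifier is through the fitness call, so swapping in an LDA-based fitness, as the paper also does, would only require replacing `knn_accuracy`.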
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.