Hybrid CNN-RF Algorithm for Facial Expression Recognition using Different Datasets
Keywords:
Hybrid CNN-RF, FER, MTCNN, Lucy-Richardson, Contrast stretching

Abstract
Facial expressions have gained significant importance in computer vision, as people naturally convey feelings non-verbally through their faces. A human can express many emotions, such as anger, disgust, happiness, fear, sadness, neutrality and surprise, and each emotion has its own set of components. Many researchers have investigated facial emotion recognition using various deep learning techniques. According to existing research, recognized emotions may vary with the datasets used for training, and image characteristics may change due to high shutter speed or real-time video capture; such changes can lead to incorrect results in facial emotion recognition. To overcome this issue, a new hybrid model is proposed for Facial Emotion Recognition (FER). A hybrid model combining a CNN and a Random Forest (RF) classifier with MTCNN is designed for better performance. In this model, data collection is the first step in recognizing and classifying facial expressions. The data are pre-processed to remove unnecessary information from the raw images, and features are extracted with MTCNN (Multi-task Cascaded Convolutional Networks) by segmenting and localizing the face in each image. The hybrid CNN-RF architecture takes the extracted features as input: the CNN performs feature extraction and the RF serves as the classifier. The performance of the proposed architecture varies across datasets, but the overall accuracy, sensitivity, precision and error of FER are 88.5%, 88%, 82% and 11.5%, respectively. Thus, the designed model instantly recognizes the facial expression and classifies gender effectively.
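The hybrid idea described above can be sketched in miniature: convolutional feature extraction followed by a Random Forest classifier in place of a softmax output layer. The sketch below is illustrative only, not the paper's implementation; a single fixed edge-detection kernel stands in for the trained CNN backbone (and for the MTCNN stage), and the "faces" are synthetic arrays, so the example runs without a deep-learning framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def conv_features(img, kernel):
    """Valid 2-D convolution + ReLU + flatten: a toy stand-in for the
    CNN feature-extraction stage of the hybrid model."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0).ravel()

rng = np.random.default_rng(0)
# Vertical-edge kernel as a placeholder for learned CNN filters.
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)

# Synthetic 16x16 "face" patches for two expression classes
# (e.g. 0 = neutral, 1 = happy), with class-dependent structure.
X_imgs = rng.normal(size=(40, 16, 16))
y = np.repeat([0, 1], 20)
X_imgs[y == 1, 4:12, 4:12] += 2.0

# Extracted feature vectors feed the RF classifier, which replaces
# the CNN's usual fully-connected softmax head.
X = np.stack([conv_features(im, kernel) for im in X_imgs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

In the paper's full pipeline the same split of roles applies: the CNN supplies a learned feature representation, while the RF handles the final decision, which can be more robust than a softmax layer when training data are limited.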
License
![Creative Commons License](http://i.creativecommons.org/l/by-sa/4.0/88x31.png)
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.