Real-Time Driver Sentiment Analysis Using Hybrid Deep Learning Algorithm
Keywords:
Deep neural networks, facial emotion recognition, driver sentiment analysis, road accidents, emotions, FER dataset
Abstract
Every year, over 1.3 million people are killed in traffic accidents around the world, and the vast majority of these accidents are caused by human error, which has many contributing factors. One common cause is a driver who is emotionally distressed behind the wheel, which can result in poor driving, risky manoeuvres, or, in the worst cases, crashes. Among the various solutions that can help prevent such manoeuvres and accidents, the most effective is to keep the driver alert and in a safe driving condition, and that is the focus of this work. In the proposed framework, the system detects the driver's face in the current frame at regular intervals and recognises the driver's emotions: it captures a photograph of the driver with an in-car camera and applies image-processing techniques to extract indicators of different emotions. The authors fine-tune a typical pre-trained deep convolutional neural network (CNN) on facial-expression data to extract features from the captured face image for expression identification. The authors utilise the following architectures in this work: VGG16, AlexNet, and VGG19. Among these, VGG16 is a widely used and straightforward CNN architecture originally developed for ImageNet, a large visual database used in visual object recognition research, with a reported accuracy of 96.63%. Furthermore, a pre-trained version of VGG19 can likewise be loaded.
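To make the pipeline described above concrete, the following is a minimal sketch, not the authors' exact method: it fine-tunes an ImageNet-pretrained VGG16 backbone for facial emotion classification and runs it on faces cropped from camera frames, assuming TensorFlow/Keras and OpenCV. The emotion labels, 224x224 input size, Haar-cascade face detector, and classification head are illustrative assumptions rather than the configuration reported in the paper.

import cv2
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Assumed FER-style emotion labels; the paper's exact classes may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def build_model(num_classes=len(EMOTIONS)):
    # ImageNet-pretrained VGG16 backbone; freeze it and train a small head.
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def classify_faces(model, face_cascade, frame):
    # Detect faces in a BGR camera frame and predict an emotion for each.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                                      minNeighbors=5):
        face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        face = cv2.resize(face, (224, 224)).astype("float32")
        probs = model.predict(preprocess_input(face[np.newaxis]), verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results

if __name__ == "__main__":
    model = build_model()      # in practice, model.fit(...) on FER images comes first
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # in-car camera, sampled at regular intervals
    ok, frame = cap.read()
    if ok:
        print(classify_faces(model, cascade, frame))
    cap.release()

Freezing the backbone and training only the new head is the usual first stage of fine-tuning; selected convolutional blocks can be unfrozen afterwards. The same skeleton applies to VGG19 by swapping in keras.applications.VGG19, whereas AlexNet is not shipped with Keras and would need a custom definition.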