Research on Intelligent Vehicle Driving Behaviour Analysis and Driver State Evaluation Based on Emotion Recognition

Authors

  • Chaoyang Zhu Institute for Social Innovation and Public Culture, Communication University of China, Beijing, 100024, China

Keywords:

Intelligent Driving, Autonomous Driving, Optimization, Classification, Subset Model, Spider Swarm

Abstract

Intelligent driving, also known as autonomous driving or self-driving, refers to the technology and systems that enable vehicles to operate without direct human intervention. It presents several significant challenges that must be addressed to ensure the safe and widespread adoption of this transformative technology. This paper presents a novel approach, Optimal Subset Spider Monkey Swarm Optimization (OsSMSO), for behavior analysis in intelligent driving with emotional intelligence (EI). The primary goal of OsSMSO is to identify the optimal subset of driving behaviors that can be effectively enhanced by integrating emotional intelligence into the decision-making process of intelligent vehicles. The OsSMSO algorithm employs spider monkey swarm optimization as its underlying optimization technique, with each spider monkey representing a candidate subset of driving behaviors influenced by emotional intelligence. Emotional intelligence models are integrated into the evaluation process to assess the impact of emotions such as stress, fatigue, happiness, and anger on driving behavior. Through multiple runs of OsSMSO, the most effective combinations of driving behaviors are identified, considering factors such as safety, efficiency, and comfort. The proposed approach is compared with traditional models such as Support Vector Machine (SVM) and Random Forest, and the results demonstrate its superiority in classification accuracy, precision, recall, and F1-score. The findings highlight the significance of integrating emotional intelligence features into intelligent vehicle systems, providing valuable insights for designing emotionally aware autonomous vehicles that offer safer and more enjoyable driving experiences. Further validation and experimentation on diverse datasets and driving scenarios will be essential to establish the generalizability and effectiveness of the OsSMSO algorithm.
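The abstract describes each spider monkey as a candidate subset of driving behaviors evaluated against a fitness criterion. The following is a minimal, illustrative sketch of that idea, not the paper's actual algorithm: behavior names, per-behavior utility weights, the size penalty, and the leader-following/mutation probabilities are all assumptions chosen for demonstration.

```python
import random

# Hypothetical driving-behavior features (names are illustrative, not from the paper)
BEHAVIORS = ["lane_keeping", "speed_control", "braking", "signalling",
             "following_distance", "overtaking"]

def fitness(subset, weights):
    """Toy fitness: weighted sum of selected behaviors minus a size penalty,
    standing in for the paper's safety/efficiency/comfort evaluation."""
    score = sum(w for bit, w in zip(subset, weights) if bit)
    return score - 0.1 * sum(subset)

def osmso_sketch(weights, n_monkeys=20, n_iters=50, seed=0):
    """Simplified binary swarm search: each 'spider monkey' is a 0/1 mask over
    behaviors; monkeys probabilistically copy bits from the current global
    leader and occasionally flip a bit for exploration."""
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in weights] for _ in range(n_monkeys)]
    best = max(swarm, key=lambda s: fitness(s, weights))
    for _ in range(n_iters):
        for monkey in swarm:
            for i in range(len(weights)):
                if rng.random() < 0.3:      # move toward the global leader
                    monkey[i] = best[i]
                elif rng.random() < 0.1:    # random perturbation (exploration)
                    monkey[i] = 1 - monkey[i]
        best = max(swarm + [best], key=lambda s: list.copy(s) and fitness(s, weights))
        best = list(best)                   # freeze a copy of the leader
    return best, fitness(best, weights)

weights = [0.9, 0.8, 0.7, 0.2, 0.6, 0.1]   # illustrative per-behavior utilities
subset, score = osmso_sketch(weights)
print([b for b, keep in zip(BEHAVIORS, subset) if keep], round(score, 2))
```

Running the sketch several times with different seeds (as the abstract's "multiple runs" suggests) and keeping the highest-fitness subset would approximate the optimal-subset selection step; in the paper this fitness would instead reflect emotion-aware safety, efficiency, and comfort measures.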



Published

30.11.2023

How to Cite

Zhu, C. (2023). Research on Intelligent Vehicle Driving Behaviour Analysis and Driver State Evaluation Based on Emotion Recognition. International Journal of Intelligent Systems and Applications in Engineering, 12(6s), 264–280. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/3976

Issue

12(6s)

Section

Research Article