Revolutionizing Human-Robot Interaction (HRI): Multimodal Intelligent Robotic System for Responsive Collaboration

Authors

  • Babu G., Professor, Department of Biomedical Engineering, SRM Easwari Engineering College, Chennai
  • E. Sathiyanarayanan, Assistant Professor, Department of Electronics and Communication Engineering, Madanapalle Institute of Technology & Science, Madanapalle, Andhra Pradesh
  • S. Parasuraman, Professor, Department of Electronics and Communication Engineering, Karpaga Vinayaga College of Engineering and Technology
  • Balamurugan D., Associate Professor, Department of Computer Science and Engineering, Sona College of Technology, Salem, Tamil Nadu, India
  • Anitha Jaganathan, Assistant Professor, Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai

Keywords

multimodal, human-robot interaction, robotics, communication, sensors, multimodal inputs

Abstract

As robotics has advanced, human-robot interaction (HRI) has become crucial for providing an optimal user experience, reducing tedious activities, and increasing public acceptance of robots. A central aspect of this investigation is the development of context-aware robotic systems that can dynamically adapt to varying environmental conditions and user contexts. By incorporating real-time adaptability into the robotic framework, the research aims to create a more responsive and intuitive human-robot collaboration experience. Advancing robotics therefore requires innovative HRI strategies, with particular emphasis on more natural and adaptable modes of interaction. Multimodal HRI, a recently emerged methodology, enables individuals to engage with robots through diverse modalities, including voice, images, text, eye movement, touch, and even bio-signals such as EEG and ECG. This approach marks a significant shift in HRI paradigms, offering a versatile framework for enhanced communication between humans and robots. In this paper, a Multi-Modal Intelligent Robotic System (MIRS) is proposed, comprising several distinct modules. Drawing on sensors such as image, sound, and depth, these modules can operate independently or collaboratively to facilitate efficient interaction between humans and robots. Three key components are identified and implemented in this research: object location and posture estimation, information extraction, and gesture analysis with eye tracking. Experimental evaluations were conducted to gauge the performance of these interaction interfaces, and the findings underscore the effectiveness of the proposed approach.
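The abstract describes MIRS as a set of sensor-driven modules (image, sound, depth) that can run independently or be combined to interpret the user's intent. The paper does not publish code, so the following is only a minimal, hypothetical Python sketch of such a modular architecture: the module names (GestureModule, GazeModule), the Intent structure, and the summed-confidence fusion rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a modular multimodal HRI pipeline (not the
# authors' code): each perception module interprets its own sensor
# stream and emits an intent estimate; a manager fuses the estimates.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class Intent:
    label: str         # e.g. "pick_object"
    confidence: float  # 0.0 .. 1.0
    source: str        # which modality produced this estimate


class PerceptionModule(ABC):
    """One modality (gesture, gaze, speech, ...); can run independently."""

    @abstractmethod
    def process(self, observation: dict) -> Optional[Intent]:
        ...


class GestureModule(PerceptionModule):
    def process(self, observation: dict) -> Optional[Intent]:
        # Placeholder: a real module would classify hand pose from
        # image/depth data instead of reading a precomputed field.
        if observation.get("gesture") == "point":
            return Intent("pick_object", 0.8, "gesture")
        return None


class GazeModule(PerceptionModule):
    def process(self, observation: dict) -> Optional[Intent]:
        # Placeholder for an eye-tracking module.
        if observation.get("gaze_on_target", False):
            return Intent("pick_object", 0.6, "gaze")
        return None


class InteractionManager:
    """Fuses per-modality intents; here, by summed confidence per label."""

    def __init__(self, modules: list[PerceptionModule]):
        self.modules = modules

    def decide(self, observation: dict) -> Optional[Intent]:
        scores: dict[str, float] = {}
        for module in self.modules:
            intent = module.process(observation)
            if intent is not None:
                scores[intent.label] = scores.get(intent.label, 0.0) + intent.confidence
        if not scores:
            return None
        label = max(scores, key=scores.get)
        return Intent(label, min(scores[label], 1.0), "fused")


if __name__ == "__main__":
    manager = InteractionManager([GestureModule(), GazeModule()])
    obs = {"gesture": "point", "gaze_on_target": True}
    print(manager.decide(obs))
    # Intent(label='pick_object', confidence=1.0, source='fused')
```

Summed-confidence fusion is only one possible scheme; since the abstract notes the modules may also act independently, a real system would fall back to a single modality when the others produce no estimate.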

Published

07.02.2024

How to Cite

G, B., Sathiyanarayanan, E., Parasuraman, S., D, B., & Jaganathan, A. (2024). Revolutionizing Human-Robot Interaction (HRI): Multimodal Intelligent Robotic System for Responsive Collaboration. International Journal of Intelligent Systems and Applications in Engineering, 12(15s), 47–54. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4713

Issue

Vol. 12 No. 15s (2024)

Section

Research Article
