Retracted

Authors

  • Retracted

Abstract

Precise and continuous monitoring of vital signs is essential for clinical decision-making, prompt intervention, and patient management. Contact-based approaches and wearable devices provide reliable readings; nonetheless, they encounter limitations in contexts such as newborn care and burn units, as well as problems relating to discomfort, maintenance costs, and adherence. Noncontact systems, including vision-based and Radio-Frequency (RF) methodologies, offer a promising alternative. These technologies provide vital sign monitoring without physical contact and are attracting increasing interest owing to advances in sensor technology, computer vision, and machine learning, alongside the rising need for remote healthcare solutions. Vision-based systems excel at accurate localization but are affected by lighting conditions, while RF systems are resilient to environmental variables but require subject participation. The distinct attributes of vision-based and RF-based approaches have spurred increasing interest in multi-sensory vital sign detection since 2020. This study offers a thorough summary of advances in vision and RF modalities, together with contemporary multi-sensory techniques, to encourage researchers from multiple disciplines to engage in multimodal fusion research on contactless vital sign detection. We delineate measurement principles, contrast single-modality and multimodal systems, and examine public datasets, assessment criteria, and state-of-the-art algorithms. This study identifies the complementary capabilities of vision and RF systems, addressing significant gaps in multimodal research and outlining future prospects from both practical and technological perspectives.
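The measurement principle shared by both modalities can be illustrated with a minimal sketch: recover a quasi-periodic one-dimensional signal (a skin-color trace in vision-based rPPG, a chest-displacement trace in RF sensing), band-pass it to the physiological band, and take the dominant spectral frequency as the rate estimate. The sketch below uses a synthetic signal and assumed parameter values (sampling rate, band edges); it is not code from the article.

```python
# Illustrative sketch only: dominant-frequency heart-rate estimation from a
# quasi-periodic trace, as used in both camera-based and radar-based pipelines.
# The input signal is synthetic; sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                          # assumed frame/sample rate (Hz)
t = np.arange(0, 20, 1 / fs)       # 20 s of samples
true_hr_hz = 1.2                   # synthetic 72 bpm pulse
signal = np.sin(2 * np.pi * true_hr_hz * t) + 0.5 * np.random.randn(t.size)

# Band-pass to a plausible heart-rate band (0.7-3 Hz, i.e. 42-180 bpm).
b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, signal)

# Dominant spectral peak -> heart-rate estimate in beats per minute.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, d=1 / fs)
hr_bpm = 60.0 * freqs[np.argmax(spectrum)]
print(f"Estimated heart rate: {hr_bpm:.1f} bpm")
```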

Author Biography

Retracted

Published

30.10.2024

How to Cite

Retracted. (2024). Retracted. International Journal of Intelligent Systems and Applications in Engineering, 12(4), 5634–. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/7488

Issue

Section

Research Article