An IoT Machine Learning Approach for Visually Impaired People Walking Indoors and Outdoors

Authors

  • V. S. Saranya, Assistant Professor, Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur Campus, India
  • Vijaya Krishna Sonthi, Assistant Professor, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
  • Prasanthi Boyapati, Assistant Professor, Department of Computer Science and Engineering, SRM University, Andhra Pradesh, India
  • Boddu L. V. Siva Rama Krishna, Assistant Professor, Department of Computer Science and Engineering, SRM University, Andhra Pradesh, India
  • Ganesh Naidu Ummadisetti, Assistant Professor, Department of CSBS, B V Raju Institute of Technology, Narsapur, Medak, Telangana, India
  • P. V. Naresh, Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering and Technology, Maisammaguda, Hyderabad, Telangana, India

Keywords:

Machine Learning, Object Detection, YOLO, Visually Impaired People

Abstract

This article describes the architecture and system design of a solution that helps blind people navigate freely both indoors, such as within the home, and outdoors. The proposed system combines IoT technology with emerging machine learning techniques to give a cane high-tech functionality that allows visually impaired navigators to walk independently. It also includes a mobile application to safeguard visually impaired persons and to allow guardians to monitor them. The system proposed in this study is intended to detect and classify any obstacle within a defined distance using machine learning. To this end, an indoor and outdoor detection architecture based on YOLO v3 is implemented, supported by a multi-layer perceptron (MLP) neural network; YOLO v3 and MLP are crucial to the accuracy of detection and classification.
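To make the detection step concrete, below is a minimal Python sketch of running a pre-trained YOLO v3 detector with OpenCV's DNN module and gating announcements by a range reading, in the spirit of the cane described in the abstract. The file names (yolov3.cfg, yolov3.weights, coco.names), the thresholds, the alert distance, and the read_distance_cm()/announce() helpers are illustrative assumptions, not details taken from the published system.

    import cv2
    import numpy as np

    CONF_THRESHOLD = 0.5   # assumed detection confidence cut-off
    NMS_THRESHOLD = 0.4    # assumed non-maximum-suppression overlap cut-off

    # Pre-trained Darknet YOLO v3 files (assumed to be available locally).
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    class_names = open("coco.names").read().strip().split("\n")

    def detect_obstacles(frame):
        """Run YOLO v3 on one camera frame; return (label, confidence, box) triples."""
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(net.getUnconnectedOutLayersNames())

        h, w = frame.shape[:2]
        boxes, confidences, class_ids = [], [], []
        for output in outputs:
            for row in output:                  # row = [cx, cy, bw, bh, objness, class scores...]
                scores = row[5:]
                class_id = int(np.argmax(scores))
                confidence = float(scores[class_id])
                if confidence < CONF_THRESHOLD:
                    continue
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)

        # Suppress overlapping boxes so each obstacle is reported once.
        keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
        return [(class_names[class_ids[i]], confidences[i], boxes[i])
                for i in np.array(keep).flatten()]

    # Hypothetical fusion with the cane's range sensor: announce a detected
    # obstacle only when it lies within the defined alert distance.
    # if read_distance_cm() < 150:              # alert radius is an assumption
    #     for label, conf, box in detect_obstacles(frame):
    #         announce(label)                   # e.g., via the companion mobile app

The MLP stage that supports classification in the published framework would consume such detections downstream; it is omitted from this sketch.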


References

World Health Organization, "World Report on Vision," 2019. [Online]. Available: https://www.who.int/publications/i/item/9789241516570 (accessed 7 May 2022).

R. Bourne, J. D. Steinmetz, S. Flaxman, P. S. Briant, H. R. Taylor, R. Casson, M. Bikbov, M. Bottone, T. Braithwaite, A. M. Bron, et al., "Trends in prevalence of blindness and distance and near vision impairment over 30 years: An analysis for the Global Burden of Disease Study," Lancet Glob. Health, vol. 9, pp. e130–e143, 2021.


A. S. Al-Fahoum, H. B. Al-Hmoud, and A. A. Al-Fraihat, "A smart infrared microcontroller-based blind guidance system," Active and Passive Electronic Components, vol. 2013, Art. no. 726480, 2013.

B. Mustapha, A. Zayegh, and R. K. Begg, "Ultrasonic and infrared sensors performance in a wireless obstacle detection system," in Proc. 1st Int. Conf. Artificial Intelligence, Modelling and Simulation, Dec. 2013, pp. 487–492.

D. Dakopoulos and N. G. Bourbakis, “Wearable obstacle avoidance electronic travel aids for blind: A survey,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 40, no. 1, pp. 25–35, Jan. 2010.

W. Elmannai and K. Elleithy, “Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions,” Sensors, vol. 17, no. 3, pp. 565–606, Mar. 2017.

A. Bhowmick and S. M. Hazarika, “An insight into assistive technology for the visually impaired and blind people: State-of-the-art and future trends,” J. Multimodal User Interfaces, vol. 11, no. 2, pp. 149–172, Jan. 2017.

H. Fernandes, P. Costa, V. Filipe, H. Paredes, and J. Barroso, "A review of assistive spatial orientation and navigation technologies for the visually impaired," Univ. Access Inf. Soc., pp. 1–14, Aug. 2017.

H. Zhang and C. Ye, “An indoor wayfinding system based on geometric features aided graph SLAM for the visually impaired,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 25, no. 9, pp. 1592–1604, Sep. 2017.

R. Jafri, R. L. Campos, S. A. Ali, and H. R. Arabnia, “Visual and infrared sensor data-based obstacle detection for the visually impaired using the Google project tango tablet development kit and the unity engine,” IEEE Access, vol. 6, pp. 443–454, 2018.

I. Ulrich and J. Borenstein, “The GuideCane-applying mobile robot technologies to assist the visually impaired,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 31, no. 2, pp. 131–136, Mar. 2001.

F. Penizzotto, E. Slawinski, and V. Mut, “Laser radar based autonomous mobile robot guidance system for olive groves navigation,” IEEE Latin Amer. Trans., vol. 13, no. 5, pp. 1303–1312, May 2015.

Y. H. Lee and G. Medioni, “Wearable RGBD indoor navigation system for the blind,” in Proc. ECCV Workshops, Zürich, Switzerland, 2014, pp. 493–508.

R. Tirlangi and C. Sankar, "Electronic travel aid for visually impaired people based on computer vision and sensor nodes using Raspberry Pi," Indian J. Sci. Technol., vol. 9, 2016, doi: 10.17485/ijst/2015/v8i1/106850.

P. Ackland, S. Resnikoff, and R. Bourne, "World blindness and visual impairment: Despite many successes, the problem is growing," Community Eye Health, vol. 30, p. 71, 2017.

A. Kumar, R. Patra, M. Mahadevappa, J. Mukhopadhyay, and A. Majumdar, "An electronic travel aid for navigation of visually impaired persons," in Proc. 3rd Int. Conf. Communication Systems and Networks (COMSNETS), 2011, pp. 1–5, doi: 10.1109/COMSNETS.2011.5716517.

A. Haigh, D. J. Brown, P. Meijer, and M. J. Proulx, "How well do you see what you hear? The acuity of visual-to-auditory sensory substitution," Front. Psychol., vol. 4, 2013, doi: 10.3389/fpsyg.2013.00330.

V. Landford, "Electronic travel aids (ETAs), past and present," presentation slides, TAER, Apr. 2004.

A. Awada, Y. B. Issa, C. Ghannam, J. Tekli, and R. Chbeir, "Towards digital image accessibility for blind users via vibrating touch screen: A feasibility test protocol," in Proc. 8th Int. Conf. Signal Image Technology and Internet Based Systems (SITIS), 2012, pp. 547–554.

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," arXiv:1606.00915v2, 2017.

S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, 2015, pp. 91–99.

J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, real-time object detection," arXiv:1506.02640, 2015.


Published

30.11.2023

How to Cite

Saranya, V. S., Sonthi, V. K., Boyapati, P., Krishna, B. L. V. S. R., Ummadisetti, G. N., & Naresh, P. V. (2023). An IoT Machine Learning Approach for Visually Impaired People Walking Indoors and Outdoors. International Journal of Intelligent Systems and Applications in Engineering, 12(6s), 121–129. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/3964

Issue

Vol. 12 No. 6s (2023)

Section

Research Article