Enhanced Blind Navigation using YOLO and Sensor Fusion
Abstract
Blind navigation remains a significant challenge for visually impaired individuals, particularly in complex and dynamic environments such as crowded streets, public transport, and indoor spaces. Traditional mobility aids such as canes and guide dogs provide assistance but struggle to detect fast-moving obstacles or to recognize objects beyond immediate reach. Advances in artificial intelligence (AI) and sensor technologies create an opportunity to develop smarter, real-time navigation solutions that enhance the mobility and independence of visually impaired individuals.
Safe mobility for visually impaired individuals requires accurate, real-time obstacle detection. This paper presents an assistive system that integrates the You Only Look Once (YOLO) object detection model with ultrasonic sensors and voice feedback for efficient blind navigation. The approach improves object detection accuracy and delivers real-time obstacle warnings. Experiments demonstrate the system's effectiveness in detecting objects at varying distances and guiding users safely through dynamic environments.
By integrating AI-driven object detection with sensor-based obstacle detection, this assistive technology aims to give visually impaired individuals greater autonomy and safety during navigation, bridging the gap between traditional mobility aids and modern intelligent systems.
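The fusion step described above can be sketched as follows. This is a minimal, illustrative sketch only: the function name `fuse_warning`, the detection format, and the 100 cm alert threshold are assumptions, not details from the paper; the YOLO detections and ultrasonic reading are stubbed rather than taken from live hardware.

```python
# Hedged sketch: combining a YOLO-style detection list with an
# ultrasonic distance reading to produce a voice-ready warning.
# All names and thresholds here are illustrative assumptions.

def fuse_warning(detections, distance_cm, near_cm=100.0):
    """Return a warning string when a detected object is within range.

    detections: list of (label, confidence) pairs from the detector.
    distance_cm: range reading from the ultrasonic sensor, in cm.
    near_cm: alert threshold; beyond this, no warning is issued.
    """
    if distance_cm > near_cm or not detections:
        return None
    # Announce the most confident detection as the likely obstacle.
    label, _conf = max(detections, key=lambda d: d[1])
    return f"{label} ahead, about {distance_cm / 100:.1f} metres"

# Example: a person detected while the sensor reads 80 cm.
print(fuse_warning([("person", 0.92), ("chair", 0.40)], 80.0))
```

In a real deployment the returned string would be passed to a text-to-speech engine, and the camera and ultrasonic readings would need to be time-aligned before fusion.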