DeepNAVI: A deep learning based smartphone navigation assistant for people with visual impairments
Peer reviewed, Journal article
Published version
Permanent link
https://hdl.handle.net/11250/3060246
Publication date
2022
Original version
https://doi.org/10.1016/j.eswa.2022.118720
Abstract
Navigation assistance is an active research area, one aim of which is to foster independent living for people with visual impairments. Although many navigation assistants use advanced technologies and methods, we found that they do not explicitly address two essential requirements of a navigation assistant: portability and convenience. A navigation assistant for the visually impaired must be portable and convenient to use without extensive training. Moreover, some navigation assistants do not give users detailed information about the types of obstacles detected, which is essential for making informed decisions during real-time navigation. To address these gaps, we propose DeepNAVI, a smartphone-based navigation assistant that leverages deep learning. Besides identifying the types of obstacles present, our system reports their position, distance from the user, and motion status, together with scene information. All of this information is delivered to users as audio without compromising portability or convenience. With a small model size and rapid inference time, our navigation assistant can be deployed on a portable device such as a smartphone and operate seamlessly in real time. We conducted a pilot test with a user to assess the usefulness and practicality of the system. The results indicate that our system has the potential to be a practical and useful navigation assistant for the visually impaired.
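As a rough illustration of the kind of on-device pipeline the abstract describes, the sketch below runs a TensorFlow Lite object detector on a single camera frame and speaks each obstacle's type, coarse position, and a proximity estimate. It is a minimal sketch, not the authors' implementation: the model file (detector.tflite), the label list, the SSD-style output tensor layout, and the box-size proximity heuristic are all assumptions made for illustration, and pyttsx3 stands in for a phone's text-to-speech service.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter
import pyttsx3

LABELS = ["person", "car", "door", "chair"]  # hypothetical label set

def describe_obstacles(image_path, model_path="detector.tflite", threshold=0.5):
    # Load a (hypothetical) quantized SSD-style detector.
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    _, h, w, _ = inp["shape"]

    # Resize the camera frame to the detector's expected input size
    # (a uint8-input model is assumed here).
    frame = Image.open(image_path).convert("RGB").resize((w, h))
    interpreter.set_tensor(inp["index"],
                           np.expand_dims(np.asarray(frame, dtype=np.uint8), 0))
    interpreter.invoke()

    # Assumed output layout: normalized boxes, class indices, scores.
    out = interpreter.get_output_details()
    boxes = interpreter.get_tensor(out[0]["index"])[0]    # [N, 4]
    classes = interpreter.get_tensor(out[1]["index"])[0]  # [N]
    scores = interpreter.get_tensor(out[2]["index"])[0]   # [N]

    messages = []
    for box, cls, score in zip(boxes, classes, scores):
        if score < threshold:
            continue
        ymin, xmin, ymax, xmax = box
        cx = (xmin + xmax) / 2
        position = "left" if cx < 0.33 else "right" if cx > 0.66 else "ahead"
        # Crude proximity proxy: a taller bounding box is assumed closer.
        nearness = "close" if (ymax - ymin) > 0.5 else "farther away"
        messages.append(f"{LABELS[int(cls)]} {position}, {nearness}")
    return messages

if __name__ == "__main__":
    engine = pyttsx3.init()  # desktop stand-in for on-device text-to-speech
    for msg in describe_obstacles("frame.jpg"):
        engine.say(msg)
    engine.runAndWait()

In the actual system this loop would run continuously on live camera frames, and the distance and motion-status estimates described in the abstract would come from the paper's own models rather than the box-size heuristic used above.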