Visually impaired people are often challenged in the efficient navigation of complex environments, and helping them navigate intuitively is not a trivial task. Cognitive maps derived from visual cues play a pivotal role in navigation. In this dissertation, we present a sight-to-sound human–machine interface (STS-HMI), a novel machine vision guidance system that enables visually impaired people to navigate with instantaneous and intuitive responses. The proposed system extracts visual context from scenes and converts it into binaural acoustic cues from which users can establish cognitive maps. The development of the STS-HMI system encompasses four major components: (i) a machine vision–based indoor localization system that uses augmented reality (AR) markers to locate the user in GPS-denied environments such as building interiors; (ii) a feature-based localization and mapping system built on the simultaneous localization and mapping (SLAM) algorithm, which tracks the user's movement when AR markers are not visible; (iii) a path-planning system that plots a course to the destination while avoiding obstacles; and (iv) an acoustic human–machine interface that guides the user along complex navigation courses. Throughout the research and development described in this dissertation, each component is analyzed for optimal performance. The integrated navigation algorithms are then used to evaluate the STS-HMI system in a complicated environment with difficult navigation paths. The experimental results confirm that the STS-HMI system improves the mobility of visually impaired people with minimal effort and high accuracy.
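As a minimal illustration of how components (iii) and (iv) might fit together, the sketch below implements grid-based A* path planning and converts the heading toward the next waypoint into a simple left/right binaural intensity cue. All function names, the occupancy-grid representation, and the panning rule are illustrative assumptions for this sketch, not the dissertation's actual implementation.

```python
# Hypothetical sketch of components (iii) and (iv): grid-based A* path
# planning and a simple binaural intensity cue. Not the STS-HMI code.
import heapq
import math

def astar(grid, start, goal):
    """A* search over a 2D occupancy grid (0 = free cell, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    def h(a, b):  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(h(start, goal), 0, start, None)]  # (f, g, node, parent)
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct path back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), math.inf):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nr, nc), goal), ng, (nr, nc), node))
    return None  # no obstacle-free path exists

def binaural_gains(user_heading, bearing_to_waypoint):
    """Map the signed heading error (radians) to left/right channel gains."""
    error = (bearing_to_waypoint - user_heading + math.pi) % (2 * math.pi) - math.pi
    pan = max(-1.0, min(1.0, error / (math.pi / 2)))  # clamp to +/- 90 degrees
    return (1 - pan) / 2, (1 + pan) / 2               # (left_gain, right_gain)

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))       # path that detours around the wall
    print(binaural_gains(0.0, math.pi / 4))  # waypoint 45 degrees to the right
```

In this sketch the planner returns a list of grid cells, and the cue generator raises the right-channel gain when the next waypoint lies to the user's right; a full system would instead drive spatialized audio (e.g., HRTF-based rendering) from the planned path.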