Search results
(1 - 2 of 2)
- Title
- VIDEO FEATURE DETECTION AND MATCHING FOR STRUCTURE FROM MOTION SYSTEM
- Creator
- Yang, Guojun
- Date
- 2015, 2015-05
- Description
-
With the improvements in sensor technologies and image processing algorithms, computer vision has become a major tool for robots to recognize and gauge their surroundings. For instance, the Kinect sensor can be used as an excellent depth camera for indoor navigation. However, there exist situations that require recognition and spatial interpretation of the environment using limited hardware resources. The Kinect is not suitable for outdoor use, while LIDAR is too large and expensive to be installed on an autonomous miniature surveillance drone. Therefore, a single camera is the only feasible option for many embedded applications. Performing SfM (structure from motion) with a single camera is challenging due to the complexity of 3D mapping. Feature detection and matching is the very first step in performing SfM. To be more specific, matched feature points are used as anchors across images or frames. Without such matched feature points, most SfM methods will not be able to generate reliable results. Moreover, most feature detectors and matching strategies are designed for SfM applications that use still images as inputs rather than frames from videos. Therefore, this thesis discusses how to detect feature points from video and match them effectively. Image projection and SfM fundamentals are introduced in this thesis as well.
M.S. in Electrical Engineering, May 2015
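The matching step this abstract describes, where matched feature points serve as anchors across images or frames, is commonly done by nearest-neighbor descriptor matching with Lowe's ratio test. The sketch below is illustrative only; the toy descriptors and the 0.75 threshold are assumptions, not details taken from the thesis:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping a match only when it passes Lowe's ratio test (the best distance
    must be clearly smaller than the second best). Returns (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # L2 distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:  # keep only unambiguous matches
            matches.append((i, int(best)))
    return matches

# Toy example: rows 0 and 1 of each set correspond; the third row of b is a decoy.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.9, 0.1], [0.1, 0.9], [5.0, 5.0]])
print(match_descriptors(a, b))  # [(0, 0), (1, 1)]
```

In practice the descriptors would come from a detector such as SIFT or ORB, and the surviving pairs would feed the geometric stages of SfM (e.g., essential-matrix estimation).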
- Title
- MACHINE VISION NAVIGATION SYSTEM FOR VISUALLY IMPAIRED PEOPLE
- Creator
- Yang, Guojun
- Date
- 2021
- Description
-
Visually impaired people are often challenged in the efficient navigation of complex environments. Moreover, helping them navigate intuitively is not a trivial task. Cognitive maps derived from visual cues play a pivotal role in navigation. In this dissertation, we present a sight-to-sound human–machine interface (STS-HMI), a novel machine vision guidance system that enables visually impaired people to navigate with instantaneous and intuitive responses. The proposed system extracts visual context from scenes and converts it into binaural acoustic cues from which users can establish cognitive maps. The development of the proposed STS-HMI system encompasses four major components: (i) a machine vision–based indoor localization system that uses augmented reality (AR) markers to locate the user in GPS-denied environments (e.g., indoors); (ii) a feature-based detection and localization system built on the simultaneous localization and mapping (SLAM) algorithm, which tracks the mobility of users when AR markers are not visible; (iii) a path-planning system that creates a course towards a destination while avoiding obstacles; and (iv) an acoustic human–machine interface to guide users along complex navigation courses. Throughout the research and development of this dissertation, each component is analyzed for optimal performance. The navigation algorithms are used to evaluate the performance of the STS-HMI system in a complicated environment with difficult navigation paths. The experimental results confirm that the STS-HMI system advances the mobility of visually impaired people with minimal effort and high accuracy.
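Component (iii), a path planner that reaches a destination while avoiding obstacles, can be sketched as breadth-first search over a 2D occupancy grid. The grid map below is an illustrative assumption, not the dissertation's actual map representation or algorithm:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).
    Returns the shortest list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# Toy map: a wall down the middle with one gap forces a detour.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (0, 2)))
```

BFS guarantees a shortest path in steps on a uniform grid; a deployed system would more likely use A* with a distance heuristic, and would then convert the waypoints into the binaural acoustic cues described above.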