
Computer Vision for Mobile Robot Navigation, Localization and Mapping

Chair: Lehrstuhl RST

Supervisors: Christoph Rösmann, Felipe Posada, Frank Hoffmann

Start date: 6.10.2014

Maximum number of participants: 12

Description: In order to perform complex tasks, robots require a deeper spatial and semantic understanding of their environment. In particular, for tasks that require collaboration with humans, robots depend on a semantic representation of space that they share with humans. This includes the classification of places and semantic concepts according to their visual appearance, geometry and topology. The project group investigates and harvests the potential of scene recognition and scene labeling of omnidirectional views for mobile robot navigation, localization and mapping. The objective is to integrate the vision system into the existing ROS navigation stack and its localization and mapping algorithms. The project involves the following subtasks:

- Core functions: Implementation of sonar and laser range sensor navigation, localization and mapping under the ROS framework on the Pioneer 3DX robot platform. Collection of ground-truth range, localization and map data in the RST environment.
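The mapping core can be prototyped independently of ROS. A minimal sketch of a log-odds occupancy grid with an inverse sensor model, the representation also used by ROS mapping packages; the update increments `l_free` and `l_occ` are illustrative placeholders, not tuned values:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal log-odds occupancy grid: each cell stores log(p / (1 - p)),
// so 0 means unknown (p = 0.5).
struct OccupancyGrid {
    int width, height;
    double resolution;            // metres per cell
    std::vector<double> logodds;  // row-major

    OccupancyGrid(int w, int h, double res)
        : width(w), height(h), resolution(res), logodds(w * h, 0.0) {}

    double& at(int x, int y) { return logodds[y * width + x]; }

    double probability(int x, int y) {
        return 1.0 - 1.0 / (1.0 + std::exp(at(x, y)));
    }

    // Integrate one range beam from (x0,y0) to the measured endpoint
    // (x1,y1), in cell coordinates: cells along the ray become more
    // likely free, the endpoint cell more likely occupied.
    void integrateBeam(int x0, int y0, int x1, int y1,
                       double l_free = -0.4, double l_occ = 0.85) {
        int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;
        int x = x0, y = y0;
        while (x != x1 || y != y1) {  // Bresenham line traversal
            at(x, y) += l_free;
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; x += sx; }
            if (e2 <  dx) { err += dx; y += sy; }
        }
        at(x1, y1) += l_occ;          // hit cell
    }
};
```

Repeated beams through the same cells accumulate evidence, so spurious single readings are averaged out over time.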

- Vision-based range sensing: Classification of local free space to mimic and replace conventional range sensor functionality.
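Once free space has been classified per image column, a range reading can be recovered from the lowest obstacle pixel under a flat-floor assumption by back-projecting that row through the camera geometry. A hypothetical sketch; the camera height, focal length, principal point and tilt below are placeholder values, not a calibration of the actual robot:

```cpp
#include <cassert>
#include <cmath>

// Flat-floor back-projection: the image row v of the free-space boundary
// determines the ground distance to the nearest obstacle,
//   d = h / tan(tilt + atan((v - cy) / fy)),
// where h is the camera height [m], fy the focal length [px], cy the
// principal point row and tilt the downward pitch [rad].
// All parameter defaults are illustrative placeholders.
double rowToRange(double v, double h = 0.30, double fy = 500.0,
                  double cy = 240.0, double tilt = 0.3) {
    double ray = tilt + std::atan((v - cy) / fy);  // angle below horizontal
    if (ray <= 0.0) return -1.0;  // ray at or above the horizon: no ground hit
    return h / std::tan(ray);
}
```

Rows lower in the image map to shorter ranges, which is exactly the monotone relationship a laser-style sensor interface needs.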

- Probabilistic sensor model: Conception, implementation and verification of a probabilistic sensor model for localization and mapping within the ROS framework.
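A common starting point for this subtask is the beam-based mixture model from the probabilistic robotics literature: a Gaussian around the expected range, an exponential for unexpectedly short readings, a point mass at maximum range and a uniform noise floor. A hedged sketch; the mixture weights and noise parameters are illustrative, not calibrated:

```cpp
#include <cassert>
#include <cmath>

// Beam-based range sensor model p(z | z_expected) as a four-component
// mixture. All weights and noise parameters below are placeholders that
// would need calibration against recorded sensor data.
struct BeamModel {
    double z_max  = 8.0;   // sensor maximum range [m]
    double sigma  = 0.1;   // hit-noise standard deviation [m]
    double lambda = 1.0;   // short-reading decay rate
    double w_hit = 0.7, w_short = 0.1, w_max = 0.1, w_rand = 0.1;

    double p_hit(double z, double z_exp) const {
        if (z < 0.0 || z > z_max) return 0.0;
        const double kPi = 3.14159265358979323846;
        double d = z - z_exp;
        return std::exp(-d * d / (2.0 * sigma * sigma)) /
               (sigma * std::sqrt(2.0 * kPi));
    }
    double p_short(double z, double z_exp) const {
        if (z < 0.0 || z > z_exp) return 0.0;
        return lambda * std::exp(-lambda * z);
    }
    double p_max(double z) const { return z >= z_max ? 1.0 : 0.0; }
    double p_rand(double z) const {
        return (z >= 0.0 && z < z_max) ? 1.0 / z_max : 0.0;
    }

    // Likelihood of measuring z when the map predicts z_exp.
    double likelihood(double z, double z_exp) const {
        return w_hit * p_hit(z, z_exp) + w_short * p_short(z, z_exp) +
               w_max * p_max(z) + w_rand * p_rand(z);
    }
};
```

For the vision-based range sensor the same structure applies, but the component weights would be re-estimated from the ground-truth data collected in the core-functions subtask.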

- Place and scene recognition: Place recognition based on omnidirectional views with SIFT/SURF and HOG features. Place recognition matches the current view against a set of reference views captured at designated locations in the environment. Analysis of place recognition in terms of sensitivity and robustness w.r.t. pose and illumination.

- Visual homing: Implementation of a visual homing behavior based on omnidirectional views. The objective is to guide the robot towards a reference location with visual servoing. Analysis of homing accuracy and range.
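One classic bearing-only homing rule suited to omnidirectional views is the average landmark vector (ALV) method: with both views aligned to a common compass frame, the difference between the ALVs of the current and the reference view points approximately toward the reference location. A hedged sketch of that rule:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Average landmark vector: the mean of the unit bearing vectors to all
// landmarks visible in an omnidirectional view (bearings in radians).
std::pair<double, double> alv(const std::vector<double>& bearings) {
    double x = 0.0, y = 0.0;
    for (double b : bearings) { x += std::cos(b); y += std::sin(b); }
    size_t n = bearings.empty() ? 1 : bearings.size();
    return {x / n, y / n};
}

// ALV homing rule: assuming both views share a compass frame and see the
// same landmarks, the ALV difference approximates the home direction.
std::pair<double, double> homeVector(const std::vector<double>& current,
                                     const std::vector<double>& reference) {
    auto c = alv(current);
    auto r = alv(reference);
    return {c.first - r.first, c.second - r.second};
}
```

Driving along the home vector and recomputing it each frame yields a simple visual-servoing loop whose accuracy and catchment range can then be measured as the subtask requires.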

- Topological and semantic mapping: Generation of a topological and semantic map of the environment in terms of a graph that represents the spatial relationships among places. The places are augmented with semantic labels obtained from scene recognition.
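The data structure behind such a map can be very small: places as graph nodes carrying a semantic label, edges for traversability, and graph search for route planning. A minimal sketch with hypothetical place labels:

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <vector>

// Topological map: places are nodes carrying a semantic label from scene
// recognition; undirected edges encode traversability between places.
struct TopoMap {
    std::vector<std::string> labels;          // semantic label per place
    std::vector<std::vector<int>> adjacency;  // neighbour lists

    int addPlace(const std::string& label) {
        labels.push_back(label);
        adjacency.emplace_back();
        return static_cast<int>(labels.size()) - 1;
    }
    void connect(int a, int b) {
        adjacency[a].push_back(b);
        adjacency[b].push_back(a);
    }

    // Shortest place-to-place route length by breadth-first search;
    // returns the hop count, or -1 if no route exists.
    int hops(int from, int to) const {
        std::vector<int> dist(labels.size(), -1);
        std::queue<int> q;
        dist[from] = 0;
        q.push(from);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            if (u == to) return dist[u];
            for (int v : adjacency[u])
                if (dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
        }
        return -1;
    }
};
```

Because nodes are places rather than grid cells, such a graph stays compact even for large environments, and the semantic labels make goals like "go to the lab" expressible directly.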

Students are expected to have a background in computer vision and solid programming experience, preferably in C/C++.