Environment perception for autonomous driving

Chair: Institute of Control Theory and Systems Engineering

Supervisors: Manuel Schmidt, Niklas Stannartz

Start: upon consultation

Maximum number of participants: 6

Description: Nowadays, autonomous driving is no longer just a futuristic dream but may become reality in the coming years. The automotive industry is continuously working on new Advanced Driver Assistance Systems (ADAS) that will initially assist the driver in various safety-critical situations and eventually take over the driving task completely. Besides established car manufacturers such as Daimler and Tesla, IT giants like Google in particular are extensively testing their prototype vehicles at the moment.
The basis for autonomous driving is to make the vehicle “see” its environment like a human. This visual sense is provided by environmental sensors such as cameras, radar and, more recently, lidar sensors, which deliver an extremely high-resolution image of the environment even at night or in bad weather, where camera sensors fail. A complementary use of different sensor technologies is therefore mandatory for an intelligent, accident-free autonomous car.
This project group aims at the development of algorithms that enable the vehicle to “see” and “understand” its environment. Possible work packages are:
• Localization and Mapping: To perform autonomous driving tasks, the vehicle has to know exactly where it is in relation to its environment. This usually includes the generation of a map as well as localization within this map. Here, state-of-the-art algorithms from the field of robotics show the most promising results; these can be evaluated using an on-board RTK-GPS system with centimeter precision. Available sensors are camera and lidar.
• Equipment of a second test vehicle with an RTK-GPS system so that tracking algorithms can be evaluated. Furthermore, a synchronization concept for the Nissan Leaf has to be implemented.
• Implementation of a lane marking detection algorithm using RANSAC and/or deep learning. Available sensors are camera and lidar.
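To illustrate the mapping part of the first work package, the sketch below shows one elementary building block: registering lidar hits in a 2D occupancy grid, given a known vehicle pose. This is only a minimal illustration under assumed conventions (grid resolution, pose format, function name); a full localization-and-mapping pipeline would estimate the pose itself, e.g. via scan matching or SLAM.

```python
import numpy as np

def update_occupancy_grid(grid, pose, ranges, angles, resolution=0.1):
    """Mark lidar beam endpoints as occupied in a 2D grid.

    grid       -- 2D numpy array of hit counts (rows = y, cols = x)
    pose       -- (x, y, theta) of the vehicle in the map frame [m, m, rad]
    ranges     -- measured distances per beam [m]
    angles     -- beam angles relative to the vehicle heading [rad]
    resolution -- cell size [m]; an illustrative value, not from the source
    """
    x, y, theta = pose
    # Transform each beam endpoint from the sensor frame into the map frame.
    hx = x + ranges * np.cos(theta + angles)
    hy = y + ranges * np.sin(theta + angles)
    # Discretize to grid indices and count only hits inside the map bounds.
    ix = (hx / resolution).astype(int)
    iy = (hy / resolution).astype(int)
    valid = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
    np.add.at(grid, (iy[valid], ix[valid]), 1)
    return grid

# Example: a vehicle at (1.0, 2.5), heading along +x, sees a wall 2 m ahead.
grid = np.zeros((50, 50))
ranges = np.full(5, 2.0)
angles = np.linspace(-0.1, 0.1, 5)
grid = update_occupancy_grid(grid, (1.0, 2.5, 0.0), ranges, angles)
print(int(grid.sum()))  # → 5 registered hits
```

In a real system this update would be repeated for every pose estimate along the trajectory, and free space along each beam would additionally be traced (e.g. with Bresenham's line algorithm) to distinguish "unknown" from "free" cells.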
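Since the last work package explicitly names RANSAC, the following sketch shows the basic idea for lane-marking detection: repeatedly sample two candidate points, hypothesize the line through them, and keep the hypothesis supported by the most inliers. Function name, parameter values, and the synthetic test data are illustrative assumptions; a real detector would run on points extracted from camera or lidar data.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.1, seed=None):
    """Fit a 2D line to noisy points with RANSAC.

    Samples two points per iteration, forms the line through them, and
    keeps the hypothesis with the most inliers (points closer than
    inlier_tol to the line). Parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # Unit normal of the hypothesized line; distance of all points to it.
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares fit (y = a*x + b) on the inlier set.
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b, best_inliers

# Synthetic lane marking: points along y = 0.5*x + 1 plus a few outliers.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
lane = np.column_stack([x, 0.5 * x + 1.0 + rng.normal(0, 0.02, 40)])
outliers = rng.uniform(0, 10, (8, 2))
pts = np.vstack([lane, outliers])
a, b, inl = ransac_line(pts, seed=1)
print(a, b)  # close to the true slope 0.5 and intercept 1.0
```

The robustness against the injected outliers is the reason RANSAC is popular for lane detection: a plain least-squares fit over all points would be pulled away from the marking by clutter, whereas the consensus step discards it.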