360° Surround View Assistant

Chair: Institute of Control Theory and Systems Engineering

Supervisors: Christian Wissing, Martin Krüger, Franz Albers, Manuel Schmidt

Start date: 01.04.2018

Maximum number of participants: 12

Description: The field of digital image processing has advanced rapidly in recent years. Convolutional neural networks have become the state-of-the-art method for many object detection, segmentation, classification, and regression tasks that take camera images as input. The automotive industry is aware of these advances, and there is a clear trend towards incorporating multiple cameras into the perception systems of automated driving architectures.

This project group aims at the development of a 360° Surround View Assistant.

It includes the following work items:

- Image Retrieval using Simulations:
Using a simulation software, multiple cameras can be placed virtually on the ego-vehicle to obtain data for the development of algorithms.
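
The virtual-camera idea can be sketched with a plain pinhole model (all parameters below are invented for illustration): a camera placed on the ego-vehicle projects 3D points from the vehicle frame into pixel coordinates, which is essentially what a simulator does when it renders a camera image.

```python
import numpy as np

def project(points_vehicle, K, R, t):
    """Project Nx3 points (vehicle frame) into pixels via intrinsics K and pose (R, t)."""
    cam = R @ points_vehicle.T + t.reshape(3, 1)   # vehicle frame -> camera frame
    uvw = K @ cam                                  # perspective projection
    return (uvw[:2] / uvw[2]).T                    # homogeneous -> pixel coordinates

K = np.array([[800.0, 0.0, 320.0],   # example focal length and principal point
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                         # camera axes aligned with the vehicle axes
t = np.zeros(3)                       # camera mounted at the vehicle origin

# A point 10 m straight ahead on the optical axis lands on the principal point.
px = project(np.array([[0.0, 0.0, 10.0]]), K, R, t)
```

Moving `R` and `t` per camera is how a surround setup of several virtual cameras would be placed on the ego-vehicle.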

- Image Stitching:
Stitching of the individual images from all cameras to obtain a surround view.
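
As a minimal sketch of the compositing step, assume two already-rectified neighbouring camera images with a known horizontal overlap (in practice the overlap would come from estimated homographies, and blending would be more sophisticated):

```python
import numpy as np

def stitch_horizontal(left, right, overlap):
    """Place two images side by side, averaging the overlapping columns.
    Toy model: assumes both images are rectified to a common plane and that
    `right` starts `overlap` columns before `left` ends."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl] = left
    out[:, wl:] = right[:, overlap:]
    # blend the shared region to hide the seam
    out[:, wl - overlap:wl] = 0.5 * (left[:, wl - overlap:] + right[:, :overlap])
    return out

left = np.full((4, 6), 1.0)
right = np.full((4, 6), 3.0)
pano = stitch_horizontal(left, right, overlap=2)
```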

- Lane Marker Detection:
Development of algorithms for the detection of lane markers (type, position, course, ...).
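
A toy illustration of marker position detection (real pipelines would use edge detection plus a Hough transform, or a CNN): find bright vertical stripes in a synthetic grayscale road patch by thresholding column means.

```python
import numpy as np

def detect_stripes(img, thresh=0.5):
    """Return (start, end) column ranges whose mean brightness exceeds thresh."""
    bright = img.mean(axis=0) > thresh
    ranges, start = [], None
    for i, b in enumerate(bright):
        if b and start is None:
            start = i                      # stripe begins
        elif not b and start is not None:
            ranges.append((start, i))      # stripe ends
            start = None
    if start is not None:
        ranges.append((start, len(bright)))
    return ranges

road = np.zeros((20, 40))
road[:, 5:8] = 1.0    # left lane marker
road[:, 30:33] = 1.0  # right lane marker
markers = detect_stripes(road)
```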

- Estimation of the Normal Vector of the Road Surface:
Recovery of the orientation of a camera relative to a road surface patch ahead of the ego-vehicle. This information can be used to adapt the calibration of cameras during driving.
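
One way to obtain such a normal, sketched here with synthetic data: given 3D points sampled on the road patch (e.g. from stereo triangulation), fit a plane by least squares. The normal is the singular vector of the centred points with the smallest singular value.

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through Nx3 points."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    n = vt[-1]                        # direction of least variance
    return n / np.linalg.norm(n)

# Synthetic flat road patch in the x-y plane -> normal is (up to sign) the z axis.
xy = np.random.default_rng(0).uniform(-1, 1, size=(50, 2))
road_points = np.column_stack([xy, np.zeros(50)])
n = plane_normal(road_points)
```

Comparing this normal with the one expected from the nominal mounting pose gives the correction needed for online recalibration.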

- Object Detection and Tracking:
First, objects should be detected in image coordinates and afterwards transformed into the ego-vehicle coordinate system. Detected objects should then be tracked over multiple frames.
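
Both steps can be sketched under simplifying assumptions (flat road, forward-looking camera at height h with zero pitch; all numbers are illustrative): back-project a detection's foot point from pixels to the ground plane, then associate detections with existing tracks by nearest neighbour.

```python
import numpy as np

def foot_point_to_ground(u, v, f=800.0, cx=320.0, cy=240.0, h=1.5):
    """Pixel (u, v) below the horizon -> (forward, lateral) in metres."""
    forward = h * f / (v - cy)           # similar triangles on the ground plane
    lateral = forward * (u - cx) / f
    return np.array([forward, lateral])

def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour matching: returns {track_index: detection_index}."""
    matches, used = {}, set()
    for ti, tpos in enumerate(tracks):
        dists = [np.linalg.norm(tpos - d) for d in detections]
        best = int(np.argmin(dists))
        if dists[best] < gate and best not in used:
            matches[ti] = best
            used.add(best)
    return matches

pos = foot_point_to_ground(u=320.0, v=360.0)   # object straight ahead, 10 m away
tracks = [np.array([10.0, 0.0])]               # one existing track
matches = associate(tracks, [pos])
```

A full tracker would add a motion model (e.g. a Kalman filter) per track; the association step above is the glue between detection and tracking.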

- Camera Synchronization:
Software-based synchronization of cameras, especially important for the development of a GUI.
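
Since the cameras are free-running, each frame carries its own timestamp; a simple software synchronization picks, per camera, the frame whose timestamp is closest to a chosen reference time (the timestamps below are made up).

```python
def nearest_frame(timestamps, t_ref):
    """Index of the timestamp closest to t_ref."""
    return min(range(len(timestamps)), key=lambda i: abs(timestamps[i] - t_ref))

def synchronize(streams, t_ref):
    """For each camera stream (a list of frame timestamps), pick the nearest frame."""
    return [nearest_frame(ts, t_ref) for ts in streams]

front = [0.000, 0.033, 0.066, 0.100]   # ~30 fps camera
rear  = [0.010, 0.045, 0.079]          # same rate, different phase
picked = synchronize([front, rear], t_ref=0.066)
```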

- Development of a Touchscreen GUI:
A bird's-eye view should be generated and shown on a touchscreen in order to visualize the surroundings of the ego-vehicle.
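
The core of such a view is inverse perspective mapping: estimate a homography H that maps a road trapezoid in the camera image onto a rectangle in the top-down view (the four correspondences below are invented for illustration), then warp each camera image with its H and compose the results.

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform from four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 3)        # null-space vector, up to scale

def warp_point(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

src = [(200, 400), (440, 400), (600, 480), (40, 480)]   # road trapezoid in the image
dst = [(0, 0), (100, 0), (100, 100), (0, 100)]          # rectangle in top-down view
H = homography(src, dst)
bev = warp_point(H, (200, 400))        # should land on the rectangle's corner
```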

- Segmentation:
Development of a segmentation algorithm to obtain richer information about the static surroundings of the ego-vehicle.
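
As a classical stand-in for the segmentation idea (the project would likely use a CNN-based semantic segmentation instead), connected foreground cells of a binary occupancy grid can be grouped into labelled regions via flood fill:

```python
def label_regions(grid):
    """Return a dict {label: set of (row, col)} of 4-connected foreground regions."""
    rows, cols = len(grid), len(grid[0])
    seen, regions, label = set(), {}, 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                stack, region = [(r, c)], set()
                seen.add((r, c))
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    region.add((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions[label] = region
                label += 1
    return regions

grid = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
regions = label_regions(grid)
```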

- Sensor Fusion:
The processed information from all cameras is fused to obtain a model of the environment.
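
A minimal fusion sketch, assuming each camera has already transformed its detections into the ego-vehicle frame: detections of the same physical object from different cameras land close together and can be merged by averaging (the positions and the 1 m merge radius below are illustrative).

```python
import numpy as np

def fuse(detections_per_camera, radius=1.0):
    """Greedily merge nearby detections from all cameras into one object list."""
    points = [np.asarray(p, float) for cam in detections_per_camera for p in cam]
    fused = []                         # list of (cluster centre, member points)
    for p in points:
        for i, (centre, members) in enumerate(fused):
            if np.linalg.norm(p - centre) < radius:
                members.append(p)
                fused[i] = (np.mean(members, axis=0), members)  # update centre
                break
        else:
            fused.append((p, [p]))     # start a new cluster
    return [centre for centre, _ in fused]

front_cam = [(10.0, 0.2)]
left_cam  = [(10.2, 0.0), (3.0, 5.0)]   # first entry: same car as front_cam's
objects = fuse([front_cam, left_cam])
```

A full environment model would also fuse object attributes (class, velocity) and weight each camera by its estimated measurement uncertainty.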