Robust Learning of Gestures from Demonstration with Motion Capture Device

Chair: Lehrstuhl für Regelungssystemtechnik

Supervisors: Myrel Alsayegh, Frank Hoffmann

Start date: 19.10.2015

Maximum number of participants: 6

Description: Configuring and reconfiguring a robot to perform a task requires a robotics expert to analyze the task in full and to program the robot.
Humans engage many strategies to acquire new skills and adapt them to novel contexts. Learning from demonstration, or imitation learning, is a means to learn a task from observations of a teacher demonstrating the skill. Rather than explicitly programming a path or trajectory, it aims at extracting the relevant information that best represents the desired skill or motion. The aim of this paradigm is for robot capabilities to be more easily implemented, extended and adapted to novel situations, even by end users without programming experience.

In this context, two main questions arise:
1- What to imitate? This question concerns what constitutes the essence of a demonstration.
2- How to imitate? This concerns how to represent and reproduce the demonstrated skill.

Demonstration techniques differ, and choosing a suitable one depends on the type of robot and the complexity of the task. The two major demonstration techniques are kinesthetic teaching on the physical robot and recording the teacher's execution of the skill with motion capture devices.

This project uses a motion capture system to record the teacher's executions. The motion capture system consists of multiple cameras that track visual markers attached to the teacher's body. The skeleton posture is reconstructed from the marker trajectories in 3D space.
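As a minimal illustration of this reconstruction step, the following sketch computes a single joint angle from three tracked markers. The marker names and coordinates are hypothetical placeholders, not the output of any particular capture system.

import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle at p_joint between the two adjacent limb segments (radians)."""
    u = p_proximal - p_joint
    v = p_distal - p_joint
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# Hypothetical example frame: shoulder, elbow and wrist markers in metres.
shoulder = np.array([0.00, 0.00, 1.40])
elbow = np.array([0.00, 0.30, 1.15])
wrist = np.array([0.25, 0.45, 1.10])
print(np.degrees(joint_angle(shoulder, elbow, wrist)))  # elbow flexion angle

Repeating this computation per frame turns the raw marker trajectories into joint-angle trajectories, one common intermediate representation before encoding.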

This project group addresses the issue of robust representation and encoding as well as robust generalization of the taught skill.
This involves the following tasks:
- Data acquisition and recording of demonstrations:
  - Implementing a framework for recording and streaming gestures, covering the case of multiple gestures, multiple teachers and imperfect or anomalous demonstrations (a container sketch follows this list)
  - Definition and recording of a large dataset of diverse one- and two-handed gestures and skills

- Robust encoding and generalization of the skill:
  - Classification and encoding of the behavior using Gaussian Mixture Models (a GMM/GMR sketch follows this list)
  - Robust representation and generalization of the acquired knowledge to different contexts
  - Determining the constraints necessary to achieve the demonstrated task
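As an illustration of the encoding step, the sketch below fits a Gaussian Mixture Model to synthetic demonstration data and reproduces the motion with Gaussian Mixture Regression (GMR), i.e. by conditioning the learned joint density of time and position on time. The data, component count and one-dimensional output are placeholders; real input would come from the recorded marker or joint trajectories.

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for five noisy demonstrations of a 1-D gesture;
# each sample is a (time, position) pair.
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 100), 5)
x = np.sin(2.0 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
data = np.column_stack([t, x])

# Encode the joint (time, position) density with a GMM.
gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0)
gmm.fit(data)

def gmr(gmm, t_query):
    """Condition the GMM on time to obtain the expected position E[x | t]."""
    x_hat = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # Responsibility of each component for this time step
        # (the constant 1/sqrt(2*pi) cancels in the normalisation).
        h = np.array([
            w * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)
        ])
        h /= h.sum()
        # Responsibility-weighted conditional means of the components.
        x_hat[i] = sum(
            hk * (m[1] + c[1, 0] / c[0, 0] * (tq - m[0]))
            for hk, m, c in zip(h, gmm.means_, gmm.covariances_)
        )
    return x_hat

reproduction = gmr(gmm, np.linspace(0.0, 1.0, 200))

The same conditioning also yields a conditional variance, which indicates where the demonstrations agree; regions of low variance are one way to identify the constraints that must be met to achieve the task.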

Students are expected to have a background in robotics, control theory and optimization, as well as solid programming experience in C++ and Matlab.