Perception vs. Reality: Multi-modal data overlaying for object localization and detection

Chair: Communication Networks Institute and Informatik 12

Supervisors: Christian Hakert, Yunfeng Huang

Start date: 07.10.2019

Maximum number of participants: 8

Description: The goal of this project group is to exploit multi-modal sensors integrated with lightweight Internet-of-Things (IoT) devices for cross-modal object localization and detection. Perception is the confluence of information based on sensed data from partial views of the world, such as images from visual systems and locations from RF-based positioning systems. The former is visual perception, the latter radio perception. Reality, in contrast, is a fixed phenomenon in the environment, regardless of perception capabilities. However, different perceptions sometimes indicate different aspects of reality. For example, as shown in the figure, a tag attached to Teddy is localized by radio perception technologies such as Bluetooth Low Energy (BLE) and ultrasonic-based localization systems, while the visual perception technology detects the object as Spider-Man. Therefore, this project group will develop a cross-modal object localization and detection system for an indoor environment. The perception and analytical results from vision-based and sensor-based approaches will be fused to provide cross-verification of reality.

This project group will address three technical issues: sensing, data analytics, and multi-modal data overlaying. Multi-modal sensors, including ultrasonic sensors and Bluetooth sensors, will be integrated with lightweight IoT devices to perform localization in the targeted environment. Meanwhile, a camera will be deployed in the targeted environment to perform object detection. Localization algorithms based on multi-modal sensor data will be designed, and their results will be combined with the results of vision-based object detection algorithms. To this end, data overlaying algorithms will be designed to perform multi-modal sensor fusion. Finally, the cross-modal analytical results will be presented and visualized through a web service for end-users. An initial prototype system will be developed to conduct experiments in an indoor environment.
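To illustrate the intended cross-verification, the following minimal sketch (an illustration only; the function name, the 0.5 m threshold, and the 2-D positions are placeholders, not part of the project specification) matches a radio-localized tag against the nearest visually detected object and reports whether the two perceptions agree:

```python
import math

def cross_verify(tag, detections, max_distance=0.5):
    """Match a radio-localized tag against visually detected objects.

    tag: dict with 'label' and 'position' (x, y) in metres, e.g. from BLE/ultrasonic localization.
    detections: list of dicts with 'label' and 'position' from the vision pipeline.
    Returns the nearest detection and whether its label agrees with the tag.
    """
    if not detections:
        return None, False
    nearest = min(detections, key=lambda d: math.dist(tag["position"], d["position"]))
    agrees = (math.dist(tag["position"], nearest["position"]) <= max_distance
              and nearest["label"] == tag["label"])
    return nearest, agrees

# Example from the description: the tag claims "teddy", but the camera sees "spiderman" there.
tag = {"label": "teddy", "position": (1.2, 3.4)}
detections = [{"label": "spiderman", "position": (1.3, 3.3)},
              {"label": "chair", "position": (4.0, 0.5)}]
print(cross_verify(tag, detections))  # -> nearest detection "spiderman", agrees = False
```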

To achieve this goal, this project group will focus on the three technical sub-tasks: (1) sensing, (2) analytics, and (3) data overlaying.
• Sensing: The BLE sniffing technology will be implemented on IoT devices for collecting mobility data. Meanwhile, ultrasonic sensors will be integrated with the IoT devices to provide proximity information for multi-modal localization. Resource-constrained sensing control and camera sensing algorithms will be developed.
• Analytics: Multi-modal localization algorithms based on proximity information from ultrasonic sensors and BLE information will be designed (a minimal ranging and trilateration sketch follows this list). Open-source object detection libraries will be used to provide visual input to the data analytics.
• Data overlaying: A fusion of localization results and object detection results will be represented through web services or mobile applications.
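The sensing and analytics sub-tasks could, for instance, combine BLE RSSI ranging with trilateration. The sketch below is only a starting point under strong assumptions: it uses the common log-distance path-loss model with placeholder parameters (tx_power_dbm, path_loss_exponent) that would have to be calibrated for the actual hardware, assumes three fixed IoT anchors at known positions, and solves a 2-D non-linear least-squares problem with SciPy:

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Convert a BLE RSSI reading to an approximate range in metres
    using the log-distance path-loss model (parameters are placeholders)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def localize(anchors, distances):
    """Estimate a 2-D tag position from known anchor positions and ranged
    distances (BLE- or ultrasonic-derived) via non-linear least squares."""
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)

    def residuals(position):
        # Difference between predicted anchor-tag distances and measured ranges.
        return np.linalg.norm(anchors - position, axis=1) - distances

    result = least_squares(residuals, x0=anchors.mean(axis=0))
    return result.x

# Example: three fixed anchors and one RSSI reading per anchor for a single tag.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
rssi_readings = [-65.0, -72.0, -70.0]
ranges = [rssi_to_distance(r) for r in rssi_readings]
print(localize(anchors, ranges))  # estimated (x, y) position of the tag
```

Ultrasonic proximity readings could be fed into the same residual function as additional range constraints.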

The language of instruction for the project group will be English.

Programming languages for major tasks (but not limited to the following options):
• Sensing: Python and C.
• Analytics: Python, Java, or others.
• Data overlaying: Python, JavaScript, or others (a minimal web-service sketch follows this list).
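Since the fused overlay results are to be presented to end-users through a web service, a minimal sketch of such an endpoint is shown below, assuming Flask and a hypothetical in-memory store of fused results; the route name and payload fields are illustrative only:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory store; in the real system this would be filled by the
# data-overlaying pipeline (radio-localized tag + visually detected object).
fused_results = [
    {"tag_id": "tag-01", "radio_label": "teddy", "visual_label": "spiderman",
     "position": [1.2, 3.4], "match": False},
]

@app.route("/objects")
def objects():
    """Return the latest cross-modal overlay results as JSON for the front end."""
    return jsonify(fused_results)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```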

Technical consultation team: Kuan-Hsun Chen, Fang-Jing Wu, and Jian-Jia Chen


 

The project group will take place!

Participants: 184190, 166484, 207214, 207704, 215657, 207702, 214936, 214650, 215791, 207700,