Master's Theses (completed)
- Async LUMPI: a time-asynchronized benchmark for cooperative object detection
  Cooperative perception, by intelligently integrating data from multiple agents, provides a comprehensive and accurate understanding of the surrounding environment and objects. In this work, cooperative perception is employed for 3D object detection. Existing multi-modal datasets typically timestamp data per frame, while objects are annotated based on LiDAR point clouds, leading to temporal discrepancies between annotations and frames. This temporal offset is more pronounced in cooperative perception datasets, because the measurements of the LiDAR sensors on different agents are asynchronous: the same object may be captured by multiple sensors at different times. Existing cooperative perception algorithms based on such datasets often overlook this temporal misalignment, resulting in significant errors. Hence, a new benchmark is proposed to address this issue, integrating asynchronous data with detailed timestamps to produce globally synchronized detection results. The asynchronous point cloud data with detailed timestamps is provided by LUMPI, a multi-view intersection dataset. The annotations in LUMPI are globally synchronized through interpolation and serve as training and testing targets. Leveraging this new benchmark, a model is developed that exploits the asynchronous data and timestamps through cooperative perception, enabling the perception of temporal information within single frames and generating accurate, globally synchronized timestamps.
  Supervision: Yuan | Year: 2024
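As an illustration of the synchronization step described above, the following is a minimal sketch of how asynchronous annotations of a tracked object could be interpolated to a common target timestamp. The data structure and field names (`Box`, `t`, `center`, `yaw`) are assumptions for illustration, not the benchmark's actual format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Box:
    t: float                 # annotation timestamp in seconds
    center: np.ndarray       # (x, y, z) position in a common world frame
    yaw: float               # heading angle in radians

def interpolate_box(prev: Box, nxt: Box, t_sync: float) -> Box:
    """Linearly interpolate one object's annotated state to the target timestamp."""
    a = (t_sync - prev.t) / (nxt.t - prev.t)                   # interpolation weight in [0, 1]
    center = (1 - a) * prev.center + a * nxt.center            # linear blend of positions
    dyaw = (nxt.yaw - prev.yaw + np.pi) % (2 * np.pi) - np.pi  # shortest angular difference
    return Box(t=t_sync, center=center, yaw=prev.yaw + a * dyaw)

# Usage: synchronize an object annotated at 0.02 s and 0.12 s to a global time of 0.10 s.
synced = interpolate_box(Box(0.02, np.array([10.0, 4.0, 0.0]), 0.10),
                         Box(0.12, np.array([11.2, 4.1, 0.0]), 0.14), 0.10)
```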
- Query-based Semi-automatic Annotation for Cooperative Vulnerable Road User Detection
  In this study, a multi-modal temporal modeling framework is introduced. Features are extracted from cameras and LiDARs and subsequently transformed into the Bird's Eye View (BEV) space, capturing the semantic information of the cameras and the localization information of the LiDARs. A feature queue retains features from historical frames, and a transformer facilitates temporal interaction between the feature queue and the current frame. The fused feature passes through an anchor-based head to generate the final prediction. The model demonstrates significant improvements in pedestrian detection over the baseline model, which lacks temporal modeling and relies only on LiDAR information. On the LUMPI dataset, the model achieves an Average Precision (AP) of 95.0 at an Intersection over Union (IoU) threshold of 0.3, and 77.7 at an IoU threshold of 0.7.
  Supervision: Yuan | Year: 2024
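The combination of a feature queue with transformer-based temporal interaction could look roughly like the following PyTorch sketch. It is only a simplified illustration of the idea under assumed module names, feature shapes and queue length, not the thesis implementation.

```python
import torch
import torch.nn as nn
from collections import deque

class TemporalBEVFusion(nn.Module):
    """Fuse the current BEV feature map with a queue of historical BEV features."""
    def __init__(self, dim: int = 256, history: int = 4, heads: int = 8):
        super().__init__()
        self.queue = deque(maxlen=history)                 # flattened BEV features of past frames
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, bev: torch.Tensor) -> torch.Tensor:
        # bev: (B, C, H, W) BEV feature map of the current frame
        b, c, h, w = bev.shape
        cur = bev.flatten(2).transpose(1, 2)               # (B, H*W, C) query tokens
        if self.queue:
            hist = torch.cat(list(self.queue), dim=1)      # (B, T*H*W, C) key/value tokens
            fused, _ = self.attn(cur, hist, hist)          # temporal cross-attention
            cur = cur + fused                              # residual fusion
        self.queue.append(cur.detach())                    # keep the current features for later frames
        return cur.transpose(1, 2).reshape(b, c, h, w)

# Usage: fuse a dummy 256-channel, 100 x 100 BEV grid with its (initially empty) history.
fusion = TemporalBEVFusion()
out = fusion(torch.randn(1, 256, 100, 100))
```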
- Localization correction using predicted objects and road geometry for collective perception
  This study proposes a localization correction method based on object detection and road segmentation. Using the deep learning network GEVBEV, the LiDAR point clouds collected by different vehicles are converted into 2D bounding boxes and road segmentation results in a Bird's Eye View (BEV) perspective. The road geometry is then obtained with a route extraction method based on the segmentation results, while outliers are eliminated. The road geometry and the detected objects serve as features, and the positioning errors between vehicles are corrected in two steps: coarse registration followed by precise registration. This approach enhances the collective perception performance among vehicles.
  Supervision: Yuan | Year: 2024
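The two-step correction (coarse, then precise registration) ultimately relies on estimating a rigid transform between corresponding features seen by both vehicles. Below is a minimal sketch of the SVD-based rigid-transform estimation on matched 2D object centers; the point values are made up, and the thesis pipeline additionally uses the extracted road geometry as a feature.

```python
import numpy as np

def rigid_transform_2d(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R (2x2) and translation t (2,) mapping src onto dst, both (N, 2)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # cross-covariance of the centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: recover a small simulated pose offset from three matched object centers.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
ego_centers = np.array([[5.0, 1.0], [12.0, -3.0], [20.0, 2.5]])
coop_centers = ego_centers @ R_true.T + np.array([0.8, -0.4])
R_est, t_est = rigid_transform_2d(ego_centers, coop_centers)
```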
- Quality Assessment of Map Data Using Crowd-Sourced Fleet Data
  The thesis investigates the assessment of curvature values that were extracted from navigation maps on the one hand and generated from fleet data on the other, with the goal of improving driving safety and comfort. Using map-matching and data aggregation techniques, the vehicle data were assigned to the corresponding road segments of the navigation maps. The thesis revealed partly significant deviations in the curvature values, underscoring the need to optimize the map base in order to increase the efficiency of driver assistance systems and thus contribute to better driving safety and comfort.
  Supervision: Yuan, Sester | Year: 2024
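One way the fleet-derived curvature could be computed is from three consecutive map-matched trajectory points via the circumscribed circle, as in the minimal sketch below; the exact estimator, thresholds and map attributes used in the thesis may differ, and the map curvature value here is hypothetical.

```python
import numpy as np

def curvature_from_points(p1, p2, p3) -> float:
    """Curvature (1/m) of the circle through three 2D points; returns 0 for collinear points."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    area = 0.5 * abs(cross)                       # triangle area spanned by the three points
    if area < 1e-9:
        return 0.0
    return 4.0 * area / (a * b * c)               # kappa = 1/R = 4 * area / (a * b * c)

# Usage: fleet-derived curvature of one segment vs. a (hypothetical) map curvature value.
fleet_kappa = curvature_from_points((0.0, 0.0), (10.0, 0.5), (20.0, 2.0))
map_kappa = 0.009
deviation = abs(fleet_kappa - map_kappa)
```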
- Reinforcement learning-based sharing data selection for collective perception of connected autonomous vehicles
  In this thesis, a deep reinforcement learning model is proposed to reduce the redundancy of Collective Perception Messages (CPMs) in the raw point cloud sharing scenario of connected autonomous vehicle (CAV) networks. By combining deep reinforcement learning with collective perception, an RL-based data selection method using the Double DQN (DDQN) algorithm is implemented. With this model, a vehicle can intelligently select the data to be transmitted, thereby eliminating redundant data in the network, saving limited network resources, and reducing the risk of communication network congestion.
  Supervision: Prof. Markus Fidler, Prof. Monika Sester, Yunshuang Yuan, Shule Li, Sören Schleibaum | Year: 2021
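The core of the DDQN algorithm is decoupling action selection (online network) from action evaluation (target network) when computing the training target, which reduces Q-value overestimation. The following PyTorch sketch illustrates only this step for a hypothetical sharing-decision agent; state and action dimensions, network sizes and reward values are invented for illustration.

```python
import torch
import torch.nn as nn

# Online and target Q-networks over a toy 16-dimensional state and 4 sharing actions.
q_online = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
q_target = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
q_target.load_state_dict(q_online.state_dict())

def ddqn_targets(reward, next_state, done, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: the online net picks the next action, the target net evaluates it."""
    with torch.no_grad():
        next_a = q_online(next_state).argmax(dim=1, keepdim=True)   # action selection
        next_q = q_target(next_state).gather(1, next_a).squeeze(1)  # action evaluation
        return reward + gamma * next_q * (1.0 - done)

# Usage on a dummy batch of two transitions; in the sharing scenario the reward would
# trade off the perception gain of the transmitted data against the channel load.
y = ddqn_targets(torch.tensor([0.3, -0.1]), torch.randn(2, 16), torch.tensor([0.0, 1.0]))
```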
- Using Dynamic Objects for Probabilistic Multi-View Point Cloud Registration and Localization
  Registering two point clouds means finding the optimal rigid transformation that aligns them. For a connected autonomous vehicle (CAV), an accurate localization of the 'ego' vehicle can be achieved by registering its point cloud to the LiDAR data of other connected 'cooperative' vehicles. This work uses an advanced object detection algorithm to select observation points that lie on detected vehicles. As a prerequisite, a general probability distribution based on the observation points of all detected vehicles is established. For the registration, observation points from a cooperative vehicle are first assigned to detected bounding boxes; each set of points belonging to one bounding box is then registered to the general probability distribution, resulting in a 'probability map'. In the second step, the probability map is used as the shared information and the point cloud of the ego vehicle is registered to it. In contrast to the Euclidean distance metric of the Iterative Closest Point (ICP) algorithm and the consensus count metric of the maximum consensus method, a new probability-related metric is proposed for the coarse registration. It provides an initial transformation, which is subsequently refined by ICP. The registration relies entirely on the vehicle information in the scene. The algorithm is evaluated on the collective perception dataset COMAP, in particular on scenes that are challenging for existing registration algorithms, such as traffic jams or open spaces where no sufficient overlap of observed static objects exists. For those scenarios, the algorithm shows good accuracy and robustness. The accompanying figures show the general distribution of observation points and the registration result between the cooperative vehicle's probability map and the ego vehicle's LiDAR points.
  Supervision: Brenner, Yuan, Axmann | Year: 2021
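The probability-related metric for coarse registration can be pictured as scoring a candidate transform by how likely the ego vehicle's observation points are under the cooperative vehicle's probability map. The sketch below uses a simple Gaussian-mixture density for this purpose; it is an illustrative assumption, not the distribution model derived in the thesis.

```python
import numpy as np
from scipy.stats import multivariate_normal

def probability_score(points, R, t, means, covs) -> float:
    """Sum of mixture densities of the transformed 2D points; higher means better alignment."""
    p = points @ R.T + t
    density = np.zeros(len(p))
    for mu, cov in zip(means, covs):                       # one Gaussian per detected vehicle
        density += multivariate_normal.pdf(p, mean=mu, cov=cov)
    return float(density.sum())

# Usage: compare two candidate yaw corrections for a toy scene with one detected vehicle.
rng = np.random.default_rng(0)
ego_points = rng.normal(loc=[5.0, 2.0], scale=0.5, size=(50, 2))
means, covs = [np.array([5.0, 2.0])], [np.eye(2)]
for yaw in (0.0, 0.3):
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    score = probability_score(ego_points, R, np.zeros(2), means, covs)
```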
Open Master's Theses
- Automatic Annotation of Historical Maps
  Search engines enable fast and targeted access to content stored on the internet. A prerequisite, however, is that this content is described by keywords or metadata. When searching for maps, the map names or map types are typically used as keywords. If one wants to access map content, for example maps containing deciduous forests, this content must also be described by metadata. Such descriptions are likewise required to give blind or visually impaired people access to the maps. This is where the Master's thesis comes in: using deep learning methods, a so-called semantic segmentation of the map content into several land use classes is to be carried out. This information is then to be described in a suitable form as metadata and added to the data.
  Supervision: Yuan, Sester | Year: 2024
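One conceivable starting point for the semantic segmentation step is an off-the-shelf segmentation network with its output set to the desired number of land-use classes, as in the sketch below; the class list, model choice and tile size are illustrative assumptions, not part of the thesis specification.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 6  # e.g. deciduous forest, water, settlement, grassland, road, other (illustrative)
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=NUM_CLASSES)
model.eval()

tile = torch.randn(1, 3, 512, 512)                 # one RGB map tile (dummy data here)
with torch.no_grad():
    logits = model(tile)["out"]                    # (1, NUM_CLASSES, 512, 512)
    pred = logits.argmax(dim=1)                    # per-pixel land-use class

# Metadata derivation: record which classes cover a noticeable share of the tile.
shares = pred.flatten().bincount(minlength=NUM_CLASSES).float() / pred.numel()
present_classes = (shares > 0.05).nonzero().flatten().tolist()
```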