MA Defense Henry Beiker
- https://www.informatik.hu-berlin.de/de/events/ma-verteidigung-henry-beiker
- When: 16 July 2025, from 12:00
- Where: Fraunhofer FOKUS, Room 3022
- Contact: Prof. Holger Schlingloff
Dear members of the institute,
On Wednesday, 16 July 2025, at 12:00,
Mr Henry Beiker will defend his Master's thesis entitled
“Improving object detection models by simulating
false negative prone scenarios”.
The thesis was supervised by Prof. H. Schlingloff and Dr. J. Großmann.
The defense will take place at Fraunhofer FOKUS in Room 3022
and online via Zoom at
https://hu-berlin.zoom-x.de/j/66149311489
All interested parties are cordially invited.
Abstract:
The increasing advancements in artificial intelligence (AI) have paved the way for its application in various fields, including the automation of train systems. One of the critical components for achieving fully autonomous trains is the object detection system, which is responsible for identifying obstacles, humans, and relevant signals within the train’s operational area. While humans can easily interpret and respond to these elements under diverse conditions, ensuring that AI-based systems can perform this task accurately, especially under challenging scenarios, is essential for safe autonomous operation.
Current methods for evaluating AI models typically involve splitting available data into training and test sets. However, this approach falls short in safety-critical situations where datasets often lack difficult or adverse conditions, leading to models that perform well under normal conditions but fail in real-world, high-risk scenarios. Such failures could have catastrophic consequences in autonomous train operations.
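As an illustration, the conventional evaluation described above amounts to a random partition of the available data, which by construction mirrors the distribution of the collected dataset; rare adverse conditions stay rare in the test set. A minimal sketch (the scene names and 80/20 ratio are hypothetical, not from the thesis):

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Randomly partition samples into a training set and a test set."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    # Test set is drawn from the same distribution as the training set,
    # so underrepresented high-risk scenarios remain underrepresented.
    return shuffled[n_test:], shuffled[:n_test]

# Hypothetical dataset of 100 labeled scenes
data = [f"scene_{i}" for i in range(100)]
train, test = train_test_split(data)
```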
This work addresses this issue by developing a simulation-based methodology designed to expose AI object detection models to complex, error-prone situations that are underrepresented in real-world datasets. This approach aims to identify weaknesses and validate the performance of AI models through targeted testing, ultimately enhancing their robustness and reducing the likelihood of accidents caused by false-negative classifications in autonomous train systems. The focus lies solely on false-negative prone scenarios, because in those cases human life can be endangered. In cases of false positives, the risk to human life is comparatively lower, as the primary consequence is often an abrupt stop, leading mostly to monetary costs rather than safety-critical outcomes.
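The false-negative focus above can be made concrete with the standard metric: the fraction of ground-truth objects a detector fails to report. A minimal sketch, with hypothetical scene labels (the thesis itself evaluates full object detection models, not simple label lists):

```python
def false_negative_rate(ground_truth, detected):
    """Fraction of ground-truth objects the detector missed.

    A missed object (false negative) is safety-critical in train automation:
    an undetected person or obstacle cannot trigger braking.
    """
    if not ground_truth:
        return 0.0
    missed = [obj for obj in ground_truth if obj not in detected]
    return len(missed) / len(ground_truth)

# Hypothetical scene: three relevant objects, detector reports only two
gt = ["person", "signal", "obstacle"]
det = ["person", "signal"]
fnr = false_negative_rate(gt, det)  # "obstacle" was missed
```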
Regards,
HS
Prof. Dr. Holger Schlingloff
Institute of Computer Science, Humboldt-Universität zu Berlin
GFaI e.V., Volmerstr. 3, 12489 Berlin
FhG-FOKUS, Kaiserin-Augusta-Allee 31, 10589 Berlin
ZeSys e.V., Wagner-Régeny-Str. 16, 12489 Berlin
Mobile: 0151 5186 3563