NEW PROJECT: Safety argumentation using explainable AI for automated driving in rail operations

Image by tawatchai07 on Freepik

Can an autonomous train recognize with sufficient certainty whether a deviation in the camera image is caused by a shadow or by a child playing on the tracks? And how can a human assessor be sure, during the safety assessment, that the system is not falsely overconfident? How can they ensure that their safety assessment is sufficiently reliable - especially when the technology makes it impossible in principle to provide the relevant evidence?

Together with our partners at TUD (Chair of Engineering Psychology and Chair of Fundamentals of Electrical Engineering) and EYYES Deutschland GmbH, we are participating in a DZSF research project. In our project XRAISE (Explainable AI for Railway Safety Evaluations), we are researching how explainable artificial intelligence methods can be used to improve the safety evaluation of AI algorithms. The project is funded by the Federal Railway Authority from 2024 to 2026.