The AI Function

Synthetic data for training and validating the AI function

Pedestrians appear in real traffic in a highly variable range of scenarios and manifestations. The neural networks in KI Absicherung are therefore trained with systematically generated synthetic data. This makes it easy to control and vary the context dimensions and influencing factors of the different scenarios at will, and allows training and test data sets to be generated systematically. In addition, targeted data for so-called corner cases, scenarios that are particularly challenging or difficult for the AI function, can be methodically derived and created.
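The idea of systematically varying context dimensions can be illustrated with a minimal sketch. The dimensions, their values, and the corner-case heuristic below are illustrative assumptions, not the project's actual parameter space or criteria:

```python
from itertools import product

# Hypothetical context dimensions for synthetic pedestrian scenarios;
# the real parameter space in KI Absicherung is far richer.
DIMENSIONS = {
    "lighting": ["day", "dusk", "night"],
    "occlusion": [0.0, 0.25, 0.5, 0.75],
    "distance_m": [5, 15, 30, 60],
}

def generate_scenarios(dimensions):
    """Enumerate every combination of the context dimensions."""
    keys = list(dimensions)
    for values in product(*(dimensions[k] for k in keys)):
        yield dict(zip(keys, values))

def is_corner_case(scenario):
    """Flag combinations expected to be hard for the detector
    (an illustrative heuristic, not the project's criterion)."""
    return scenario["lighting"] == "night" and scenario["occlusion"] >= 0.5

scenarios = list(generate_scenarios(DIMENSIONS))
corner_cases = [s for s in scenarios if is_corner_case(s)]
# 3 * 4 * 4 = 48 scenarios, of which 8 satisfy the corner-case heuristic
```

Because every combination is enumerated explicitly, both balanced training sets and targeted corner-case test sets can be drawn from the same controlled scenario space.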

Methods and measures for safeguarding the AI function

Established safety processes cannot easily be transferred to AI-based machine learning methods. For example, after a neural network has been trained, it is not clear which characteristics of the training data the network has learned. In addition, slightly changed input data can turn a module's correct behaviour into incorrect behaviour.
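The instability under small input changes can be made concrete with a toy sketch: a stand-in "detector" whose decision flips under a tiny perturbation near its decision boundary. The detector and the stability probe are illustrative assumptions, not components of the project:

```python
def toy_detector(brightness):
    """Toy stand-in for a learned module: classifies a single pixel
    statistic as 'pedestrian' above a fixed learned threshold."""
    return "pedestrian" if brightness > 0.500 else "background"

def stability_check(model, x, epsilon):
    """Probe whether the model's output agrees on inputs within
    +/- epsilon of x; disagreement marks a potential robustness
    insufficiency of the kind described above."""
    outputs = {model(x - epsilon), model(x), model(x + epsilon)}
    return len(outputs) == 1

stability_check(toy_detector, 0.80, 0.01)   # stable far from the boundary
stability_check(toy_detector, 0.505, 0.01)  # flips near the boundary
```

The same probing idea scales to real detectors by sampling perturbations (noise, brightness shifts, small translations) around each test input and recording where the output changes.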

In KI Absicherung, methods and measures are developed and combined to systematically identify and reduce such inherent insufficiencies of AI functions, and these mechanisms are examined and evaluated with regard to their safety relevance. To this end, a list of inherent and systematic insufficiencies of deep neural networks is compiled and used to develop effectiveness and safety metrics for the AI function. The aim is a toolbox of methods and measures, each evaluated for its effectiveness in safeguarding the AI function.
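A safety metric differs from an ordinary accuracy metric in what it aggregates. As a minimal sketch, with entirely hypothetical per-scenario results, a safety-oriented evaluation can report the worst-case miss rate across scenarios rather than the average, since one systematically failing context matters more than a good mean:

```python
def miss_rate(detections, ground_truth):
    """Fraction of annotated pedestrians the detector missed."""
    missed = sum(1 for gt in ground_truth if gt not in detections)
    return missed / len(ground_truth) if ground_truth else 0.0

# Hypothetical per-scenario results: (detected IDs, annotated IDs).
results = {
    "day_clear":  (["p1", "p2"], ["p1", "p2"]),
    "night_rain": (["p1"],       ["p1", "p2", "p3"]),
}

# Worst-case aggregation: here 2/3, driven entirely by "night_rain",
# although the average over both scenarios would look much better.
worst = max(miss_rate(d, g) for d, g in results.values())
```

A metric like this makes an insufficiency visible per scenario, so a mitigation measure can be judged by how much it improves exactly the contexts in which the function fails.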

Comprehensive assurance strategy

The developed toolbox helps to validate the safety-relevant effectiveness of the individual measures and to establish the safety argumentation for the pedestrian detection use case. In this way, it supports the proof that the inherent insufficiencies of the AI module have been sufficiently mitigated.

All work steps and contributions needed to define and exemplarily implement a systematic, comprehensive approach to safeguarding a specific AI function are summarised under the term Assurance Case. In KI Absicherung, this is driven by the closely intertwined contributions of the individual subprojects; linking them all yields the holistic safeguarding strategy for the AI function.