Explainable AI

We develop novel techniques for explaining how self-learning AI systems, such as deep neural networks, perform their tasks. We focus in particular on providing explanations that are understandable to non-experts while remaining faithful to the model. We adapt established methodology and models from neuroscience to build and understand AI systems.

This involves post-hoc techniques for understanding existing AI systems as well as methods for building models that are more interpretable by design.
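As a small, hypothetical illustration of what a post-hoc technique can look like, the sketch below computes gradient-based saliency for a toy model. The toy model (a single logistic unit), its weights, and all function names are assumptions for illustration only, not the group's actual methods; real post-hoc analyses target trained deep networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # Toy "network": a single logistic unit y = sigmoid(w . x + b)
    return sigmoid(w @ x + b)

def saliency(x, w, b):
    # Gradient of the output with respect to the input:
    # d/dx sigmoid(w . x + b) = y * (1 - y) * w
    y = predict(x, w, b)
    return y * (1.0 - y) * w

# Illustrative values (assumptions, not data from any experiment)
w = np.array([2.0, -1.0, 0.0])
b = 0.0
x = np.array([0.5, 0.5, 0.5])

s = saliency(x, w, b)
# Feature 2 has zero weight, so its saliency is exactly zero;
# |saliency| ranks input features by local influence on the output.
print(np.abs(s).argmax())
```

The magnitudes of the input gradient give a per-feature attribution: here the first feature, with the largest weight, dominates the local explanation.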

Selected Publications

  • Relation of Activity and Confidence When Training Deep Neural Networks
    Valerie Krug, Christopher Olson, Sebastian Stober
    In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2023. Communications in Computer and Information Science, vol. 2134, 2025
    [URL]
  • Neuroscience-Inspired Analysis and Visualization of Deep Neural Networks
    Valerie Krug
    PhD thesis, Otto-von-Guericke-Universität Magdeburg, Fakultät für Informatik, 2024
    [URL]
  • Visualizing Deep Neural Networks with Topographic Activation Maps
    Valerie Krug, Raihan Kabir Ratul, Christopher Olson, Sebastian Stober
    In: HHAI 2023: Augmenting Human Intellect. IOS Press, 2023, pp. 138-152
    [URL] [github]

Last Modification: 11.06.2025