Towards Inherently Interpretable Neural Networks for Differential Diagnosis of Dementia
Overview
Trust and transparency are essential when AI supports high-stakes decisions such as those in healthcare. Differential diagnosis of neurodegenerative diseases is particularly challenging: patients often present with overlapping symptoms, and critical information comes from multiple sources, including imaging, genetics, and clinical tests. While multi-modal models have improved predictive performance, their complexity can make it hard to understand why a prediction was made.
In this talk, I will first introduce a model for multi-modal dementia diagnosis that combines case-based reasoning with architectures that highlight how different data types influence predictions. Building on this, I will show how these ideas extend to generalizable, inherently interpretable architectures that provide faithful explanations across a wide range of tasks and domains. Together, these approaches demonstrate how AI can be both accurate and transparent, enabling trustworthy decision-making in medicine and beyond.
Short Bio
Tom is a PhD candidate at TU Munich, focusing on the integration of image and tabular data into explainable deep learning models for medical image analysis. Prior to his doctoral studies, he worked as a researcher in the Biomedical Image Analysis group at VRVis and at the Laboratory for Artificial Intelligence in Medical Imaging at LMU. He holds a master's degree in Biomedical Computing from TU Munich.
Good to know
Highlights
- 1 hour
- Online
Location
Online event