Ambiguous Interventions and Causal Abstractions
Overview
Welcome to Ambiguous Interventions and Causal Abstractions!
In many applications modelled with the tools of causal inference, we aim to infer the effects of interventions (or treatments) that are ambiguous: a single treatment value T=1 can be instantiated differently across different units. If the effect of the treatment on a unit depends on the manner in which it is instantiated, the standard methods of causal inference are no longer applicable, and a more general causal inference framework is needed. At first sight, causal abstractions seem to offer such a framework. I argue that this is not the case, and that in fact there currently exist almost no causal inference frameworks able to tackle ambiguous interventions. I conclude by suggesting how nondeterministic causal models could help in overcoming this challenge.
Short Biography
Dr Sander Beckers is a researcher whose work bridges computer science, statistics, causality, AI ethics, and the philosophy of science. He earned his PhD from KU Leuven and has held research appointments at several leading international institutions. Currently based at University College London (UCL), he focuses on causal modelling in artificial intelligence and healthcare. His research explores how causal explanation, counterfactual reasoning, and model abstraction can advance the development of trustworthy and interpretable AI systems. Within UCL’s Causality in Healthcare AI Hub (CHAI), Dr Beckers contributes to projects that apply causal reasoning to healthcare AI, ensuring that such models are not only predictive, but also explainable, responsible, and ethically sound.
Good to know
Highlights
- 1 hour
- Online
Location
Online event