Bridging causality and deep learning with causal generative models
Sunday, Aug 3: 4:05 PM - 4:25 PM
Invited Paper Session
Music City Center
Generative models for vision and language have shown a remarkable capacity to emulate creative processes, but they still lack fundamental skills long recognized as essential for genuinely autonomous intelligence. Difficulties with causal reasoning and concept abstraction highlight critical gaps in current models, despite their nascent abilities in reasoning and planning. Bridging this gap requires a synthesis of deep learning's expressiveness with the powerful framework of statistical causality.
We will discuss our recent efforts towards building generative models that extract causal knowledge from data while retaining the flexibility and expressivity of deep learning. Unlike traditional causal methods that rely on predefined causal structures, we tackle the more complex problem of learning causal structure directly from data—even when the causal variables themselves are not explicitly observed. This introduces significant challenges, including ill-posedness, nonconvexity, and the exponential complexity of combinatorial search. We will outline statistical aspects of these problems and present progress towards resolving these challenges with differentiable approaches to causal discovery and representation learning.
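To make the idea of a differentiable approach to causal discovery concrete, here is a minimal sketch of one well-known formulation from the literature (the NOTEARS acyclicity characterization), offered as an illustration rather than the method presented in this talk. It replaces the combinatorial constraint "the graph is a DAG" with a smooth function of the weighted adjacency matrix `W` that vanishes exactly on DAGs, so structure learning can proceed by continuous optimization. The toy matrices `dag` and `cyclic` below are assumed examples.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """Differentiable acyclicity score h(W) = tr(exp(W * W)) - d.

    Here W * W is the elementwise square, and h(W) == 0 exactly when
    the weighted adjacency matrix W encodes a directed acyclic graph.
    Because h is smooth in W, it can serve as a constraint or penalty
    in gradient-based structure learning, avoiding exhaustive search
    over the super-exponential space of DAGs.
    """
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

# A 3-node chain 1 -> 2 -> 3 is a DAG: the score is (numerically) zero.
dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])

# Adding the edge 3 -> 1 closes a cycle: the score becomes positive.
cyclic = dag.copy()
cyclic[2, 0] = 1.0
```

In a full method, this score is combined with a data-fitting loss (e.g. least squares for a linear model, or a neural network's likelihood) and driven to zero with an augmented-Lagrangian scheme; the sketch isolates only the constraint that makes the search differentiable.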
Keywords: Greedy optimization, Misspecified Nonparametric Model Selection, Laplace approximation, Nonparametric Graphical Models