Causal Invariance Learning via Efficient Optimization of a Nonconvex Objective
Yifan Hu
Co-Author
College of Management of Technology, EPFL
Tuesday, Aug 5, 2:50 PM - 3:05 PM
2277
Contributed Papers
Music City Center
Data from multiple environments offer valuable opportunities to uncover causal relationships among variables. We propose nearly necessary and sufficient conditions under which the invariant prediction model coincides with the causal outcome model. Exploiting these essentially necessary identification conditions, we introduce Negative Weight Distributionally Robust Optimization (NegDRO), a nonconvex continuous minimax optimization problem whose global optimizer recovers the causal outcome model. Unlike standard group DRO problems, which maximize over the simplex, NegDRO allows negative weights on the environment losses, which breaks the convexity of the objective. Despite this nonconvexity, we show that a standard gradient method converges to the causal outcome model, and we establish the convergence rate with respect to the sample size and the number of iterations. Unlike existing causal invariance learning approaches, our algorithm avoids exhaustive search, making it scalable, especially when the number of covariates is large.
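To make the minimax formulation concrete, the sketch below shows a NegDRO-style gradient descent-ascent loop under stated assumptions: per-environment squared losses, a weight set of the form {w : sum(w) = 1, w_e >= -gamma} as one possible way of permitting negative weights, and illustrative step sizes. The names negdro and project_shifted_simplex and the parameters gamma, lr_beta, lr_w, and iters are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a NegDRO-style gradient descent-ascent loop (illustrative,
# not the paper's exact algorithm or constraint set).
import numpy as np

def project_shifted_simplex(w, gamma):
    """Project w onto the assumed weight set {w : sum(w) = 1, w_e >= -gamma}
    by applying the standard simplex projection to the shift v = w + gamma."""
    v = w + gamma
    s = 1.0 + gamma * len(w)              # shifted coordinates must sum to this
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - s
    rho = np.nonzero(u - css / (np.arange(len(w)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0) - gamma

def negdro(Xs, ys, gamma=0.5, lr_beta=1e-2, lr_w=1e-2, iters=5000):
    """Gradient descent on beta, projected gradient ascent on environment
    weights w, for min_beta max_w sum_e w_e * L_e(beta) with squared losses."""
    E, p = len(Xs), Xs[0].shape[1]
    beta = np.zeros(p)
    w = np.full(E, 1.0 / E)
    for _ in range(iters):
        losses = np.array([np.mean((y - X @ beta) ** 2) for X, y in zip(Xs, ys)])
        grads = [-2.0 * X.T @ (y - X @ beta) / len(y) for X, y in zip(Xs, ys)]
        beta -= lr_beta * sum(w_e * g for w_e, g in zip(w, grads))   # descent in beta
        w = project_shifted_simplex(w + lr_w * losses, gamma)        # ascent in w
    return beta, w
```

In this sketch the inner ascent step pushes weight toward environments with large losses while the constraint set still allows some weights to go negative, which is what distinguishes the objective from a standard (convex) group DRO problem over the simplex.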
Causal invariance learning
Nonconvex optimization
Computationally efficient causal discovery
Multi-source data
Main Sponsor
Section on Statistical Learning and Data Science