Causal Invariance Learning via Efficient Optimization of a Nonconvex Objective

Yifan Hu (Co-Author)
College of Management of Technology, EPFL
 
Peter Bühlmann (Co-Author)
ETH Zurich
 
Zijian Guo (Co-Author)
Rutgers University
 
Zhenyu Wang (First Author, Presenting Author)
Rutgers University
 
Tuesday, Aug 5: 2:50 PM - 3:05 PM
2277 
Contributed Papers 
Music City Center 
Data from multiple environments offer valuable opportunities to uncover causal relationships among variables. We propose nearly necessary and sufficient conditions ensuring that the invariant prediction model coincides with the causal outcome model. Exploiting these essentially necessary identification conditions, we introduce Negative Weight Distributionally Robust Optimization (NegDRO), a nonconvex continuous minimax optimization problem whose global optimizer recovers the causal outcome model. Unlike standard group DRO problems, which maximize over the probability simplex, NegDRO allows negative weights on the environment losses, which breaks convexity. Despite this nonconvexity, we show that a standard gradient method converges to the causal outcome model, and we establish the convergence rate with respect to the sample size and the number of iterations. Unlike existing causal invariance learning approaches, our algorithm avoids exhaustive search, making it scalable especially when the number of covariates is large.
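To make the minimax structure concrete, the sketch below illustrates a NegDRO-style gradient descent-ascent: descend on the regression coefficients, ascend on environment weights constrained to sum to one but allowed to dip below zero (down to -gamma). This is an illustrative toy implementation under assumed least-squares losses, a specific shifted-simplex constraint set, and hypothetical function names and step sizes; it is not the authors' actual algorithm or constraint parametrization.

```python
import numpy as np

def project_shifted_simplex(w, gamma):
    """Project w onto {w : sum(w) = 1, w_e >= -gamma} (a shifted simplex).

    Shift by gamma, project onto the scaled simplex of total mass
    1 + E * gamma via the standard sort-based projection, then shift back.
    """
    v = w + gamma
    z = 1.0 + len(w) * gamma
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0) - gamma

def negdro_gda(Xs, ys, gamma=0.5, lr_beta=0.05, lr_w=0.05, iters=500):
    """Gradient descent-ascent for a NegDRO-style toy objective:

        min_beta  max_{w : sum(w) = 1, w_e >= -gamma}  sum_e w_e * L_e(beta),

    where L_e is the least-squares loss in environment e. Negative weights
    (gamma > 0) are what distinguish this from standard group DRO.
    """
    E, p = len(Xs), Xs[0].shape[1]
    beta = np.zeros(p)
    w = np.full(E, 1.0 / E)
    for _ in range(iters):
        # Per-environment losses and gradients of the squared-error loss.
        losses = np.array([np.mean((y - X @ beta) ** 2)
                           for X, y in zip(Xs, ys)])
        grads = [(-2.0 / len(y)) * X.T @ (y - X @ beta)
                 for X, y in zip(Xs, ys)]
        # Descent step on beta using the (possibly signed) weighted gradient.
        beta -= lr_beta * sum(we * ge for we, ge in zip(w, grads))
        # Ascent step on w, projected back onto the shifted simplex.
        w = project_shifted_simplex(w + lr_w * losses, gamma)
    return beta, w
```

With gamma = 0, the constraint set reduces to the usual simplex and this collapses to plain group DRO; larger gamma enlarges the feasible set of signed weights, which is what makes the inner maximization able to penalize non-invariant directions at the cost of convexity.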

Keywords

Causal invariance learning

Nonconvex optimization

Computationally efficient causal discovery

Multi-source data 

Main Sponsor

Section on Statistical Learning and Data Science