Trading off multiple risks for predictive algorithms with confidence

Reese Pathak Co-Author
University of California, Berkeley
 
Anastasios Angelopoulos Co-Author
University of California, Berkeley
 
Stephen Bates Co-Author
Stanford University
 
Michael Jordan Co-Author
University of California, Berkeley
 
Andrew Nguyen First Author
University of California, Berkeley
 
Andrew Nguyen Presenting Author
University of California, Berkeley
 
Thursday, Aug 8: 9:50 AM - 10:05 AM
3231 
Contributed Papers 
Oregon Convention Center 
Decision-making pipelines involve trading off risks against rewards.
It is often desirable to determine how much risk can be tolerated using the collected data itself, that is, to set the risk level based on measured quantities.
In this work, we address this problem and allow decision-makers to control risks at a data-dependent level.
We demonstrate that, when applied without modification, state-of-the-art uncertainty quantification methods can lead to gross violations of risk control on real problem instances when the levels are data-dependent.
As a remedy, we propose methods that permit the data analyst to claim valid control.
Our methodology, which is based on uniform tail bounds, supports monotone and nearly monotone risks, but otherwise makes no distributional assumptions.
To illustrate the benefits of our approach, we carry out numerical experiments on synthetic data and the large-scale vision dataset MS-COCO.
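As a rough illustration of the flavor of this approach (a sketch, not the paper's exact procedure), the example below uses a DKW-style uniform tail bound to upper-bound a monotone 0/1 risk curve simultaneously over all thresholds; because the bound holds uniformly, a threshold can be validly selected against a risk level that is itself chosen after looking at the data. The function names, the 0/1 loss, and the choice of the DKW bound are illustrative assumptions.

# Hypothetical sketch: simultaneous (uniform) control of a monotone risk curve.
# The DKW bound is used here because, for 0/1 losses of the form 1{score > lambda},
# the risk curve is one minus an empirical CDF; other monotone risks would need a
# different uniform bound.
import numpy as np

def uniform_risk_bound(losses, delta=0.1):
    """Upper confidence bound on R(lambda) = E[loss(lambda)], valid simultaneously
    over all thresholds with probability at least 1 - delta.

    losses: (n, m) binary array; losses[i, j] is the loss of point i at the j-th
    threshold, assumed non-increasing in the threshold index."""
    n = losses.shape[0]
    empirical_risk = losses.mean(axis=0)            # R_hat at each threshold
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n))  # DKW-style uniform margin
    return empirical_risk + eps

def choose_threshold(losses, lambdas, target_level, delta=0.1):
    """Smallest threshold whose *uniform* upper bound falls below the target risk
    level. Since the bound holds simultaneously over thresholds, plugging in a
    data-dependent target level does not invalidate the guarantee."""
    upper = uniform_risk_bound(losses, delta)
    valid = np.where(upper <= target_level)[0]
    return lambdas[valid[0]] if len(valid) else None

# Toy usage on synthetic data with a monotone loss.
rng = np.random.default_rng(0)
lambdas = np.linspace(0.0, 1.0, 101)
scores = rng.uniform(size=500)
losses = (scores[:, None] > lambdas[None, :]).astype(float)  # non-increasing in lambda
alpha_hat = 0.2  # e.g., a risk level chosen after inspecting the data
print(choose_threshold(losses, lambdas, alpha_hat))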

Keywords

Conformal prediction, uncertainty quantification, distribution-free, bootstrap, empirical process theory, simultaneous inference 

Main Sponsor

IMS