Targeted Learning Inference for Variable Importance and Fairness in Machine Learning
Thursday, Aug 7: 9:00 AM - 9:25 AM
Invited Paper Session
Music City Center
This talk develops inference methods for the consequences of Machine Learning models. While ML models were originally developed as purely predictive tools, there has been increasing interest in inspecting them as a means of understanding the relationships they uncover and the consequences of deploying them in the real world. These questions have been addressed through feature attribution methods and fairness assessments for specific models; however, neither provides uncertainty quantification about the corresponding aspects of the data-generating process.
In this talk we show that tools from targeted machine learning estimation (TMLE) are naturally adaptable to these problems, and that doing so reveals the regularity of the proposed target. The development of these tools also illuminates the sources of uncertainty for these targets, allowing a discussion of which sources need to be accounted for in any given application.
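To make the idea concrete, the following is a minimal sketch (not the speaker's code) of how influence-function-based inference attaches uncertainty to a variable importance target. It estimates the importance of a binary feature A as E[E[Y|A=1,X] - E[Y|A=0,X]] with a one-step (AIPW) estimator, a close cousin of TMLE that omits the targeting/fluctuation step; the simulated data, learners, and variable names are illustrative assumptions only.

```python
# Hedged sketch: one-step (AIPW) estimate of a feature-importance target
# with an influence-function-based confidence interval. A full TMLE would
# add a targeting (fluctuation) step on the outcome fit.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))        # binary feature of interest
Y = X[:, 0] + 0.5 * A + rng.normal(size=n)             # simulated outcome

# Nuisance fits: any ML learners could be plugged in here.
outcome_model = GradientBoostingRegressor().fit(np.column_stack([A, X]), Y)
propensity_model = GradientBoostingClassifier().fit(X, A)

Q1 = outcome_model.predict(np.column_stack([np.ones(n), X]))   # E[Y | A=1, X]
Q0 = outcome_model.predict(np.column_stack([np.zeros(n), X]))  # E[Y | A=0, X]
g = np.clip(propensity_model.predict_proba(X)[:, 1], 0.01, 0.99)

# Efficient influence function for the target parameter.
eif = Q1 - Q0 + A / g * (Y - Q1) - (1 - A) / (1 - g) * (Y - Q0)
psi_hat = eif.mean()
se = eif.std(ddof=1) / np.sqrt(n)
print(f"importance estimate: {psi_hat:.3f} +/- {1.96 * se:.3f}")
```

The point of the sketch is that the influence function supplies both the bias correction for the plug-in ML fits and the standard error, which is the source of the uncertainty quantification discussed in the talk.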
Interpretable Machine Learning
targeted learning
feature attribution
fairness
xAI