An intersectional framework for assessing counterfactual fairness in risk prediction

Conference: International Conference on Health Policy Statistics 2023
01/11/2023: 10:45 AM - 11:00 AM MST
Contributed 

Description

Along with the increasing availability of health data has come the rise of data-driven models that guide health policy by predicting patient outcomes. Such risk prediction models assess patients' likelihood of adverse events and thereby inform interventions. These models have the potential to harness health data to benefit both patients and health care providers. However, they also have the potential to entrench or exacerbate health inequities.

Our work proposes a set of statistical tools for assessing the fairness of risk prediction models in a manner relevant to health policy. To our knowledge, our work is the first to develop tools within the counterfactual fairness framework while accounting for multiple, intersecting protected characteristics. Risk prediction models are widely used to guide patient care, and policy decisions such as recent efforts to reduce hospital readmissions have driven even wider adoption of such models. Fairness assessment is thus a crucial component of the pipeline from health data to policy, as it helps ensure health data is used in ways that promote equity and center patient outcomes.

As risk prediction models have proliferated, so have techniques for identifying and correcting bias in these models. Broadly constituting the field of "algorithmic fairness", these techniques typically compare some measure of model performance, such as an error rate, across groups defined by a social characteristic like race or gender. Our work addresses two aspects that remain under-explored in the algorithmic fairness literature and unites them in a contribution of particular relevance to health policy.
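As a concrete illustration of this standard group-wise approach (our own sketch, not code from any particular fairness toolkit), the snippet below compares a model's false-positive rate across levels of a single protected characteristic; the data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd

def fpr_by_group(df, score, label, group, threshold=0.5):
    """False-positive rate within each level of one protected
    characteristic: P(score > threshold | label = 0, group = g)."""
    negatives = df[df[label] == 0]
    return (negatives[score] > threshold).groupby(negatives[group]).mean()

# Hypothetical data: risk scores, observed outcomes, one characteristic.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "risk_score": rng.uniform(size=1000),
    "outcome": rng.integers(0, 2, size=1000),
    "gender": rng.choice(["men", "women"], size=1000),
})
print(fpr_by_group(df, "risk_score", "outcome", "gender"))
```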

First, most algorithmic fairness work focuses on a single characteristic along which discrimination may occur, for example comparing performance for men vs. women. This simplification fails to account for the fact that discrimination comes in many forms that interact in context-dependent ways. For example, during the COVID-19 pandemic, risk prediction models were used to guide decisions such as prioritization of monoclonal antibody treatments. It is well known that older patients and those from racially minoritized groups face greater risk from COVID-19. However, the effect of age on risk also differs across racial groups. Fairness assessments must therefore consider not just age and race separately, but also how these characteristics interact. The definitions we propose are among the few fairness techniques that account for multiple, intersecting protected characteristics.
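The group-wise computation above extends to intersectional assessment by stratifying on the cross-classification of characteristics rather than on each one separately, so that interactions such as the age-by-race pattern become visible. A minimal sketch, again with hypothetical data and names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "risk_score": rng.uniform(size=5000),
    "outcome": rng.integers(0, 2, size=5000),
    "race": rng.choice(["A", "B", "C"], size=5000),
    "age_group": rng.choice(["<50", "50-70", ">70"], size=5000),
})

# False-positive rate within each intersectional subgroup:
# P(score > threshold | outcome = 0, race = r, age group = a).
negatives = df[df["outcome"] == 0]
fpr = (negatives["risk_score"] > 0.5).groupby(
    [negatives["race"], negatives["age_group"]]).mean()
print(fpr.unstack())  # races as rows, age groups as columns
```

A practical caveat: intersectional subgroups can be small, which makes such estimates noisy and motivates careful statistical treatment.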

The second under-explored aspect of algorithmic fairness is the fact that in policy contexts, risk predictions are typically used to guide interventions, which in turn alter the very outcomes the models predict. Recent algorithmic fairness work has demonstrated that when decisions are made on the basis of risk scores, unique types of unfairness can result. The authors of this work propose the counterfactual fairness framework to identify and mitigate such biases. However, the framework was designed for non-medical contexts and, as mentioned above, does not account for multiple, intersecting characteristics.
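To make this concrete, a common potential-outcomes formulation defines fairness criteria with respect to the outcome a patient would experience absent intervention, often written Y^0, rather than the observed outcome. The sketch below (our illustration, not the framework authors' code) estimates a counterfactual false-positive rate, P(score > threshold | Y^0 = 0, subgroup), by reweighting untreated patients; it rests on the strong assumption that treatment assignment is unconfounded given measured covariates, and all names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def counterfactual_fpr(df, score, outcome, treated, covariates,
                       groups, threshold=0.5):
    """Estimate P(score > threshold | Y^0 = 0, subgroup), the
    false-positive rate with respect to the untreated potential
    outcome, within each (possibly intersectional) subgroup.

    Assumes no unmeasured confounding: untreated patients, reweighted
    by the inverse probability of remaining untreated, stand in for
    the full population's untreated outcomes."""
    # Propensity of receiving the intervention given covariates.
    ps = LogisticRegression(max_iter=1000).fit(df[covariates], df[treated])
    p_treat = ps.predict_proba(df[covariates])[:, 1]

    # Untreated patients' observed outcomes equal Y^0; keep those with
    # Y^0 = 0, the population over which a false positive is defined.
    mask = (df[treated] == 0) & (df[outcome] == 0)
    sub = df.loc[mask, groups].copy()
    sub["flagged"] = (df.loc[mask, score] > threshold).astype(float)
    sub["weight"] = 1.0 / (1.0 - p_treat[mask.to_numpy()])

    # Weighted share of untreated negatives flagged high-risk, per subgroup.
    return sub.groupby(groups).apply(
        lambda g: np.average(g["flagged"], weights=g["weight"]))

# Hypothetical call: intersectional, counterfactual false-positive rates.
# counterfactual_fpr(df, score="risk_score", outcome="outcome",
#                    treated="treated", covariates=["age", "n_comorbidities"],
#                    groups=["race", "age_group"])
```

Passing a list of grouping columns, as in the commented call, yields the intersectional version of the counterfactual criterion.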

We propose tools for intersectional, counterfactual fairness measurement designed with particular attention to clinical risk prediction models and health policy contexts. We demonstrate the use of our methods on a COVID-19 risk prediction model used by a major health system. Our fairness measures can be deployed by health systems to evaluate any risk model, giving our work potentially broad implications for the development and implementation of data-driven health policy.

Keywords

Algorithmic fairness

Clinical risk prediction models

Intersectionality

Causal inference

Electronic health record data

COVID-19 risk prediction 

Presenting Author

Solvejg Wastvedt

First Author

Solvejg Wastvedt

Co-Author(s)

Jared Huling, University of Minnesota
Julian Wolfson, University of Minnesota