Uncharted Methods in Health Equity Research

Chair: Sahar Zangeneh, RTI International
Organizer: Chen-Pin Wang, UT Health Science Center San Antonio
 
Monday, Aug 5: 10:30 AM - 12:20 PM
Session 1802: Topic-Contributed Paper Session
Oregon Convention Center, Room CC-B110

Applied: Yes

Main Sponsor

Health Policy Statistics Section

Co-Sponsors

Caucus for Women in Statistics
Justice, Equity, Diversity, and Inclusion Outreach Group
Mental Health Statistics Section

Presentations

Mixtures-of-Network Modeling to Improve Measurement of Structural Racism in Disparity Research

Racial inequities in health outcomes in the United States are actionable. There is an urgent need for studies to investigate and then address them. At the same time, the use of "race" as an explanatory variable in such studies is in question, considering that it is a social rather than biological construct. Accordingly, the field has moved toward studying the primary factor that underlies racial differences: structural racism. Capturing exposure to structural racism as a construct is vital, both for explanatory purposes and as a target for intervention. Structural racism domains have been elucidated in the literature, making the measurement task partly amenable to latent variable modeling. This paper argues, however, that additional considerations are needed, including mutual reinforcement between domains and contextual specificity by place and by time in the life course. We propose models unifying mixtures and network structure to accommodate these features. Methods are illustrated using publicly available data on structural factors spanning the US and multiple time periods. The proposed methods aim to equip researchers with improved measures to elucidate and address health disparities.
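
As one hedged illustration of combining latent mixtures with network structure, the sketch below uses a simple two-step stand-in for the unified model described above: areas are first clustered into latent profiles, and a sparse conditional-dependence network among domain indicators is then estimated within each profile. The data, domain labels, and the two-step shortcut are assumptions for illustration only, not the authors' method.

```python
# A minimal two-step sketch of a "mixture of networks" idea. NOT the unified model
# proposed in the talk; all data and variable meanings are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.covariance import GraphicalLassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical area-by-domain matrix: rows are places, columns are indicators for
# domains such as housing, credit, education, and criminal justice.
X = rng.normal(size=(500, 6))
X[:, 1] += 0.5 * X[:, 0]            # induce one dependence so the sketch has an edge
X_std = StandardScaler().fit_transform(X)

# Step 1: latent profiles of places (the "mixture" part).
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X_std)

# Step 2: a sparse network of conditional dependencies per profile (the "network" part).
for k in range(gmm.n_components):
    members = X_std[labels == k]
    glasso = GraphicalLassoCV().fit(members)
    # Nonzero off-diagonal precision entries suggest mutually reinforcing domains.
    n_edges = np.count_nonzero(np.triu(glasso.precision_, k=1))
    print(f"profile {k}: n={len(members)}, edges={n_edges}")
```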

Speaker

Karen Bandeen-Roche, Johns Hopkins University

Estimating Geographic Variation and Disparities in Disease Prevalence using the National Health Interview Survey

The United States continues to experience substantial demographic disparities in health and well-being. To inform more equitable delivery of health services and achieve health equity, there is a need for reliable estimates of disease prevalence and other health-related factors for various geo-demographic groups. Sample surveys are central to population-based estimation of disease prevalence. However, national surveys are often designed to achieve adequate precision at the national level and may lack the sample size for more granular estimates. Obtaining more granular statistics relies heavily on models and auxiliary data, which may vary in timeliness, availability, and quality. We will describe challenges in obtaining timely state-level estimates of disease prevalence for various demographic groups using data from the National Health Interview Survey. We will outline the steps involved in selecting predictors; determining the appropriate model and software; and producing, validating, and reporting the estimates. Model-based estimates will be compared with those obtained through the direct design-based method, as well as with estimates from the Behavioral Risk Factor Surveillance System when available.
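
The contrast between direct design-based estimates and model-based estimates can be illustrated with a Fay-Herriot-style area-level model, a standard small area estimation tool. The sketch below uses simulated state-level data, a single auxiliary covariate, and a simple non-iterated moment estimator of the between-area variance; it is an illustration of the general approach, not the NHIS production workflow.

```python
# A minimal Fay-Herriot-style sketch: shrink noisy direct estimates toward a
# covariate-based prediction. All data here are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(42)
m = 51                                                   # states plus DC
x = np.column_stack([np.ones(m), rng.normal(size=m)])    # auxiliary covariate(s)
beta_true, sigma2_v = np.array([0.10, 0.03]), 0.0004
theta = x @ beta_true + rng.normal(scale=np.sqrt(sigma2_v), size=m)  # true prevalences
D = rng.uniform(0.0005, 0.003, size=m)                   # design variances
direct = theta + rng.normal(scale=np.sqrt(D))            # direct, design-based estimates

# Fit the linking model; simple moment estimator of the between-state variance.
beta_hat, *_ = np.linalg.lstsq(x, direct, rcond=None)
resid = direct - x @ beta_hat
h = np.einsum("ij,jk,ik->i", x, np.linalg.inv(x.T @ x), x)   # leverages
sigma2_hat = max(((resid ** 2).sum() - (D * (1 - h)).sum()) / (m - x.shape[1]), 0.0)

# EBLUP: shrink each direct estimate toward the model prediction.
gamma = sigma2_hat / (sigma2_hat + D)
eblup = gamma * direct + (1 - gamma) * (x @ beta_hat)

print("mean abs error, direct     :", np.abs(direct - theta).mean().round(4))
print("mean abs error, model-based:", np.abs(eblup - theta).mean().round(4))
```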

Co-Author(s)

Lauren Rossen, National Center for Health Statistics
Matthew Williams, RTI International
Sahar Zangeneh, RTI International

Speaker

Stephanie Zimmer, RTI International

Principal Disparity Estimators That Mitigate Measurement Biases

We consider methods to assess the counterfactual disparity of an endpoint outcome conditioned on principal strata of an intermediate variable that is prone to measurement error and, consequently, to misclassification of the principal strata. The proposed method incorporates fairness algorithms (for risk adjustment) with the Bolck-Croon-Hagenaars method to mitigate measurement bias. We consider 1-step and 3-step error-less ML estimators to derive 'principal disparity' under respective data-guided identification assumptions. Efficiency, consistency, and utility of the proposed estimators are compared.
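
For readers unfamiliar with the Bolck-Croon-Hagenaars (BCH) step referenced above, the sketch below shows the basic 3-step correction on simulated data: posterior class probabilities, a classification-error matrix for the modal assignment, and inverse-error (BCH) weights for class-specific outcome means. The two-class simulation is a placeholder; the fairness-based risk adjustment and principal-stratum structure of the actual proposal are not reproduced here.

```python
# A minimal numpy sketch of BCH-style correction for misclassified latent classes.
# Simulated placeholder data only; not the speaker's estimator.
import numpy as np

rng = np.random.default_rng(1)
n, K = 5000, 2
true_class = rng.integers(0, K, size=n)          # latent stratum membership
mu = np.array([-1.0, 1.0])
z = rng.normal(loc=mu[true_class], scale=1.0)    # noisy indicator of the latent class
y = 1.0 * true_class + rng.normal(size=n)        # outcome: true class means are 0 and 1

# Step 1: posterior membership probabilities (exact here, Bayes' rule, equal priors).
lik = np.exp(-0.5 * (z[:, None] - mu) ** 2)
post = lik / lik.sum(axis=1, keepdims=True)

# Step 2: modal assignment and classification-error matrix D[s, t] = P(W=t | X=s).
w = post.argmax(axis=1)
D = np.array([[(post[:, s] * (w == t)).sum() / post[:, s].sum() for t in range(K)]
              for s in range(K)])

# Step 3: BCH weights H = D^{-1}; unit i contributes to class s with weight H[w_i, s].
H = np.linalg.inv(D)
naive = np.array([y[w == s].mean() for s in range(K)])
bch = np.array([(H[w, s] * y).sum() / H[w, s].sum() for s in range(K)])

print("naive class means:    ", naive.round(3))   # attenuated toward each other
print("BCH-corrected means:  ", bch.round(3))     # close to the true 0 and 1
```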

Speaker

Chen-Pin Wang, UT Health Science Center San Antonio

Advancing Algorithmic Fairness: A Statistical Learning Approach with Fairness Constraints

Statistical machine learning algorithms, crucial in sectors like hiring, finance, and healthcare, risk reinforcing societal biases based on gender, race, and religion, among other attributes. To combat this, it is vital to design models that adhere to fairness norms. This involves embedding fairness constraints such as 'equal opportunity' [Hardt et al., 2016], which ensures uniform true positive rates across groups, and 'path-specific counterfactual fairness' [Nabi and Shpitser, 2018, Nabi et al., 2019], which restricts the effect of the sensitive feature on the outcome along certain user-specified mediating pathways. Without favoring a specific fairness criterion, we propose a general framework for deriving optimal prediction functions under various constraints. It conceptualizes the learning problem as estimating a constrained functional parameter within a comprehensive statistical model, using a Lagrange-type penalty. Key contributions of our work include a flexible framework for solving constrained optimization problems, closed-form solutions for specific fairness constraints, and an algorithm-neutral approach to fair learning.
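
To make the Lagrange-type penalty idea concrete, the sketch below trains a logistic model with a smooth surrogate of the 'equal opportunity' constraint: a penalty on the gap in average scores between groups among observations with a positive outcome. The simulated data, penalty weight lam, and gradient-descent settings are illustrative assumptions; this is not the framework's closed-form solution.

```python
# A minimal sketch of fairness-constrained learning via a Lagrange-type penalty.
# Hypothetical data; `lam` trades off accuracy against the (soft) TPR gap.
import numpy as np

rng = np.random.default_rng(7)
n = 4000
a = rng.integers(0, 2, size=n)                       # sensitive attribute
x = np.column_stack([rng.normal(size=n) + 0.8 * a,   # feature correlated with the group
                     rng.normal(size=n),
                     np.ones(n)])                     # intercept
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))).astype(float)

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def fit(lam, lr=0.1, iters=3000):
    w = np.zeros(x.shape[1])
    g0, g1 = (a == 0) & (y == 1), (a == 1) & (y == 1)    # true positives by group
    for _ in range(iters):
        p = sigmoid(x @ w)
        grad_ll = x.T @ (p - y) / n                      # cross-entropy gradient
        gap = p[g0].mean() - p[g1].mean()                # soft equal-opportunity gap
        dgap = (x[g0] * (p[g0] * (1 - p[g0]))[:, None]).mean(axis=0) \
             - (x[g1] * (p[g1] * (1 - p[g1]))[:, None]).mean(axis=0)
        w -= lr * (grad_ll + lam * 2 * gap * dgap)       # penalized gradient step
    p = sigmoid(x @ w)
    return p[g0].mean() - p[g1].mean()

print("score gap among positives, unconstrained:", round(fit(lam=0.0), 3))
print("score gap among positives, penalized:    ", round(fit(lam=5.0), 3))
```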

Speaker

Razieh Nabi, Emory University, Rollins School of Public Health

The Measure and Mismeasure of Fairness

The field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last decade, several formal, mathematical definitions of fairness have gained prominence. Here we first assemble and categorize these definitions into two broad families: (1) those that constrain the effects of decisions on disparities; and (2) those that constrain the effects of legally protected characteristics, like race and gender, on decisions. We then show, analytically and empirically, that both families of definitions typically result in strongly Pareto dominated decision policies. For example, in the case of college admissions, adhering to popular formal conceptions of fairness would simultaneously result in lower student-body diversity and a less academically prepared class. In this sense, requiring that these fairness definitions hold can, perversely, harm the very groups they were designed to protect. In contrast to axiomatic notions of fairness, we argue that the equitable design of algorithms requires grappling with their context-specific consequences, akin to the equitable design of policy. 

Speaker

Johann Gaebler, Stanford University