Reinforcement Learning for Respondent Driven Sampling

Alexander Volfovsky, Speaker
Duke University

Sunday, Aug 3: 2:45 PM - 3:05 PM
Topic-Contributed Paper Session
Music City Center
Populations in the greatest need of health interventions are often the hardest to reach with conventional health policies. From people who are unhoused to people who inject drugs to people who are undocumented, establishing reliable methods of accessing and assisting marginalized communities is important for achieving holistic public health objectives. Respondent-driven sampling (RDS) is a network sampling method widely used to study hidden or hard-to-reach populations by incentivizing study participants to recruit their social connections. We present reinforcement learning (RL) for respondent-driven sampling, an adaptive RDS study design in which incentives are tailored over time to maximize cumulative utility during the study. We show that these designs are more efficient and cost-effective, and that they can generate new insights into the social structure of hidden populations. In addition, we develop methods for valid post-study inference, which is complicated by the adaptive sampling induced by RL as well as by the complex dependencies among subjects due to latent (unobserved) social network structure.
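To make the idea of adaptively tailored incentives concrete, the following is a minimal illustrative sketch, not the authors' method: an epsilon-greedy bandit that picks an incentive level each recruitment wave and updates a running estimate of recruitment yield per level. The incentive amounts, simulated yields, and all parameter values are hypothetical.

```python
import random

def epsilon_greedy_incentives(n_waves=200, incentives=(5, 10, 20),
                              epsilon=0.1, seed=0):
    """Toy adaptive design: each wave, choose an incentive level
    (explore with prob. epsilon, otherwise exploit the current best),
    observe a simulated recruitment yield, and update value estimates."""
    rng = random.Random(seed)
    counts = {a: 0 for a in incentives}
    values = {a: 0.0 for a in incentives}
    # Hypothetical true mean recruits per coupon at each incentive level.
    true_yield = {5: 0.4, 10: 0.9, 20: 1.1}
    total_recruits = 0.0
    for _ in range(n_waves):
        if rng.random() < epsilon:
            a = rng.choice(incentives)                     # explore
        else:
            a = max(incentives, key=lambda x: values[x])   # exploit
        reward = rng.gauss(true_yield[a], 0.3)  # simulated wave yield
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]      # running mean
        total_recruits += reward
    return values, total_recruits

values, total = epsilon_greedy_incentives()
```

In a real RDS study the "reward" would be an observed quantity such as recruits returned per coupon, and the adaptive allocation is exactly what complicates post-study inference, since later observations depend on earlier incentive choices.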

Keywords

Reinforcement Learning

Respondent Driven Sampling