Reinforcement Learning for Respondent-Driven Sampling

Angela Yoon, Co-Author
Duke University

Yichi Zhang, Co-Author
Department of Computer Science and Statistics, University of Rhode Island

Alexander Volfovsky, Co-Author
Duke University

Eric Laber, Co-Author

Justin Weltz, Speaker
Santa Fe Institute

Tuesday, Aug 5: 8:35 AM - 8:55 AM
Topic-Contributed Paper Session 
Music City Center 
Respondent-driven sampling (RDS) is widely used to study hidden or hard-to-reach populations by incentivizing study participants to recruit their social connections. The success and efficiency of RDS can depend critically on the nature of the incentives, including their number, value, and call to action. Standard RDS uses an incentive structure that is set a priori and held fixed throughout the study, and thus does not make use of accumulating information on which incentives are effective and for whom. We propose a reinforcement learning (RL) based adaptive RDS study design in which the incentives are tailored over time to maximize cumulative utility during the study. We show that these designs are more efficient and cost-effective and can generate new insights into the social structure of hidden populations. In addition, we develop methods for valid post-study inference, which are non-trivial due to the adaptive sampling induced by RL and the complex dependencies among subjects arising from latent (unobserved) social network structure. We provide asymptotic regret bounds for the proposed design and illustrate its finite-sample behavior through a suite of simulation experiments.
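
The abstract does not specify the algorithm, but as a purely illustrative sketch of the idea of tailoring incentives from accumulating data, one simple adaptive scheme is a Thompson-sampling bandit over candidate incentive levels, where the reward is whether a distributed coupon produces a new recruit. The incentive levels, the Beta-Bernoulli reward model, and all names below are assumptions for illustration only, not the authors' method, which additionally addresses network dependence and post-study inference.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical candidate incentive levels (dollars per coupon).
    incentives = [10, 20, 40]

    # Beta-Bernoulli Thompson sampling: track (successes, failures) per incentive,
    # where "success" means the coupon is redeemed by a new recruit.
    alpha = np.ones(len(incentives))
    beta = np.ones(len(incentives))

    def choose_incentive():
        """Sample a recruitment probability per incentive; pick the largest draw."""
        draws = rng.beta(alpha, beta)
        return int(np.argmax(draws))

    def update(arm, recruited):
        """Update posterior counts after observing whether the coupon was redeemed."""
        if recruited:
            alpha[arm] += 1
        else:
            beta[arm] += 1

    # Simulated study: each wave, assign an incentive to every outgoing coupon and
    # observe whether it yields a recruit (true rates are unknown to the sampler).
    true_rates = {10: 0.15, 20: 0.30, 40: 0.35}
    for wave in range(20):
        for coupon in range(10):
            arm = choose_incentive()
            recruited = rng.random() < true_rates[incentives[arm]]
            update(arm, recruited)

    print("Posterior mean recruitment rate per incentive:",
          dict(zip(incentives, np.round(alpha / (alpha + beta), 2))))

A full design of the kind described above would also weigh recruitment gains against incentive cost (cumulative utility rather than raw recruitment) and account for dependence induced by the latent social network; this sketch ignores both.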

Keywords

Respondent-driven sampling, Reinforcement learning, Branching processes