Generator-Mediated Bandits: Thompson Sampling for GenAI-Powered Adaptive Interventions

Marc Brooks, First Author and Presenting Author

Gabriel Durham, Co-Author
University of Michigan

Kihyuk Hong, Co-Author
University of Michigan
 
Sunday, Aug 3: 2:35 PM - 2:50 PM
1476 
Contributed Papers 
Music City Center 
Recent advances in generative artificial intelligence (GenAI) models have enabled the generation of personalized content that adapts to up-to-date user context. While personalized decision systems are often modeled using bandit formulations, the integration of GenAI introduces new structure into otherwise classical sequential learning problems. In GenAI-powered interventions, the agent selects a query, but the environment experiences a stochastic response drawn from the generative model. Standard bandit methods do not explicitly account for this structure, where actions influence rewards only through stochastic, observed treatments. We introduce generator-mediated bandit-Thompson sampling (GAMBITTS), a bandit approach designed for this action/treatment split, using mobile health interventions with large language model-generated text as a motivating case study. GAMBITTS explicitly models both the treatment and reward generation processes, using information in the delivered treatment to accelerate policy learning relative to standard methods. We establish regret bounds for GAMBITTS by decomposing sources of uncertainty in treatment and reward, identifying conditions where it achieves stronger guarantees than standard bandit approaches. In simulation studies, GAMBITTS consistently outperforms conventional algorithms by leveraging observed treatments to more accurately estimate expected rewards.
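To make the action/treatment split concrete, the sketch below implements one plausible version of a generator-mediated Thompson sampling loop: the agent picks a query, the generator draws a stochastic treatment, and the reward depends on the action only through the observed treatment's features. This is an illustrative assumption-laden sketch, not the paper's GAMBITTS implementation: the Gaussian treatment and reward models, the feature representation, and all names (draw_treatment, phi_hat, and so on) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 candidate queries (prompts), d-dimensional treatment
# features. The generator draws a stochastic treatment (e.g., LLM text) per
# query; the agent only observes its featurization phi(T).
n_actions, d, n_rounds = 3, 4, 2000
mu_treat = rng.normal(size=(n_actions, d))   # mean treatment features per query
theta_star = rng.normal(size=d)              # true reward coefficients on phi(T)

def draw_treatment(a):
    """Stochastic treatment features phi(T), with T ~ generator(. | query a)."""
    return mu_treat[a] + 0.5 * rng.normal(size=d)

def draw_reward(phi):
    """Reward depends on the action only through the delivered treatment."""
    return float(phi @ theta_star + 0.1 * rng.normal())

# Generator-mediated Thompson sampling (sketch).
# Reward model: Bayesian linear regression of R on phi(T), shared across
# actions, so every observed treatment informs every action's value.
# Treatment model: running mean of phi(T) per action.
V = np.eye(d)                    # posterior precision (prior N(0, I))
b = np.zeros(d)                  # precision-weighted mean accumulator
phi_hat = np.zeros((n_actions, d))
counts = np.zeros(n_actions)

for t in range(n_rounds):
    # Sample reward coefficients from the posterior N(V^{-1} b, V^{-1}).
    V_inv = np.linalg.inv(V)
    theta_tilde = rng.multivariate_normal(V_inv @ b, V_inv)
    # Score each query by its estimated mean treatment under the sampled
    # coefficients; force one pull of any unexplored query.
    scores = np.where(counts > 0, phi_hat @ theta_tilde, np.inf)
    a = int(np.argmax(scores))
    phi = draw_treatment(a)      # observed treatment (the mediator)
    r = draw_reward(phi)
    # Update the reward posterior with the observed treatment, not the
    # action index; this is what lets observed treatments accelerate learning.
    V += np.outer(phi, phi)
    b += r * phi
    # Update the treatment model for the chosen query.
    counts[a] += 1
    phi_hat[a] += (phi - phi_hat[a]) / counts[a]
```

Conditioning the reward model on the delivered treatment rather than the chosen action is the point of the split: a standard bandit would learn one reward estimate per query, while the mediated version pools information across queries through the shared treatment features.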

Keywords

Thompson Sampling

Contextual Bandit

Just-in-Time Adaptive Interventions

Mobile Health (mHealth)

Reinforcement Learning

Large Language Models (LLMs) 

Main Sponsor

Section on Statistical Computing