Transferred Deep Q*-Learning for Offline Non-Stationary Reinforcement Learning

Elynn Chen, Speaker
 
Sunday, Aug 3: 2:55 PM - 3:20 PM
Invited Paper Session 
Music City Center 
In dynamic decision-making scenarios across business and healthcare, leveraging sample trajectories from diverse populations can significantly enhance reinforcement learning (RL) performance for specific target populations, especially when sample sizes are limited. Existing transfer learning methods, however, focus primarily on linear regression settings and are not directly applicable to reinforcement learning algorithms. This paper pioneers the study of transfer learning for dynamic decision scenarios modeled by non-stationary finite-horizon Markov decision processes, using neural networks as powerful function approximators within an adaptive learning algorithm. We demonstrate that naive sample pooling strategies, effective in regression settings, fail in Markov decision processes. To address this challenge, we introduce a novel {\it ``re-weighted targeting procedure''} to construct {\it ``transferable RL samples''} and propose {\it ``transfer deep $Q^*$-learning''}, enabling neural network approximation with theoretical guarantees. We assume that the reward functions are transferable and address both the setting in which the transition density ratios are transferable and the setting in which they are not. Our analytical techniques for neural network approximation and transition probability transfer have broader implications, extending to supervised transfer learning with neural networks and to domain-shift scenarios. Empirical experiments on both synthetic and real datasets corroborate the advantages of our method, showcasing its potential for improving decision-making through strategically constructed transferable RL samples in non-stationary reinforcement learning contexts.
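
For readers unfamiliar with the backward-inductive formulation that underlies $Q^*$-learning in non-stationary finite-horizon Markov decision processes, the following display is a generic sketch of the standard Bellman optimality recursion; the notation ($H$ for the horizon, $r_h$ for the stage reward, $P_h$ for the stage transition kernel) is assumed for illustration and is not taken from the paper itself.
\[
Q^*_{H+1}(s,a) \equiv 0, \qquad
Q^*_h(s,a) = r_h(s,a)
  + \mathbb{E}_{s' \sim P_h(\cdot \mid s,a)}\!\Big[\max_{a'} Q^*_{h+1}(s',a')\Big],
\qquad h = H, H-1, \ldots, 1.
\]
Estimating each $Q^*_h$ backward in $h$ with a neural network approximator is the backbone on which the transferable RL samples described above are used.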

Keywords

Finite-horizon Markov decision processes; Non-stationary; Backward inductive $Q^*$-learning; Transfer learning; Neural network approximation