Adaptive Multi-fidelity Optimization via Online EM with Applications to Digital Design Selection

Jiayang Sun Co-Author
George Mason University
 
Wei Dai First Author, Presenting Author
George Mason University
 
Monday, Aug 4: 8:45 AM - 8:50 AM
2521 
Contributed Speed 
Music City Center 
This work addresses the challenge of optimal resource allocation in digital experimentation, where computational budgets must be efficiently distributed across competing model configurations to identify the best-performing design.
Building on the multi-fidelity framework of Peng et al. (2019), which integrated low- and high-fidelity observations in ranking-and-selection procedures, we advance the methodology through online stochastic approximation techniques.
Our key innovation is an online variant of the Expectation-Maximization (EM) algorithm that incorporates stochastic approximation principles, enabling efficient parameter estimation for latent variable models in streaming settings.
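A standard stochastic-approximation formulation of online EM (the general pattern this description matches; the exact recursion used here is not stated in the abstract) maintains a running estimate of the complete-data sufficient statistics and re-solves the M-step after each observation:

```latex
s_t = (1 - \gamma_t)\, s_{t-1} + \gamma_t\, \bar{s}(y_t;\, \theta_{t-1}),
\qquad
\theta_t = \bar{\theta}(s_t),
```

where $\bar{s}(y_t; \theta_{t-1})$ is the conditional expectation of the complete-data sufficient statistic given the new observation (the E-step applied to $y_t$ alone), $\bar{\theta}$ is the M-step map from statistics to parameters, and the step sizes satisfy $\sum_t \gamma_t = \infty$ and $\sum_t \gamma_t^2 < \infty$, the usual conditions under which martingale arguments yield convergence.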
Unlike previous batch EM methods, our approach processes observations sequentially while maintaining theoretical convergence guarantees, established through rigorous martingale-based analysis.
We prove that the algorithm attains the same asymptotic efficiency as the maximum likelihood estimator while incurring substantially lower computational cost than batch EM.
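To make the recursion concrete, here is a minimal sketch of stochastic-approximation online EM for a one-dimensional Gaussian mixture, a standard latent-variable test case. This is an illustrative implementation of the generic technique, not the authors' method; the step-size schedule, initialization, and the helper `online_em_gmm` are all assumptions made for the example.

```python
import numpy as np

def online_em_gmm(stream, K=2, alpha=0.6, t0=10):
    """Illustrative online EM for a 1-D Gaussian mixture.

    Sufficient statistics are updated by stochastic approximation:
    s_t = (1 - gamma_t) * s_{t-1} + gamma_t * s_hat(y_t), with
    gamma_t = (t + t0)^(-alpha); the M-step then maps s_t to parameters.
    """
    # Illustrative initialization: equal weights, spread-out means, unit variances.
    w = np.full(K, 1.0 / K)
    mu = np.linspace(-1.0, 1.0, K)
    var = np.ones(K)
    # Running sufficient statistics: s0[k] = E[z_k], s1[k] = E[z_k y], s2[k] = E[z_k y^2].
    s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)
    for t, y in enumerate(stream, start=1):
        gamma = (t + t0) ** (-alpha)  # step size in (1/2, 1] decay regime
        # E-step on the single new observation: posterior responsibilities.
        logp = -0.5 * ((y - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(w)
        r = np.exp(logp - logp.max())
        r /= r.sum()
        # Stochastic-approximation update of the sufficient statistics.
        s0 = (1 - gamma) * s0 + gamma * r
        s1 = (1 - gamma) * s1 + gamma * r * y
        s2 = (1 - gamma) * s2 + gamma * r * y**2
        # M-step: parameters are a deterministic function of the statistics.
        w = s0 / s0.sum()
        mu = s1 / s0
        var = np.maximum(s2 / s0 - mu**2, 1e-6)  # floor avoids degenerate variance
    return w, mu, var
```

Each observation is processed once and discarded, so memory is O(K) regardless of stream length, which is the computational advantage over batch EM that the abstract alludes to.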

Keywords

Digital experimentation

ranking and selection

online learning 

Main Sponsor

Section on Statistical Learning and Data Science