Pitfalls and Remedies for Maximum Likelihood Estimation of Gaussian Processes
Ayumi Mutoh
First and Presenting Author
North Carolina State University
Wednesday, Aug 6: 9:35 AM - 9:50 AM
1223
Contributed Papers
Music City Center
Gaussian processes (GPs) are popular nonlinear regression models for expensive computer simulations. Yet GP performance relies heavily on estimation of unknown kernel hyperparameters. Maximum likelihood estimation (MLE) is the most common tool, but it can be plagued by numerical issues in small-data settings. Penalized likelihood methods attempt to overcome these optimization challenges, but their success depends on the choice of tuning parameter. Common approaches select the penalty weight by leave-one-out cross-validation (CV) on prediction error. Although straightforward, leave-one-out CV is computationally expensive and ignores the uncertainty quantification (UQ) provided by the GP. We propose a novel tuning-parameter selection scheme that combines k-fold CV with a score metric accounting for both GP accuracy and UQ. Additionally, we incorporate a one-standard-error rule to encourage smoother predictions in the face of limited data, which remedies flat-likelihood issues. Our proposed tuning-parameter selection for GPs matches the performance of standard MLE when no penalty is warranted, excels in settings where regularization is preferred, and outperforms the benchmark leave-one-out CV.
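The procedure in the abstract can be sketched in code: fit a penalized GP likelihood for each candidate penalty weight, score each weight by k-fold CV with the negative log predictive density (a score that rewards both accuracy and calibrated UQ), then apply a one-standard-error rule favoring heavier regularization. This is a minimal illustrative sketch, not the authors' implementation: the squared-exponential kernel, the ridge penalty on the log-lengthscale, the nugget value, and the lambda grid are all assumptions chosen for clarity.

```python
# Hedged sketch: penalized GP MLE with the penalty weight chosen by k-fold CV
# under a UQ-aware score, plus a one-standard-error rule. All modeling choices
# (kernel, penalty form, nugget, grid) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def sq_exp_kernel(X1, X2, lengthscale):
    """Squared-exponential kernel matrix with unit signal variance."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def penalized_nll(log_ls, X, y, lam, nugget=1e-4):
    """Negative log marginal likelihood plus an illustrative ridge penalty
    shrinking the log-lengthscale toward zero (i.e., lengthscale toward 1)."""
    ls = np.exp(log_ls)
    K = sq_exp_kernel(X, X, ls) + nugget * np.eye(len(X))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return 1e10  # ill-conditioned kernel: treat as very unlikely
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    nll = 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
    return nll + lam * log_ls ** 2

def fit_lengthscale(X, y, lam):
    """Penalized MLE of the lengthscale over a bounded log-scale search."""
    res = minimize_scalar(penalized_nll, bounds=(-2, 2),
                          args=(X, y, lam), method="bounded")
    return np.exp(res.x)

def neg_log_pred_density(X_tr, y_tr, X_te, y_te, ls, nugget=1e-4):
    """Held-out score that accounts for accuracy AND UQ: the average
    negative Gaussian log predictive density on the test fold."""
    K = sq_exp_kernel(X_tr, X_tr, ls) + nugget * np.eye(len(X_tr))
    Ks = sq_exp_kernel(X_te, X_tr, ls)
    mu = Ks @ np.linalg.solve(K, y_tr)
    var = 1.0 + nugget - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    var = np.maximum(var, 1e-10)
    return np.mean(0.5 * np.log(2 * np.pi * var) + 0.5 * (y_te - mu) ** 2 / var)

def select_lambda(X, y, lambdas, k=5, rng=None):
    """k-fold CV score for each lambda, then the one-standard-error rule:
    pick the LARGEST lambda within one SE of the best mean score, which
    favors smoother (more regularized) fits under limited data."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    means, ses = [], []
    for lam in lambdas:
        scores = []
        for f in folds:
            tr = np.setdiff1d(idx, f)
            ls = fit_lengthscale(X[tr], y[tr], lam)
            scores.append(neg_log_pred_density(X[tr], y[tr], X[f], y[f], ls))
        scores = np.array(scores)
        means.append(scores.mean())
        ses.append(scores.std(ddof=1) / np.sqrt(k))
    means, ses = np.array(means), np.array(ses)
    best = np.argmin(means)
    within_one_se = means <= means[best] + ses[best]
    return max(np.asarray(lambdas)[within_one_se])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0, 1, 25))
    y = np.sin(6 * X) + 0.01 * rng.standard_normal(25)
    lam = select_lambda(X, y, lambdas=[0.0, 0.1, 1.0], k=5, rng=1)
    print("selected lambda:", lam)
```

The one-standard-error rule is the same device used for lasso tuning: among statistically indistinguishable CV scores, it deliberately picks the most regularized model, which is what counters flat-likelihood pathologies in small-data GP fitting.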
Gaussian processes
Computer experiments
Penalized likelihood
Main Sponsor
Section on Physical and Engineering Sciences