Online Quantile Regression
Dong Xia
Co-Author
Hong Kong University of Science and Technology
Yinan Shen
First Author, Presenting Author
University of Southern California
Sunday, Aug 3: 3:20 PM - 3:35 PM
1329
Contributed Papers
Music City Center
This paper tackles the challenge of integrating sequentially arriving data within the quantile regression framework, where the number of covariates is allowed to grow with the number of observations, the horizon is unknown, and memory is limited. We employ stochastic sub-gradient descent to minimize the empirical check loss and study its statistical properties and regret performance. Our analysis unveils the delicate interplay between updating iterates based on individual observations versus batches of observations, revealing distinct regularity properties in each scenario. Our method ensures long-term optimal estimation irrespective of the chosen update strategy. Importantly, our contributions go beyond prior work by establishing exponential-type concentration inequalities and attaining optimal regret and error rates that exhibit only short-term sensitivity to initial errors. A key insight from our analysis is that appropriate stepsize schemes significantly mitigate the impact of initial errors on subsequent errors and regret, underscoring the robustness of stochastic sub-gradient descent in handling initial uncertainties.
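Since the abstract only sketches the procedure, the following is a minimal illustrative sketch of online stochastic sub-gradient descent on the check loss, supporting both single-observation and mini-batch updates. Everything here is an assumption for illustration: the function names (check_loss_subgrad, online_quantile_sgd), the polynomially decaying stepsize eta_t = c0 / t^power, the batch_size parameter, and the simulated data are not taken from the paper, whose exact stepsize schemes and analysis are more refined.

import numpy as np

def check_loss_subgrad(X, y, beta, tau):
    # Subgradient of the averaged check (pinball) loss rho_tau(r) = r * (tau - 1{r < 0})
    # with residuals r = y - X beta; the gradient in beta is -(tau - 1{r < 0}) * x.
    r = y - X @ beta
    # At r == 0 any value in [tau - 1, tau] is a valid subgradient; we use tau.
    g = tau - (r < 0).astype(float)
    return -(X * g[:, None]).mean(axis=0)

def online_quantile_sgd(stream, d, tau=0.5, batch_size=1, c0=1.0, power=0.5):
    # Online sub-gradient descent: `stream` yields (x, y) pairs one at a time;
    # an update is made once `batch_size` observations have been buffered,
    # using the decaying stepsize eta_t = c0 / t**power (illustrative choice).
    beta = np.zeros(d)
    buf_X, buf_y, t = [], [], 0
    for x, y in stream:
        buf_X.append(x)
        buf_y.append(y)
        if len(buf_X) == batch_size:
            t += 1
            eta = c0 / t ** power
            g = check_loss_subgrad(np.array(buf_X), np.array(buf_y), beta, tau)
            beta -= eta * g
            buf_X, buf_y = [], []
    # Any leftover partial batch at the end of the stream is discarded.
    return beta

if __name__ == "__main__":
    # Hypothetical demo: linear model with an intercept so the conditional
    # tau-quantile of y given x stays linear in x (the intercept absorbs the
    # tau-quantile of the standard normal noise).
    rng = np.random.default_rng(0)
    d, n, tau = 6, 50000, 0.9
    beta_star = rng.normal(size=d)

    def stream():
        for _ in range(n):
            x = np.concatenate(([1.0], rng.normal(size=d - 1)))
            yield x, x @ beta_star + rng.normal()

    beta_hat = online_quantile_sgd(stream(), d, tau=tau, batch_size=10)
    print(beta_hat)

Memory use is constant in the horizon: only the current iterate and at most one batch of observations are ever stored, which matches the limited-memory setting the abstract describes.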
online linear regression
quantile regression
nonsmooth optimization
sub-gradient descent
batch learning
Main Sponsor
Isolated Statisticians