Autotune: fast, efficient, and automatic tuning parameter selection
for LASSO
Sumanta Basu
Co-Author
Cornell University Department of Statistics and Data Science
Monday, Aug 4, 2:50 PM - 3:05 PM
2224
Contributed Papers
Music City Center
Tuning parameter selection for penalized regression methods such as the LASSO is an important practical issue, albeit one less explored in the statistical methodology literature. The most common choices are cross-validation (CV), which is computationally expensive, and information criteria such as AIC/BIC, which are known to perform poorly in high-dimensional settings. Guided by the asymptotic theory of the LASSO, which connects the choice of the tuning parameter λ to estimation of the error standard deviation σ, we propose autotune, an automatic tuning algorithm that alternately maximizes a penalized log-likelihood over the regression coefficients β and the nuisance parameter σ. The core insight behind autotune is that under exact or approximate sparsity, estimating the scalar nuisance parameter σ is often statistically and computationally easier than estimating the high-dimensional regression parameter β, leading to a gain in efficiency. Using simulated and real data sets, we show that autotune is faster than existing tuning strategies for the LASSO, and that it delivers superior estimation, variable selection, and prediction performance compared to those strategies as well as to alternatives such as the scaled LASSO.
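The alternating scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation: the update rule, initialization, and stopping tolerance below are assumptions, using the classical theory-driven choice λ = σ·sqrt(2 log p / n) to link the current noise estimate σ to the LASSO penalty, and re-estimating σ from the residuals of the resulting fit.

```python
import numpy as np
from sklearn.linear_model import Lasso

def autotune_sketch(X, y, n_iter=50, tol=1e-6):
    """Hypothetical sketch of an alternating beta/sigma scheme.

    Iterates between (1) fitting the LASSO at a penalty level driven by
    the current noise estimate sigma, and (2) re-estimating sigma from
    the residuals of that fit. Details are illustrative assumptions,
    not the autotune algorithm itself.
    """
    n, p = X.shape
    sigma = np.std(y)  # crude initial noise-level estimate
    coef = np.zeros(p)
    for _ in range(n_iter):
        # Theory-motivated penalty; matches sklearn's (1/2n)||y - Xb||^2
        # + alpha*||b||_1 objective scaling.
        alpha = sigma * np.sqrt(2.0 * np.log(p) / n)
        model = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
        coef = model.coef_
        resid = y - X @ coef
        new_sigma = np.sqrt(np.mean(resid ** 2))
        if abs(new_sigma - sigma) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    return coef, sigma
```

On data with a sparse signal, the iteration typically stabilizes in a handful of steps, since each σ update is a cheap scalar computation compared to refitting over a full λ grid as CV would require.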
Tuning
Biconvex Optimization
Linear Models
High Dimension
Cross Validation
Noise Variance
Main Sponsor
Section on Statistical Computing