Noether Distinguished Scholar Award

Michael Kosorok, Chair
University of North Carolina at Chapel Hill

Donna LaLonde, Organizer
American Statistical Association

Emily Fekete, Organizer
American Statistical Association
 
Wednesday, Aug 6: 10:30 AM - 12:20 PM
0491 
Invited Paper Session 
Music City Center 
Room: CC-Davidson Ballroom B 

Applied

Main Sponsor

Noether Awards Committee

Co-Sponsors

Awards Council

Presentations

2025 Noether Early Career Scholar Award - Can quantum algorithms bridge the statistical-computational gap in random combinatorial optimization?

Random combinatorial optimization problems often exhibit statistical-computational gaps in classical regimes. For instance, classical algorithms fail to achieve near-optimal objective values in general q-spin spin-glass models and require significantly higher signal-to-noise ratios to recover the planted signal in spiked-tensor models. An intriguing question is whether quantum algorithms could bridge such statistical-computational gaps. In this talk, we study the Quantum Approximate Optimization Algorithm (QAOA), a general-purpose quantum algorithm for combinatorial optimization, and analyze the performance of constant-depth QAOA on the aforementioned problems that exhibit classical statistical-computational gaps. Specifically, in q-spin spin-glass models, we characterize the energy levels achieved by QAOA, which are given by a set of saddle-point equations. In the spiked-tensor model, we calculate the asymptotic overlap between the QAOA state and the underlying signal, which exhibits an intriguing sine-Gaussian law. Despite these insights, our findings unfortunately reveal that QAOA at any constant depth does not surpass classical algorithms on these problems. This suggests that demonstrating a potential quantum advantage of QAOA requires an analysis beyond sub-polynomial algorithmic depth.
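For readers less familiar with the setup, the display below records the standard depth-p QAOA ansatz together with generic forms of the q-spin spin-glass objective and the spiked-tensor observation. These are common textbook formulations included only as background; they are not reproductions of the talk's specific models, and the saddle-point equations themselves are not shown here.

```latex
% Depth-p QAOA ansatz for a cost Hamiltonian H_C, with transverse-field mixer B:
\[
  \lvert \gamma, \beta \rangle
  \;=\; e^{-i\beta_p B}\, e^{-i\gamma_p H_C} \cdots e^{-i\beta_1 B}\, e^{-i\gamma_1 H_C}\,
        \lvert + \rangle^{\otimes n},
  \qquad B = \sum_{i=1}^{n} X_i .
\]
% Generic q-spin spin-glass objective with i.i.d. Gaussian couplings J:
\[
  H_C(\sigma) \;=\; \frac{1}{n^{(q-1)/2}}
  \sum_{i_1,\dots,i_q = 1}^{n} J_{i_1 \cdots i_q}\, \sigma_{i_1} \cdots \sigma_{i_q},
  \qquad \sigma \in \{-1,+1\}^n .
\]
% Spiked-tensor observation: planted signal v at signal-to-noise ratio lambda plus Gaussian noise W:
\[
  Y \;=\; \lambda\, v^{\otimes q} + W .
\]
```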

Speaker

Song Mei, UC Berkeley

2025 Noether Early Career Scholar Award - To Intrinsic Dimension and Beyond: Efficient Sampling in Diffusion Models

The denoising diffusion probabilistic model (DDPM) has become a cornerstone of generative AI. While sharp convergence guarantees have been established for DDPM, the iteration complexity typically scales with the ambient data dimension of the target distribution, leading to overly conservative theory that fails to explain its practical efficiency. This has sparked recent efforts to understand how DDPM can achieve sampling speed-ups through automatic exploitation of the intrinsic low dimensionality of the data. This talk explores two key scenarios: (1) For a broad class of data distributions with intrinsic dimension k, we prove that the iteration complexity of DDPM scales nearly linearly with k, which is optimal under the KL divergence metric; (2) For mixtures of Gaussian distributions with k components, we show that DDPM learns the distribution with iteration complexity that grows only logarithmically in k. These results provide theoretical justification for the practical efficiency of diffusion models.
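As background for the complexity statements above, the display below records the standard DDPM forward noising chain and one common score-based reverse sampling update. The notation (variance schedule beta_t, score s_t, number of steps T) is generic and is not taken from the speaker's papers.

```latex
% Forward (noising) Markov chain with variance schedule beta_t:
\[
  X_t \;=\; \sqrt{1 - \beta_t}\, X_{t-1} + \sqrt{\beta_t}\, \varepsilon_t,
  \qquad \varepsilon_t \sim \mathcal{N}(0, I_d), \quad t = 1, \dots, T .
\]
% Reverse sampling update driven by the score s_t(x) \approx \nabla \log p_t(x)
% of the noised marginal p_t (one common choice of reverse variance):
\[
  Y_{t-1} \;=\; \frac{1}{\sqrt{1 - \beta_t}} \bigl( Y_t + \beta_t\, s_t(Y_t) \bigr)
  + \sqrt{\beta_t}\, z_t,
  \qquad z_t \sim \mathcal{N}(0, I_d) .
\]
% The results above concern how large T must be: nearly linear in the intrinsic
% dimension k (scenario 1), or logarithmic in the number of mixture components k
% (scenario 2), rather than polynomial in the ambient dimension d.
```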

Speaker

Yuting Wei, University of Pennsylvania

2025 Noether Distinguished Scholar Award - Taming the Tail with Expected Shortfall Regression

Expected shortfall, which measures the average outcome (e.g., portfolio loss) beyond a specified quantile of its probability distribution, is a widely used financial risk measure. This metric can also be employed to characterize treatment effects in the tail of an outcome distribution, with applications ranging from policy evaluation in economics and public health to biomedical investigations. Expected shortfall regression offers a natural approach for modeling covariate-adjusted expected shortfalls, yet it presents several challenges in estimation and prediction. In this presentation, I will begin with an accessible introduction to expected shortfall regression and share my personal journey in this riveting area of research, which has captivated students and scholars in statistics, econometrics, finance, and operations research. I will then introduce a novel optimization-based approach to linear expected shortfall regression, demonstrating a compelling example of interpretable machine learning in action. Finally, I will conclude with an outline of future work needed to advance the theory and practice of expected shortfall regression in the era of big data. 
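For concreteness, expected shortfall at level tau here refers to the lower-tail average ES_tau(Y | X) = E[Y | Y <= Q_tau(Y | X), X]. The sketch below is not the optimization-based estimator introduced in the talk; it is an illustrative two-step plug-in estimator (linear quantile regression followed by a least-squares fit to a tail-adjusted response) intended only to convey what a linear expected shortfall regression produces. The simulated data, variable names, and the availability of scikit-learn's QuantileRegressor are assumptions made for the example.

```python
# Illustrative two-step expected shortfall regression (NOT the talk's method):
# 1) fit the conditional tau-quantile by linear quantile regression;
# 2) least-squares fit of a tail-adjusted response whose conditional mean equals
#    the conditional (lower-tail) expected shortfall at level tau.
import numpy as np
from sklearn.linear_model import LinearRegression, QuantileRegressor

rng = np.random.default_rng(0)
n, tau = 2000, 0.1                                  # sample size and tail level (assumed)
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, -0.5, 2.0])
y = X @ beta_true + rng.standard_t(df=5, size=n)    # heavy-tailed errors

# Step 1: conditional tau-quantile via linear quantile regression.
qr = QuantileRegressor(quantile=tau, alpha=0.0, solver="highs").fit(X, y)
q_hat = qr.predict(X)

# Step 2: adjusted response Z = q(X) + (Y - q(X)) * 1{Y <= q(X)} / tau,
# whose conditional mean is the expected shortfall at level tau; regress Z on X.
z = q_hat + (y - q_hat) * (y <= q_hat) / tau
es_model = LinearRegression().fit(X, z)
print("fitted expected shortfall coefficients:", es_model.coef_)
```

The two-step construction makes the target explicit: once the conditional quantile is estimated, the adjusted response isolates the average of the outcome beyond that quantile, which is exactly the quantity that expected shortfall regression models as a linear function of the covariates.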

Speaker

Xuming He, Washington University in St. Louis