Learning time-scales in two-layers neural networks

Kangjie Zhou (First Author, Presenting Author)
Stanford University
Sunday, Aug 4: 3:05 PM - 3:20 PM
3767 
Contributed Papers 
Oregon Convention Center 
Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of the empirical risk is non-monotone even after averaging over large batches: long plateaus in which barely any progress is observed alternate with intervals of rapid decrease. These successive phases of learning often take place on very different time scales. Finally, models learnt in an early phase are typically 'simpler' or 'easier to learn', albeit in a way that is difficult to formalize.

Although theoretical explanations of these phenomena have been put forward, each of them captures at best certain specific regimes. In this paper, we study the training dynamics of a wide two-layer neural network fitted to data from a single-index model in high dimension. Based on a mixture of new rigorous results, non-rigorous mathematical derivations, and numerical simulations, we propose a scenario for the learning dynamics in this setting. In particular, the proposed evolution exhibits separation of timescales and intermittency. These behaviors arise naturally because the population gradient flow can be recast as a singularly perturbed dynamical system.
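
For concreteness, the following is a minimal sketch of the standard setting this description is consistent with; the notation is illustrative and not taken from the paper. Labels are generated by a single-index target, the model is a wide two-layer network, and training follows gradient flow on the population risk:

\[
y = \sigma_*\big(\langle \boldsymbol{w}_*, \boldsymbol{x}\rangle\big) + \varepsilon,
\qquad \boldsymbol{x} \sim \mathsf{N}(0, \mathbf{I}_d),
\]
\[
\hat f(\boldsymbol{x}; \boldsymbol{a}, \boldsymbol{W})
  = \frac{1}{m}\sum_{j=1}^{m} a_j\,\sigma\big(\langle \boldsymbol{w}_j, \boldsymbol{x}\rangle\big),
\]
\[
\dot{\boldsymbol{\theta}}(t)
  = -\nabla_{\boldsymbol{\theta}}\,
    \mathbb{E}\Big[\big(y - \hat f(\boldsymbol{x}; \boldsymbol{\theta})\big)^2\Big],
\qquad \boldsymbol{\theta} = (\boldsymbol{a}, \boldsymbol{W}).
\]

In a formulation of this type, 'separation of timescales' corresponds to different components of \(\boldsymbol{\theta}(t)\) relaxing at very different rates, which is the hallmark of a singularly perturbed dynamical system.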

Main Sponsor: IMS