Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness

Yuqi Gu, Co-Author
Columbia University
 
Yang Feng, Co-Author
New York University
 
Ye Tian, First Author and Presenting Author
Columbia University, Department of Statistics
 
Wednesday, Aug 7: 11:05 AM - 11:20 AM
2537 
Contributed Papers 
Oregon Convention Center 
Abstract

Representation multi-task learning (MTL) and transfer learning (TL) are widely used in practice, but their theoretical understanding remains limited. Most existing theory assumes that all tasks share exactly the same representation, which rarely holds in practice. We address this gap by studying tasks whose linear representations are similar but not identical, while also allowing for outlier tasks. We propose two adaptive algorithms, one for MTL and one for TL, that are robust to outlier tasks. When the representations are sufficiently similar and the fraction of outlier tasks is small, our methods provably outperform single-task and target-only learning; when the representations are dissimilar, they remain competitive with these baselines. We establish lower bounds showing that both algorithms are nearly minimax optimal, and we further propose an algorithm for the case where the intrinsic dimension is unknown. Simulation studies confirm our theoretical findings.
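
Since the abstract leaves the model implicit, the sketch below gives one plausible data-generating setup consistent with it: each task follows a linear model whose coefficient vector lies in a low-rank representation, inlier tasks have representations close to a shared one, and a few outlier tasks do not. All notation (A_t, theta_t, the similarity level h) and every parameter value here are our own assumptions for illustration, not details taken from the talk.

```python
import numpy as np

# A minimal simulation sketch of "similar but not identical linear
# representations with outlier tasks" (illustrative formalization only;
# not the authors' actual data-generating process). Each task t follows
# y_t = X_t @ beta_t + noise with beta_t = A_t @ theta_t, where A_t is a
# p x r matrix with orthonormal columns (the task's representation).

rng = np.random.default_rng(0)
T, n, p, r = 10, 100, 20, 3   # tasks, samples per task, dimension, rank (assumed)
h = 0.1                       # representation similarity level (assumed)
outliers = {0}                # indices of outlier tasks (assumed)

def orthonormal(p, r, rng):
    """Random p x r matrix with orthonormal columns."""
    Q, _ = np.linalg.qr(rng.standard_normal((p, r)))
    return Q[:, :r]

A_bar = orthonormal(p, r, rng)  # shared "central" representation

tasks = []
for t in range(T):
    if t in outliers:
        A_t = orthonormal(p, r, rng)  # outlier: arbitrary representation
    else:
        # Inlier: perturb A_bar by roughly h, then re-orthonormalize.
        Q, _ = np.linalg.qr(A_bar + h * rng.standard_normal((p, r)))
        A_t = Q[:, :r]
    theta_t = rng.standard_normal(r)
    X_t = rng.standard_normal((n, p))
    y_t = X_t @ (A_t @ theta_t) + 0.5 * rng.standard_normal(n)
    tasks.append((X_t, y_t))
```

Under this kind of setup, h = 0 with no outliers recovers the identical-representation assumption of prior theory, while larger h and a nonzero outlier fraction interpolate toward the regime where single-task or target-only learning becomes the natural baseline.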

Keywords

Transfer learning

Multi-task learning

Representation learning

Low-rank structure

Robustness

Minimax optimality 

Main Sponsor

Section on Statistical Learning and Data Science