Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness
Abstract Number:
2537
Submission Type:
Contributed Abstract
Contributed Abstract Type:
Paper
Participants:
Ye Tian (1), Yuqi Gu (2), Yang Feng (3)
Institutions:
(1) Columbia University, Department of Statistics, (2) Columbia University, (3) New York University
Co-Author(s):
First Author:
Ye Tian
Columbia University, Department of Statistics
Presenting Author:
Ye Tian
Columbia University, Department of Statistics
Abstract Text:
Representation multi-task learning (MTL) and transfer learning (TL) are widely used, but their theoretical understanding remains limited. Most existing theory assumes that all tasks share the same representation, which may not hold in practice. We address this gap by studying tasks with similar but not necessarily identical linear representations, while also accommodating outlier tasks. We propose two adaptive algorithms, for the MTL and TL settings respectively, that are robust to outlier tasks. When representations are sufficiently similar and outlier tasks are few, our methods outperform single-task or target-only learning; they remain competitive even when representations are dissimilar. We establish lower bounds showing that our algorithms are nearly minimax optimal, and we further propose an algorithm that handles an unknown intrinsic dimension. Simulation studies confirm our theoretical findings.
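A minimal simulation sketch of the setup described above: tasks whose coefficient vectors lie in similar low-rank linear representations, plus a few outlier tasks with unrelated representations. All quantities (number of tasks T, sample size n, dimensions p and r, perturbation scale h, noise level, and outlier count) are illustrative assumptions rather than values from the paper, and the single-task least-squares fit shown at the end is only a baseline for comparison, not the proposed algorithm.

import numpy as np

rng = np.random.default_rng(0)

T, n, p, r = 10, 100, 50, 3        # tasks, samples per task, ambient dim, intrinsic dim (assumed)
h, n_outliers = 0.1, 2             # representation similarity radius; number of outlier tasks (assumed)

# Shared low-rank representation: an orthonormal p x r matrix.
A_center, _ = np.linalg.qr(rng.standard_normal((p, r)))

tasks = []
for t in range(T):
    if t < T - n_outliers:
        # Non-outlier task: a small perturbation of the shared representation,
        # re-orthonormalized, so representations are similar but not identical.
        A_t, _ = np.linalg.qr(A_center + h * rng.standard_normal((p, r)))
    else:
        # Outlier task: an unrelated representation.
        A_t, _ = np.linalg.qr(rng.standard_normal((p, r)))
    theta_t = rng.standard_normal(r)            # task-specific low-dimensional coefficients
    beta_t = A_t @ theta_t                      # full regression coefficient vector
    X_t = rng.standard_normal((n, p))
    y_t = X_t @ beta_t + 0.5 * rng.standard_normal(n)
    tasks.append((X_t, y_t, beta_t))

# Baseline: single-task least squares per task, the benchmark that a
# representation-sharing estimator would be compared against.
for X_t, y_t, beta_t in tasks[:3]:
    beta_hat, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
    print(f"single-task estimation error: {np.linalg.norm(beta_hat - beta_t):.3f}")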
Keywords:
Transfer learning|Multi-task learning|Representation learning|Low-rank structure|Robustness|Minimax optimality
Sponsors:
Section on Statistical Learning and Data Science
Tracks:
Machine Learning
Can this be considered for alternate subtype?
No
Are you interested in volunteering to serve as a session chair?
Yes
I have read and understand that JSM participants must abide by the Participant Guidelines.
Yes
I understand that JSM participants must register and pay the appropriate registration fee by June 1, 2024. The registration fee is non-refundable.
I understand