Meta-Fusion: A Unified Framework For Multi-modality Fusion with Mutual Learning
Annie Qu
Co-Author
University of California, Irvine
Sunday, Aug 3: 5:05 PM - 5:20 PM
Contributed Papers
Music City Center
Multi-modal data fusion has become increasingly critical for enhancing the predictive power of machine learning methods across diverse fields, from autonomous driving to medical diagnosis. Traditional fusion methods (early fusion, intermediate fusion, and late fusion) approach data integration differently, each with distinct advantages and limitations. In this paper, we introduce Meta-Fusion, a flexible and principled framework that unifies these existing approaches as special cases. Drawing inspiration from deep mutual learning and ensemble learning, Meta-Fusion constructs a cohort of models based on various combinations of latent representations across modalities, and further enhances predictive performance through soft information sharing within the cohort. Our approach is model-agnostic in learning the latent representations, allowing it to flexibly adapt to the unique characteristics of each modality. Theoretically, we show that the soft information sharing mechanism reduces the generalization error. Empirically, Meta-Fusion consistently outperforms conventional fusion strategies in extensive synthetic experiments. We further validate our approach on real-world applications, including Alzheimer's disease detection and brain activity analysis.
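To make the cohort-plus-sharing idea concrete, below is a minimal sketch in PyTorch of how such a framework could be organized; it illustrates the abstract's description and is not the authors' implementation. The names (ModalityEncoder, MetaFusionCohort, mutual_learning_loss) and the weighting parameter alpha are hypothetical. Each nonempty subset of modality latents defines one cohort member (single-modality members resemble late fusion; the full subset resembles intermediate fusion), and a deep-mutual-learning KL term stands in for the soft information sharing.

```python
# Hypothetical sketch of a Meta-Fusion-style cohort; not the authors' code.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Model-agnostic encoder mapping one modality to a latent representation."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class MetaFusionCohort(nn.Module):
    """Cohort of predictors, one per nonempty subset of modality latents."""
    def __init__(self, in_dims, latent_dim: int, num_classes: int):
        super().__init__()
        self.encoders = nn.ModuleList(
            ModalityEncoder(d, latent_dim) for d in in_dims)
        m = len(in_dims)
        # Every nonempty combination of modalities defines one cohort member.
        self.subsets = [s for r in range(1, m + 1)
                        for s in itertools.combinations(range(m), r)]
        self.heads = nn.ModuleList(
            nn.Linear(latent_dim * len(s), num_classes) for s in self.subsets)

    def forward(self, xs):
        latents = [enc(x) for enc, x in zip(self.encoders, xs)]
        return [head(torch.cat([latents[i] for i in s], dim=-1))
                for head, s in zip(self.heads, self.subsets)]

def mutual_learning_loss(logits_list, y, alpha: float = 0.5):
    """Task loss plus soft information sharing: each cohort member is nudged
    toward its peers' predicted distributions via a deep-mutual-learning KL term."""
    loss = 0.0
    for i, logits in enumerate(logits_list):
        task = F.cross_entropy(logits, y)
        peers = [p for j, p in enumerate(logits_list) if j != i]
        kl = sum(F.kl_div(F.log_softmax(logits, dim=-1),
                          F.softmax(p.detach(), dim=-1),
                          reduction="batchmean") for p in peers) / len(peers)
        loss = loss + task + alpha * kl
    return loss / len(logits_list)

# Usage: two modalities, e.g. imaging features and clinical covariates.
model = MetaFusionCohort(in_dims=[32, 16], latent_dim=8, num_classes=2)
xs = [torch.randn(4, 32), torch.randn(4, 16)]
y = torch.randint(0, 2, (4,))
loss = mutual_learning_loss(model(xs), y)
loss.backward()
```

Detaching the peer distributions treats them as fixed targets in each member's update, mirroring the alternating updates of deep mutual learning; dropping the detach() would give a fully joint variant. How the paper weights or aggregates cohort members is not specified in the abstract.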
multi-modality fusion
deep mutual learning
ensemble learning
soft information sharing
Main Sponsor
Section on Statistical Learning and Data Science