Composite Transportation Divergence and Finite Mixture Models

Qiong Zhang Co-Author
Renmin University of China
 
Jiahua Chen First Author
University of British Columbia
 
Jiahua Chen Presenting Author
University of British Columbia
 
Thursday, Aug 7: 11:20 AM - 11:50 AM
2680 
Contributed Papers 
Music City Center 
When statistical data are massive and distributed across multiple locations, initial estimates of the population distribution are often computed locally and then aggregated at a central site. For parametric models, simple averaging of the local estimates typically achieves the optimal convergence rate. In finite mixture models, however, the parameter space is non-Euclidean, and aggregation involves computational and statistical challenges that call for more refined methods.

To address these issues, we propose aggregating the local mixture estimates by minimizing a composite transportation divergence, which yields an estimator that is optimal under this criterion. We develop an MM (majorization-minimization) algorithm that is guaranteed to converge to at least a local optimum within a finite number of iterations. Our approach also applies to Gaussian mixture reduction, in which a high-order mixture is approximated by one of lower order. Under slightly stronger assumptions, the aggregated estimator retains the optimal convergence rate and can be made tolerant of Byzantine failures.
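For context, the composite transportation divergence between two finite mixtures is conventionally defined as follows; the notation below is supplied by the editor and is not taken from the talk itself. For mixtures \Phi = \sum_{n=1}^{N} w_n \phi_n and \Psi = \sum_{m=1}^{M} v_m \psi_m, and a cost function c(., .) on component distributions,

    \mathcal{T}_c(\Phi, \Psi) = \min_{\pi \ge 0} \sum_{n=1}^{N} \sum_{m=1}^{M} \pi_{nm} \, c(\phi_n, \psi_m),
    \quad \text{subject to } \sum_{m} \pi_{nm} = w_n, \ \sum_{n} \pi_{nm} = v_m.

To make the MM-style computation concrete, the following is a minimal Python/NumPy sketch of Gaussian mixture reduction under such a divergence, assuming the KL divergence as the component-level cost. The function names, parameters, and the assignment/moment-matching updates are illustrative assumptions for a clustering-style realization of an MM update, not the authors' exact algorithm or code.

    import numpy as np

    def kl_gauss(mu1, S1, mu2, S2):
        """KL divergence between Gaussians N(mu1, S1) and N(mu2, S2)."""
        d = mu1.shape[0]
        S2_inv = np.linalg.inv(S2)
        diff = mu2 - mu1
        return 0.5 * (np.trace(S2_inv @ S1) + diff @ S2_inv @ diff - d
                      + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

    def reduce_mixture(w, mu, Sigma, M, n_iter=50, seed=0):
        """Reduce an N-component Gaussian mixture (weights w, means mu,
        covariances Sigma) to M components by alternating an assignment
        (majorization) step and a moment-matching (minimization) step."""
        rng = np.random.default_rng(seed)
        N = len(w)
        idx = rng.choice(N, size=M, replace=False)
        r_mu, r_S = mu[idx].copy(), Sigma[idx].copy()
        r_w = np.zeros(M)
        for _ in range(n_iter):
            # Assignment: send each original component to the reduced
            # component with the smallest KL cost.
            cost = np.array([[kl_gauss(mu[n], Sigma[n], r_mu[m], r_S[m])
                              for m in range(M)] for n in range(N)])
            assign = cost.argmin(axis=1)
            # Update: moment-match each reduced component to its cluster.
            r_w = np.zeros(M)
            for m in range(M):
                members = np.where(assign == m)[0]
                if len(members) == 0:
                    continue  # empty cluster: keep previous component
                wm = w[members]
                r_w[m] = wm.sum()
                r_mu[m] = (wm[:, None] * mu[members]).sum(axis=0) / r_w[m]
                diffs = mu[members] - r_mu[m]
                r_S[m] = ((wm[:, None, None] *
                           (Sigma[members] + diffs[:, :, None] * diffs[:, None, :]))
                          .sum(axis=0) / r_w[m])
        return r_w, r_mu, r_S

Each pass assigns the original components to their nearest reduced component and then moment-matches within the resulting clusters, one common clustering-style instance of an MM update for an objective of this form.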

Keywords

Composite transportation distance

distributed learning

finite mixture model

mixture reduction

MM algorithm

Main Sponsor

SSC (Statistical Society of Canada)