Statistical Learning with Mixture-of-Experts: From Sparse Regularized Estimation in High-Dimensional and Functional Settings to Frugal Distributed Aggregation

Date/time
26 March 2026
09:15 - 10:15

Location
Conference room, Nancy

Speaker
Faïcel Chamroukhi (Caen and Paris Saclay)

Event category
Séminaire Probabilités et Statistique


Abstract

Modern statistical learning problems often involve heterogeneous, high-dimensional, or distributed data. This raises two complementary challenges: controlling model complexity in high dimensions, and enabling frugal inference when data are distributed across sites, all while preserving both structure and statistical consistency.

In this talk, I present contributions centered on mixture-of-experts (MoE) models, a flexible latent variable framework with well-established approximation capabilities and learning guarantees for conditional densities.
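For concreteness, a standard MoE conditional density with softmax gating takes the following textbook form (the notation here is mine, not taken from the talk):

```latex
% Mixture of K experts: the gating network g_k routes the input x softly
% among the expert conditional densities f(y | x; theta_k).
p(y \mid x) = \sum_{k=1}^{K} g_k(x; \alpha) \, f(y \mid x; \theta_k),
\qquad
g_k(x; \alpha) = \frac{\exp(\alpha_k^{\top} x)}{\sum_{l=1}^{K} \exp(\alpha_l^{\top} x)}.
```

Each expert models the response in one latent regime, while the gating weights g_k(x; α) decide, as a function of the input, how much each regime contributes.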

I first consider MoE models with high-dimensional predictors, including functional data such as curves and time series, estimated via Lasso-type regularization in unsupervised settings. This yields sparse, interpretable representations together with non-asymptotic model-selection guarantees.
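As a rough illustration of what Lasso-type regularization means here, the sketch below fits a tiny Gaussian-expert MoE by proximal gradient descent on an l1-penalized negative log-likelihood. It is a minimal stand-in, not the estimator from the talk (which relies on penalized EM-type algorithms with model-selection guarantees); the names (`fit_sparse_moe`, `lam`, ...) are hypothetical.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def soft_threshold(V, t):
    # Proximal operator of the l1 norm: drives small coefficients to exactly zero.
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def fit_sparse_moe(X, y, K=2, lam=0.02, lr=0.1, iters=500, sigma=1.0, seed=0):
    """l1-penalized MoE with softmax gating and Gaussian experts (fixed variance)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(d, K))   # gating coefficients
    W = rng.normal(scale=0.1, size=(d, K))   # expert regression coefficients
    for _ in range(iters):
        G = softmax(X @ A)                    # gating probabilities, shape (n, K)
        resid = y[:, None] - X @ W            # residual under each expert
        log_phi = -0.5 * (resid / sigma) ** 2
        # Responsibilities r_ik ∝ g_k(x_i) * N(y_i; x_i'w_k, sigma^2)
        log_r = np.log(G + 1e-12) + log_phi
        log_r -= log_r.max(axis=1, keepdims=True)
        R = np.exp(log_r)
        R /= R.sum(axis=1, keepdims=True)
        # Gradients of the (unpenalized) negative log-likelihood, averaged over n
        grad_W = -(X.T @ (R * resid)) / (n * sigma**2)
        grad_A = -(X.T @ (R - G)) / n
        # Proximal gradient step: descent on the smooth part, then l1 shrinkage
        W = soft_threshold(W - lr * grad_W, lr * lam)
        A = soft_threshold(A - lr * grad_A, lr * lam)
    return A, W

# Toy data: two latent regimes, each depending on a single covariate out of ten
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 0], -3.0 * X[:, 1]) + 0.1 * rng.normal(size=500)
A, W = fit_sparse_moe(X, y)
print(np.round(W, 2))  # soft-thresholding leaves many coefficients exactly zero
```

The soft-thresholding step is what produces exact zeros, hence the sparse and interpretable expert representations mentioned above.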

I then address learning from data distributed across multiple sites due to storage, computational, or governance constraints. For this setting, I present a frugal aggregation strategy for MoE models based on optimal transport: it constructs a reduced global estimator from locally trained models in a single communication round, while preserving both model structure and statistical consistency. This makes the approach particularly well suited to large-scale settings where communication is a major bottleneck.
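The sketch below conveys the flavor of such one-round aggregation in a deliberately simplified special case: each site contributes K expert parameter vectors, local experts are aligned to a reference site by optimal assignment (the uniform-weight, equal-size special case of optimal transport), and the aligned parameters are averaged. The actual method in the talk is more general (it transports full mixture measures and produces a reduced global estimator); `aggregate_moe` and the toy setup are my own illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def aggregate_moe(local_params):
    """local_params: list of (K, d) arrays, one per site (expert parameters)."""
    ref = local_params[0]                      # first site fixes the expert ordering
    aligned = [ref]
    for P in local_params[1:]:
        # Squared-distance cost between every (reference, local) expert pair
        C = ((ref[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
        # Optimal assignment = optimal transport between uniform discrete measures
        rows, cols = linear_sum_assignment(C)
        aligned.append(P[cols])                # permute local experts to match ref
    # Single communication round: each site ships its (K, d) parameters once,
    # and the global model is the average of the aligned parameters.
    return np.mean(aligned, axis=0)

# Toy check: three sites hold the same two experts, up to permutation and noise
rng = np.random.default_rng(0)
true = np.array([[2.0, 0.0], [0.0, -3.0]])
sites = [true[rng.permutation(2)] + 0.05 * rng.normal(size=(2, 2)) for _ in range(3)]
print(np.round(aggregate_moe(sites), 2))  # close to the true expert parameters
```

Aligning before averaging matters: naively averaging unaligned experts would mix parameters from different latent regimes and destroy the model structure.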

Together, these contributions position MoE as a unified framework for principled, interpretable, and scalable statistical learning, with perspectives on frugal learning of generative models at the interface of modern machine learning and AI, and applications in constrained industrial environments.

[Talk given in French.]