Flow-matching generative models have recently emerged as a powerful, ODE-based alternative to diffusion models, but today’s methods remain limited to centrally stored, balanced, and demographically homogeneous data. We propose to push the frontier in three complementary directions:
1. Federated Flow Matching (FFM). We will design privacy-preserving algorithms that train flow-matching models across multiple clients. A key challenge is to obtain a model that enjoys fast inference even though clients never share their raw data with one another.
2. Fair Flow Matching (FairFM). Building on optimal transport’s ability to align distributions, we will incorporate group-aware cost functions and adversarial regularisers to promote equitable generation quality across protected attributes.
3. Unbalanced Flow Matching (UFM). We will extend flow matching to the unbalanced optimal-transport setting, enabling realistic synthesis when source and target datasets differ dramatically in support size or total mass. Relaxing the marginal constraints also confers robustness to outliers, which balanced transport is otherwise forced to match.
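As background common to all three directions, the standard conditional flow-matching objective with a linear probability path can be sketched as follows. This is a minimal illustration: the toy linear velocity model and all names (`cfm_loss`, `theta`) are ours, not part of the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(theta, x1, rng):
    """Conditional flow-matching loss on a straight-line path x_t = (1-t)x0 + t x1.

    theta: parameters of a toy linear velocity model v(x, t) = x @ W + t * b.
    x1: batch of data samples; x0 is drawn from a standard Gaussian prior.
    """
    W, b = theta
    x0 = rng.standard_normal(x1.shape)      # prior samples
    t = rng.uniform(size=(x1.shape[0], 1))  # random times in [0, 1]
    xt = (1.0 - t) * x0 + t * x1            # point on the interpolating path
    target = x1 - x0                        # conditional velocity along the path
    pred = xt @ W + t * b                   # toy linear velocity model
    return np.mean((pred - target) ** 2)

# usage: 2-D toy data shifted away from the prior
d = 2
theta = (np.zeros((d, d)), np.zeros(d))
x1 = rng.standard_normal((64, d)) + 3.0
loss = cfm_loss(theta, x1, rng)
```

In practice the linear model would be replaced by a neural velocity network and the loss minimised by stochastic gradient descent.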
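For direction 1, one natural baseline is FedAvg-style aggregation of the clients' velocity-network parameters: only weights travel to the server, never data. The sketch below is a generic FedAvg step under that assumption, not the proposed FFM algorithm itself.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of per-client parameter lists.

    client_weights: one list of np.ndarray parameters per client.
    client_sizes: local sample counts, used as aggregation weights.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    n_params = len(client_weights[0])
    return [
        sum(c * w[i] for c, w in zip(coeffs, client_weights))
        for i in range(n_params)
    ]

# usage: two clients, one parameter tensor each; client B holds 3x the data
wa = [np.array([0.0, 0.0])]
wb = [np.array([1.0, 1.0])]
merged = fedavg([wa, wb], client_sizes=[1, 3])  # -> [array([0.75, 0.75])]
```

Plain averaging is only a starting point; the proposal's privacy-preserving algorithms would add, e.g., secure aggregation or differential-privacy noise on top of a step like this.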
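For direction 2, one simple (hypothetical) group-aware regulariser penalises the spread between per-group generation losses, so that minimising the total objective cannot leave any protected group behind. The function name and the choice of a max-minus-min gap are illustrative assumptions.

```python
import numpy as np

def group_gap_penalty(per_group_losses, lam=1.0):
    """Illustrative fairness regulariser: penalise the gap between the
    worst-served and best-served protected group."""
    losses = np.asarray(per_group_losses, dtype=float)
    return lam * (losses.max() - losses.min())

# usage: two groups with unequal generation losses
per_group = [0.8, 0.2]
total = np.mean(per_group) + group_gap_penalty(per_group, lam=0.5)
```

An adversarial regulariser, as mentioned in the proposal, would instead train a discriminator to predict the protected attribute from generated samples and penalise its success.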
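For direction 3, the unbalanced setting can be made concrete with entropic unbalanced optimal transport, where the hard marginal constraints are relaxed by KL penalties and the Sinkhorn scaling updates acquire a softened exponent. The sketch below shows the coupling computation only (the ingredient UFM would build pairings from), with illustrative parameter names.

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.1, rho=1.0, n_iters=200):
    """Entropic unbalanced OT with KL-relaxed marginals (Sinkhorn-style scaling).

    a, b: source/target masses (need not sum to the same total).
    C: cost matrix; eps: entropic regularisation; rho: marginal relaxation.
    Returns the unbalanced transport plan P.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    expo = rho / (rho + eps)  # exponent < 1 softens the marginal constraints
    for _ in range(n_iters):
        u = (a / (K @ v)) ** expo
        v = (b / (K.T @ u)) ** expo
    return u[:, None] * K * v[None, :]

# usage: source carries twice the mass of the target
a = np.array([1.0, 1.0])                # total mass 2
b = np.array([0.5, 0.5])                # total mass 1
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = unbalanced_sinkhorn(a, b, C)        # plan need not preserve total mass
```

Because the plan is free to create or destroy mass, outlying source points can simply be down-weighted rather than transported, which is the robustness property the proposal appeals to.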