We propose to develop and scale a linear-system generalization of MeanFlow for one-step generative modeling, and to deploy and study it on NAISS GPU infrastructure. MeanFlow is a recently introduced framework that replaces long diffusion trajectories with a single “mean flow” of the velocity field, trained via a one-step objective. We reformulate MeanFlow through along-characteristic averaging, derive a differential identity for the mean velocity, and show how a single loss with a stop-gradient target yields stable training and simple backward sampling. We then extend this construction from the standard ODE setting
$$\dot z = v(z,t)$$
to linear controlled dynamics
$$\dot z = A z + B v(t),$$
and further to a noisy linear SDE with Brownian forcing. In each case we obtain closed-form expressions for the mean input flow, evolution equations, and linear MeanFlow identities that recover the original framework when $A=0$ and $B=I$.
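For concreteness, in the standard ODE case the along-characteristic average is
$$u(z_t, r, t) = \frac{1}{t-r}\int_r^t v(z_\tau, \tau)\,d\tau,$$
and differentiating $(t-r)\,u$ with respect to $t$ yields the MeanFlow identity
$$u(z_t, r, t) = v(z_t, t) - (t-r)\,\frac{d}{dt}u(z_t, r, t), \qquad \frac{d}{dt}u = v\,\partial_z u + \partial_t u,$$
whose right-hand side, under a stop-gradient, serves as the regression target. In the linear case the variation-of-constants formula
$$z_t = e^{A(t-r)} z_r + \int_r^t e^{A(t-\tau)} B\, v(\tau)\,d\tau$$
replaces the straight characteristic, and averaging the input $v(t)$ rather than the state velocity gives the mean input flow.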
The goal of the proposed project is to move beyond theory and small prototypes, and to systematically evaluate linear and noisy MeanFlow models at realistic scale. Concretely, we will:
1. Implement GPU-efficient training for linear and noisy MeanFlow using modern deep learning frameworks (a minimal training-step sketch follows this list).
2. Compare their sample quality, training stability, and wall-clock efficiency against diffusion and flow-matching baselines on image, trajectory, and control-style benchmark datasets.
3. Explore second-order extensions where acceleration replaces velocity, and study whether linear structure in the underlying dynamics translates into improved sample efficiency or controllability of the generators.
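As a sketch of aim 1, the following PyTorch fragment shows one MeanFlow training step under illustrative assumptions: a linear interpolation path, a network `u_theta(z, r, t)` approximating the mean velocity, and externally sampled time pairs $(r, t)$. The linear and noisy variants would swap the conditional velocity for the corresponding mean input flow.

```python
import torch
from torch.func import jvp

def meanflow_loss(u_theta, x, eps, r, t):
    """One MeanFlow training step (illustrative sketch, not final code).

    u_theta : network approximating the mean velocity u(z, r, t)
    x, eps  : data and Gaussian-noise batches, shape (B, D)
    r, t    : sampled time pairs with 0 <= r <= t <= 1, shape (B, 1)
    """
    z = (1.0 - t) * x + t * eps   # point on the linear interpolation path
    v = eps - x                   # conditional velocity along that path

    # Total derivative du/dt along the characteristic dz/dt = v,
    # obtained as a Jacobian-vector product with tangent (v, 0, 1).
    u, dudt = jvp(u_theta, (z, r, t),
                  (v, torch.zeros_like(r), torch.ones_like(t)))

    # MeanFlow identity u = v - (t - r) du/dt as a stop-gradient target.
    u_tgt = (v - (t - r) * dudt).detach()
    return ((u - u_tgt) ** 2).mean()
```

One-step backward sampling then reduces to $z_0 = z_1 - u_\theta(z_1, 0, 1)$ with $z_1 \sim \mathcal{N}(0, I)$.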
NAISS GPUs are essential for this project. Training MeanFlow-style models requires repeated evaluation of neural networks, their gradients, and the Jacobian-vector products behind the differential identity, over large batches and many time points. The linear and noisy variants additionally involve matrix exponentials, discretized SDE solvers, and windowed losses that are naturally vectorized across GPUs. We plan to use NAISS systems to run controlled ablation studies, hyperparameter sweeps, and large-scale experiments that are not feasible on local hardware.
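As an illustration of these workloads (our assumed discretization, not final code), the exact state-transition matrix is a batched `torch.linalg.matrix_exp` call, and the noisy dynamics admit a simple Euler–Maruyama step; both vectorize naturally on GPUs:

```python
import torch

def transition_matrix(A, dt):
    """Exact flow map Phi = exp(A * dt) of dz = A z dt; A may be batched (..., d, d)."""
    return torch.linalg.matrix_exp(A * dt)

def em_step(z, v, A, B, dt, sigma):
    """One Euler-Maruyama step of the noisy linear SDE
        dz = (A z + B v) dt + sigma dW,
    with z: (N, d), v: (N, m), A: (d, d), B: (d, m), scalar noise level sigma."""
    drift = z @ A.T + v @ B.T
    noise = sigma * (dt ** 0.5) * torch.randn_like(z)
    return z + dt * drift + noise
```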
The expected outcome is a computationally efficient, theoretically grounded family of one-step generative models that sits at the interface of control theory and modern generative modeling. The project will produce open-source implementations, empirical comparisons with diffusion and flow matching, and case studies that highlight how linear structure and stochastic modeling can improve both performance and interpretability of generative models used in scientific and engineering applications.