Continuous Normalising Flows (CNFs) are a class of generative model, first proposed by Chen et al. (2018), that express the mapping between a source distribution and the data distribution as a neural ordinary differential equation (ODE).
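To make the neural-ODE view concrete, the sketch below samples from a CNF by Euler-integrating a learned velocity field from the source at t=0 to the data at t=1. The `VelocityField` architecture, the Gaussian source, and the fixed step count are illustrative assumptions, not part of any cited implementation.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Hypothetical time-conditioned velocity field v_theta(t, x)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Concatenate time onto the state so the field can depend on t.
        return self.net(torch.cat([x, t.expand(x.shape[0], 1)], dim=1))

@torch.no_grad()
def sample(v_theta: nn.Module, n: int, dim: int, steps: int = 100) -> torch.Tensor:
    """Integrate dx/dt = v_theta(t, x) from t=0 (source) to t=1 (data)."""
    x = torch.randn(n, dim)          # source distribution: standard Gaussian
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.tensor([[k * dt]])
        x = x + dt * v_theta(t, x)   # one explicit Euler step of the ODE
    return x
```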
Sampling becomes significantly faster the "straighter" the learned flows are, since the ODE can then be solved accurately with fewer integration steps. Tong et al. (2023) made substantial progress here by pairing source and data samples with minibatch optimal transport, but their approach does not scale well to high-dimensional data.
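The following is a schematic of the minibatch OT idea, not Tong et al.'s exact code: solve exact optimal transport within each minibatch (squared-Euclidean cost, Hungarian algorithm) and regress the velocity field on the resulting straight-line interpolants. The pairwise cost matrix grows quadratically in batch size, and the minibatch pairing deteriorates in high dimension, which is the scaling issue noted above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot_pairs(x0: np.ndarray, x1: np.ndarray) -> np.ndarray:
    """Pair source samples x0 with data samples x1 by solving exact OT
    on the minibatch only; returns a permutation of x1's rows."""
    # Pairwise squared Euclidean distances between the two minibatches.
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    _, col = linear_sum_assignment(cost)   # Hungarian algorithm
    return col

# Straight-line training targets for flow matching:
# x_t = (1 - t) * x0 + t * x1[pi], with target velocity x1[pi] - x0.
x0 = np.random.randn(64, 2)
x1 = np.random.randn(64, 2) + 3.0
pi = minibatch_ot_pairs(x0, x1)
t = np.random.rand(64, 1)
x_t = (1 - t) * x0 + t * x1[pi]
v_target = x1[pi] - x0
```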
In this project we propose instead to learn the matching jointly with the CNF, in a manner similar to the encoder of a variational autoencoder.
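One possible reading of this proposal, offered purely as a hypothetical sketch: an encoder proposes a matched source point for each data point via the reparameterisation trick, a KL term keeps those proposals close to the Gaussian source (as in a VAE), and the flow-matching regression runs along the induced straight paths, with both networks trained jointly. The `Encoder` architecture, the `joint_loss` weighting `beta`, and all names here are assumptions, not the project's settled design.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """VAE-style encoder q_phi(x0 | x1): proposes a source point per data point."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * dim)
        )

    def forward(self, x1: torch.Tensor):
        mu, log_var = self.net(x1).chunk(2, dim=-1)
        x0 = mu + torch.randn_like(mu) * (0.5 * log_var).exp()    # reparameterisation
        kl = 0.5 * (mu**2 + log_var.exp() - 1 - log_var).sum(-1)  # KL to N(0, I) source
        return x0, kl

def joint_loss(v_theta: nn.Module, encoder: Encoder, x1: torch.Tensor,
               beta: float = 0.1) -> torch.Tensor:
    """Flow-matching loss along encoder-proposed straight paths, plus KL penalty."""
    x0, kl = encoder(x1)
    t = torch.rand(x1.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1                         # straight interpolant
    fm = ((v_theta(t, x_t) - (x1 - x0)) ** 2).sum(-1)   # velocity regression
    return (fm + beta * kl).mean()
```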