Online Variational Inference in Dynamic State-Space Models
This doctoral research project focuses on the development of Online Variational Inference (VI) algorithms for dynamic generative models, specifically non-linear State-Space Models (SSMs). In modern artificial intelligence applications where data arrive as continuous, noisy streams, the ability to perform inference and parameter learning on-the-fly is critical. The primary objective of this project is to efficiently approximate the joint smoothing distribution (the conditional distribution of the latent state trajectory given all observations) and simultaneously estimate model parameters in settings where analytical solutions are unavailable.
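To fix ideas, the following sketch simulates a generic non-linear Gaussian SSM. The specific transition and observation functions are illustrative choices (a classic benchmark model), not the models studied in the project; all names are hypothetical.

```python
import numpy as np

def simulate_ssm(T, rng=None):
    """Simulate a toy non-linear Gaussian state-space model.

    Transition:  x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + v_t
    Observation: y_t = x_t^2 / 20 + w_t
    with Gaussian noises v_t, w_t (illustrative benchmark dynamics).
    """
    rng = np.random.default_rng(rng)
    x = np.empty(T)
    y = np.empty(T)
    x[0] = rng.normal(0.0, 1.0)  # initial latent state
    for t in range(T):
        if t > 0:
            x[t] = (0.5 * x[t - 1]
                    + 25.0 * x[t - 1] / (1.0 + x[t - 1] ** 2)
                    + rng.normal(0.0, np.sqrt(10.0)))  # transition noise
        y[t] = x[t] ** 2 / 20.0 + rng.normal(0.0, 1.0)  # observation noise
    return x, y
```

Because both the transition and the observation maps are non-linear, neither the filtering nor the smoothing distribution of this model admits a closed form, which is precisely the regime the project targets.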
The core of this project lies in constructing a recursive variational framework that maximizes the Evidence Lower Bound (ELBO) in a sequential manner. A major challenge in this setting is that the expectations required to compute the gradient of the ELBO are generally intractable for non-linear systems. To overcome this, the project relies on advanced particle-based approximations to estimate these gradients dynamically.
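The intractable ELBO gradients described above can be estimated by Monte Carlo. As a minimal sketch, assume a single Gaussian variational factor q(x) = N(mu, sigma^2) and an unnormalized log-joint `log_joint`; the reparameterization trick x = mu + sigma * eps yields stochastic gradients for one ascent step. The finite-difference gradient of `log_joint` is an illustrative shortcut, not part of the project's methodology.

```python
import numpy as np

def elbo_grad_step(log_joint, mu, log_sig, lr=0.05, n_samples=16, rng=None):
    """One stochastic-gradient ascent step on a Monte Carlo ELBO estimate.

    q(x) = N(mu, sigma^2) with sigma = exp(log_sig); gradients via the
    reparameterization trick x = mu + sigma * eps, eps ~ N(0, 1).
    """
    rng = np.random.default_rng(rng)
    sig = np.exp(log_sig)
    eps = rng.standard_normal(n_samples)
    x = mu + sig * eps                       # reparameterized samples from q
    h = 1e-5
    dlogp = (log_joint(x + h) - log_joint(x - h)) / (2 * h)  # finite-diff grad
    # ELBO = E_q[log p(x, y)] + H(q); dH/d(log_sig) = 1 for a Gaussian q
    grad_mu = dlogp.mean()
    grad_log_sig = (dlogp * sig * eps).mean() + 1.0
    return mu + lr * grad_mu, log_sig + lr * grad_log_sig
```

For example, with `log_joint(x) = -0.5 * (x - 3)**2` (a Gaussian target) repeated steps drive mu toward 3 and sigma toward 1, recovering the exact posterior; in the sequential setting one such update is performed per incoming observation.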
We integrate importance-weighted estimators, specifically Self-Normalized Importance Sampling (SNIS) and Sequential Monte Carlo (SMC) methods, directly into the variational optimization loop. These sampling techniques are employed to approximate the complex posterior flows and provide low-variance estimates of the gradients required for stochastic optimization. By leveraging the flexibility of SNIS and SMC within the variational framework, we aim to develop algorithms that are both computationally efficient and capable of capturing complex, multi-modal latent dynamics that standard mean-field approximations fail to represent. The research involves both the algorithmic design of these online estimators and the rigorous validation of their asymptotic properties on simulated and real data streams.
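The SNIS building block can be sketched generically: samples drawn from a tractable proposal are reweighted to estimate an expectation under an intractable, unnormalized target, exactly the kind of quantity appearing inside the ELBO gradient. All function names here are illustrative assumptions.

```python
import numpy as np

def snis(log_target, log_proposal, sampler, n, rng=None, f=lambda x: x):
    """Self-normalized importance sampling estimate of E_target[f(X)].

    Only an *unnormalized* target log-density is needed: additive
    constants cancel when the weights are normalized.
    """
    rng = np.random.default_rng(rng)
    x = sampler(rng, n)                      # draws from the proposal
    logw = log_target(x) - log_proposal(x)   # unnormalized log-weights
    logw -= logw.max()                       # stabilize before exponentiating
    w = np.exp(logw)
    w /= w.sum()                             # self-normalization step
    return np.sum(w * f(x))
```

For instance, estimating the mean of an unnormalized N(1, 1) target using an N(0, 2) proposal recovers a value close to 1. In the SMC extension used by the project, such weighted samples are propagated and resampled sequentially as new observations arrive, rather than drawn afresh at each step.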