At the core of modern machine learning (ML) applications lies an optimization or inference task, commonly referred to as the learning process. This project investigates state-of-the-art learning processes to deepen our understanding of their efficacy and to advance the current frontier of these algorithms. A primary focus is generalization and robustness in deep learning settings, examining how these properties relate to the local geometry of the loss surface. Improvements in learning algorithms could advance state-of-the-art deep learning models or reduce the training and inference costs of models at a given quality level. Studying the loss landscape can yield deeper insight into the performance of contemporary AI models, bringing us closer to reliable and safe applications.
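One common way to quantify the local geometry mentioned above is the sharpness of the loss surface, often measured by the largest eigenvalue of the Hessian at the current parameters. The sketch below is a minimal, hypothetical illustration (not the project's actual method): it uses a toy quadratic loss standing in for a network's training loss and estimates the top Hessian eigenvalue by power iteration on finite-difference Hessian-vector products.

```python
import numpy as np

def loss(w):
    # Toy quadratic loss standing in for a network's training loss
    # (purely illustrative; any smooth loss function would work here).
    H = np.array([[3.0, 1.0], [1.0, 2.0]])
    return 0.5 * w @ H @ w

def grad(w, eps=1e-5):
    # Central finite-difference gradient of the loss.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

def sharpness(w, iters=50, eps=1e-4):
    # Largest Hessian eigenvalue at w, via power iteration on
    # finite-difference Hessian-vector products Hv ≈ (g(w+εv) - g(w-εv)) / 2ε.
    rng = np.random.default_rng(0)
    v = rng.normal(size=w.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        hv = (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)
        lam = v @ hv          # Rayleigh quotient with unit v
        v = hv / np.linalg.norm(hv)
    return lam

# For the toy Hessian above, the exact top eigenvalue is (5 + sqrt(5))/2 ≈ 3.618.
print(round(sharpness(np.array([0.5, -0.2])), 3))  # → 3.618
```

In deep learning practice the same power-iteration idea is applied with automatic-differentiation Hessian-vector products rather than finite differences, which scales to millions of parameters without forming the Hessian.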
Additionally, this project explores simulation-based inference in the context of stochastic chemical kinetics, with an emphasis on scalable Bayesian inference from data observed at non-homogeneous (irregularly spaced) time points.
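Simulation-based inference of this kind needs a forward simulator that can report the system state at arbitrary, irregularly spaced observation times. As a minimal, hypothetical sketch (the reaction system and rate names are illustrative assumptions, not the project's model), the following implements Gillespie's stochastic simulation algorithm for a birth-death process and records the state on a non-homogeneous time grid:

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_obs, rng):
    # Exact SSA trajectory of a birth-death process:
    #   ∅ -> X  with propensity k_birth
    #   X -> ∅  with propensity k_death * x
    # recorded at the (sorted, possibly irregular) times in t_obs.
    t, x, out = 0.0, x0, []
    for t_next in t_obs:
        while True:
            rates = np.array([k_birth, k_death * x])
            total = rates.sum()
            dt = rng.exponential(1.0 / total) if total > 0 else np.inf
            if t + dt > t_next:
                # Next event falls past the observation time; by the
                # memorylessness of the exponential, we may discard dt
                # and resume sampling from t_next.
                break
            t += dt
            x += 1 if rng.uniform() * total < rates[0] else -1
        out.append(x)
        t = t_next
    return np.array(out)

rng = np.random.default_rng(1)
# Non-homogeneous observation grid: dense early, sparse later.
t_obs = np.array([0.1, 0.2, 0.4, 1.0, 3.0, 10.0])
sample = gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0,
                               t_obs=t_obs, rng=rng)
```

A simulator like this can then be embedded in a likelihood-free Bayesian scheme (e.g. ABC or neural posterior estimation), where candidate rate parameters are accepted or scored by comparing simulated trajectories against the irregularly sampled observations.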