In recent years, our understanding of the structure of loss landscapes in deep neural networks has significantly deepened, revealing insights into their optimization and generalization properties. By leveraging lines in the loss landscape that contain diverse solutions of low loss,
ensembles constructed over a single training run have been shown to outperform ensembles of individually trained models. We propose a method that extends the idea of learning lines in the loss landscape to learning distributions over the loss landscape. Utilizing the learned distributions, we construct uncertainty estimates and single-run ensembles with improved accuracy.
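To make the single-run ensembling idea concrete, the following is a minimal sketch, not the paper's implementation: a line subspace parameterized by two endpoint weight vectors yields a continuum of low-loss solutions, and ensemble members are obtained by sampling interpolation coefficients. The names `predict_fn`, `w1`, and `w2` are hypothetical placeholders, and the uniform sampling of the coefficient merely stands in for whatever learned distribution the method would use.

```python
import numpy as np

def interpolate_weights(w1, w2, alpha):
    """Point on the line segment between two flattened weight vectors."""
    return (1.0 - alpha) * w1 + alpha * w2

def single_run_ensemble(predict_fn, w1, w2, x, n_members=5, rng=None):
    """Average predictions over weights sampled along a learned line.

    Assumptions: `predict_fn(weights, x)` returns a prediction array for
    input `x`, and alpha ~ Uniform(0, 1) is a stand-in for the learned
    distribution over the line. The spread across members can serve as a
    simple uncertainty estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    alphas = rng.uniform(0.0, 1.0, size=n_members)
    preds = np.stack(
        [predict_fn(interpolate_weights(w1, w2, a), x) for a in alphas]
    )
    return preds.mean(axis=0), preds.var(axis=0)  # ensemble mean, uncertainty
```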