Deep learning methods have significantly enhanced the accuracy of Visual Place Recognition (VPR), an image-based localization technique. However, their performance often deteriorates under uncertainty or when deployed in previously unseen environments. A major concern is that such models produce overconfident predictions, which may lead to failures in autonomous navigation and thus pose risks in safety-critical applications. Ensuring a reliable and adaptive localization system therefore requires modeling and quantifying uncertainty.
Visual Place Recognition determines a robot’s position by matching query images against a database of reference images. The goal of this project is to integrate deep learning-based uncertainty quantification methods into VPR systems, yielding models that produce reliable predictions with confidence estimates and that adapt to new, unseen environments.
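As a minimal sketch of the retrieval step described above, the following Python snippet matches a query descriptor against a database of reference descriptors by cosine similarity and attaches a simple softmax-based confidence score. The function name, the temperature value, and the softmax confidence heuristic are illustrative assumptions, not methods specified by this project.

```python
import numpy as np

def match_with_confidence(query_desc, db_descs, temperature=0.07):
    """Return the best-matching reference index and a crude confidence.

    Illustrative sketch: real VPR systems use learned image descriptors
    (e.g. from a CNN) and calibrated uncertainty estimates; here we only
    show the retrieval structure with a softmax confidence proxy.
    """
    # L2-normalize so dot products become cosine similarities.
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    # Softmax over similarities: a peaked distribution means a clear
    # match; a flat one signals an ambiguous (uncertain) match.
    probs = np.exp(sims / temperature)
    probs /= probs.sum()
    best = int(np.argmax(sims))
    return best, float(probs[best])

# Toy usage: 100 random reference descriptors; the query is a slightly
# perturbed copy of reference 42, so it should match index 42.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 128))
query = db[42] + 0.01 * rng.normal(size=128)
idx, conf = match_with_confidence(query, db)
```

A peaked softmax here is only a heuristic confidence; the project aims to replace such proxies with principled uncertainty quantification.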