This project aims to develop multimodal sensor fusion methods for advanced and robust automotive perception systems. The work will focus on three key areas: (1) developing multimodal fusion architectures and representations for both dynamic and static objects; (2) investigating self-supervised learning techniques for multimodal data in an automotive setting; and (3) improving the perception system’s ability to robustly handle rare events, objects, and road users.