Multi-camera semantic segmentation
Dnr: NAISS 2024/22-566
Type: NAISS Small Compute
Principal Investigator: Erik Brorsson
Affiliation: Chalmers tekniska högskola
Start Date: 2024-05-01
End Date: 2025-05-01
Primary Classification: 10207: Computer Vision and Robotics (Autonomous Systems)
Webpage:

Allocation

Abstract

I am a second-year PhD student at Chalmers working on fundamental research in computer vision and autonomous systems. My interests include image and video analysis, e.g., semantic segmentation and neural network architectures dedicated to video processing; learning-based sensor fusion, e.g., neural network architectures designed to process the input of multiple sensors; and strategies for training the neural networks used for these problems, e.g., unsupervised domain adaptation (UDA) and semi-supervised learning (SSL). In the first year of my PhD studies (previous project), I studied UDA for semantic segmentation, wherein a neural network makes semantic segmentation predictions for images provided by a single camera. In the second year (this project), I will look into bird's-eye-view semantic segmentation with multiple cameras, i.e., how to fuse information from multiple cameras to improve the predictions. Since multi-camera neural networks typically require large amounts of training data, I will again investigate methods such as UDA and SSL to reduce the cost of data collection and annotation. I expect that this will make the developed algorithms easier for industry to adopt and result in greater benefit to society.
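For illustration only, the sketch below shows the general shape of a multi-camera bird's-eye-view (BEV) segmentation model: a shared encoder extracts features from each camera image, the per-camera features are fused, and a BEV head predicts per-cell class logits. This is not the project's method; the module names, sizes, and the naive fusion-by-averaging step are assumptions chosen for brevity (real BEV approaches typically use geometric view transforms).

```python
# Minimal, assumption-laden sketch of multi-camera BEV segmentation (PyTorch).
import torch
import torch.nn as nn


class MultiCameraBEVSegmenter(nn.Module):
    def __init__(self, num_classes=10, bev_size=50, feat_dim=64):
        super().__init__()
        self.bev_size = bev_size
        self.feat_dim = feat_dim
        # Shared per-camera image encoder producing one feature vector per view.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Assumption: a learned linear "view transform" lifts the fused feature
        # onto a coarse BEV grid instead of using explicit camera geometry.
        self.to_bev = nn.Linear(feat_dim, feat_dim * bev_size * bev_size)
        # BEV head predicting per-cell class logits.
        self.head = nn.Conv2d(feat_dim, num_classes, 1)

    def forward(self, images):
        # images: (batch, num_cameras, 3, H, W)
        b, n, c, h, w = images.shape
        feats = self.encoder(images.view(b * n, c, h, w)).view(b, n, -1)
        fused = feats.mean(dim=1)  # naive fusion: average over cameras
        bev = self.to_bev(fused).view(b, self.feat_dim, self.bev_size, self.bev_size)
        return self.head(bev)  # (batch, num_classes, bev_size, bev_size)


if __name__ == "__main__":
    model = MultiCameraBEVSegmenter()
    logits = model(torch.randn(2, 6, 3, 128, 256))
    print(logits.shape)  # torch.Size([2, 10, 50, 50])
```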