I'm working on the computer vision problem of semantic segmentation, which entails classifying each pixel in an image according to a predefined set of classes.
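To make the per-pixel classification concrete, here is a minimal sketch (using toy numpy arrays, not a real model): a segmentation network produces a score per class for every pixel, and the predicted label map is the per-pixel argmax over classes. The shapes and class count below are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-pixel class scores for a 4x4 image and 3 classes,
# as a network head would output them (shape: classes x height x width).
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4))

# Semantic segmentation assigns each pixel its highest-scoring class,
# yielding a label map with one class index per pixel.
label_map = logits.argmax(axis=0)

print(label_map.shape)  # one class index per pixel: (4, 4)
```

In practice the score volume comes from a deep network and the label map is compared against ground-truth annotations with a per-pixel loss, but the argmax decoding step is exactly this.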
In the specific variant of the problem, unsupervised domain adaptation, it is assumed that labeled data drawn from a "source" domain is available, while the test data is drawn from a different domain, called the "target" domain. Due to the distribution shift between the source and target domains, a model trained solely on the labeled source data typically performs poorly on the test data. To overcome this problem, methods of unsupervised domain adaptation aim to leverage the labeled source domain data together with accessible unlabeled target domain data to boost performance on the test set. The best-performing methods in the field are deep neural networks (both convolutional neural networks and transformer-based architectures) that benefit from large amounts of training data and long training times.
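One widely used family of approaches in this setting is self-training: combine a supervised loss on the labeled source images with a loss on confident pseudo-labels for the unlabeled target images. The sketch below illustrates a single loss computation with toy numpy tensors; the shapes, the confidence threshold of 0.5, and the helper functions are illustrative assumptions, not a specific published method.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixel_cross_entropy(probs, labels):
    # Mean per-pixel cross-entropy; probs is (classes, H, W), labels is (H, W).
    h, w = labels.shape
    picked = probs[labels, np.arange(h)[:, None], np.arange(w)]
    return -np.log(picked + 1e-8).mean()

rng = np.random.default_rng(0)
C, H, W = 3, 4, 4  # toy class count and image size (assumed)

# Hypothetical model outputs on a labeled source image
# and on an unlabeled target image.
source_logits = rng.normal(size=(C, H, W))
source_labels = rng.integers(0, C, size=(H, W))
target_logits = rng.normal(size=(C, H, W))

# Supervised loss on the source domain, where labels exist.
loss_src = pixel_cross_entropy(softmax(source_logits), source_labels)

# Self-training on the target domain: treat confident predictions
# as pseudo-labels and only penalize those pixels.
target_probs = softmax(target_logits)
pseudo_labels = target_probs.argmax(axis=0)
confidence = target_probs.max(axis=0)
confident = confidence > 0.5  # confidence threshold (assumed)
loss_tgt = (-np.log(confidence + 1e-8) * confident).mean()

total_loss = loss_src + loss_tgt
```

In a real pipeline both losses would be backpropagated through the network, and the pseudo-labels are typically refined over training; the sketch only shows how the two data sources enter one objective.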
Unsupervised domain adaptive semantic segmentation is the first work package of my PhD studies, which revolve around making accurate and safe perception algorithms available even in settings with little labeled data.