Deep learning in medical image analysis and the life sciences
Dnr:

NAISS 2025/5-228

Type:

NAISS Medium Compute

Principal Investigator:

Tommy Löfstedt

Affiliation:

Umeå universitet

Start Date:

2025-05-01

End Date:

2026-05-01

Primary Classification:

10210: Artificial Intelligence

Secondary Classification:

20208: Computer Vision and Learning Systems (Computer Sciences aspects in 10207)

Webpage:

Allocation

Abstract

In Sweden, one out of three persons will suffer from cancer at some point during their lives. An ageing population will lead to more cancer cases at the same time as fewer citizens will be working. This combination requires the health care system to become more resource-efficient, and deep learning offers new perspectives in this regard. Deep convolutional neural networks can be used to automate routine and time-consuming parts of the radiotherapy workflow, such as automatic tumour and organ-at-risk segmentation, synthetic CT generation for dose planning, and image registration for optical-flow adjustments. Deep learning can make these workflows significantly more time-efficient by automating steps that would otherwise take a long time and tie up human resources, such as oncologists and radiation nurses. For instance, manually segmenting a patient with head-and-neck cancer may take up to six hours, while an automatic segmentation takes less than a second. Similarly, a CT scan can take up to 30 minutes to acquire, while a synthetic CT image can be generated from an MR image in less than a second.

In parallel, recent developments in high-throughput techniques in biology generate data at an ever-growing pace. Such data might be used to develop novel medicines or to improve crop growth. The data collected in the life sciences are becoming higher-dimensional and more numerous, making automation the only feasible way to analyse them.

The last few years have seen a breakthrough in deep learning usage and methodology, and current methods perform increasingly well on an increasing number of tasks. We will develop, utilise, and evaluate deep learning methods for use in radiotherapy and life-science applications. Such methods take only an instant to apply, but training and adapting them to the available data may take weeks, or even months, on a desktop computer with a graphics processing unit. We would therefore like to use the parallel infrastructure at C3SE to scale the training and hyper-parameter searches to larger training data sets and more complex models with more parameters.
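
As a rough illustration of the kind of computation this allocation would support, the sketch below shows how training of a segmentation-style 3D convolutional network could be distributed over several GPUs with PyTorch's DistributedDataParallel. It is not the project's actual code: the RandomVolumes dataset, the small stand-in network, and all hyper-parameter values are placeholders chosen only to make the example self-contained.

# Minimal sketch (placeholder data, model, and hyper-parameters) of
# multi-GPU data-parallel training for a 3D segmentation network.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, Dataset, DistributedSampler


class RandomVolumes(Dataset):
    """Placeholder dataset standing in for CT/MR volumes with label masks."""

    def __len__(self):
        return 64

    def __getitem__(self, idx):
        image = torch.randn(1, 64, 64, 64)         # one-channel 3D patch
        mask = torch.randint(0, 2, (64, 64, 64))   # binary segmentation mask
        return image, mask


def build_model():
    """Small 3D CNN stand-in for a U-Net-style segmentation network."""
    return nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 2, 1),                       # two output classes
    )


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(build_model().cuda(local_rank), device_ids=[local_rank])
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    dataset = RandomVolumes()
    sampler = DistributedSampler(dataset)          # shards data across GPUs
    loader = DataLoader(dataset, batch_size=2, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)                   # reshuffle shards each epoch
        for image, mask in loader:
            image, mask = image.cuda(local_rank), mask.cuda(local_rank)
            optimiser.zero_grad()
            loss = loss_fn(model(image), mask)
            loss.backward()                        # gradients all-reduced by DDP
            optimiser.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()

Launched with, for example, torchrun --nproc_per_node=4 on a GPU node, each process trains on its own shard of the data while DistributedDataParallel averages the gradients across GPUs; independent hyper-parameter configurations can then be submitted as separate batch jobs to cover the search space.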