Large scale automated delineation of organs at risk for radiotherapy
Dnr: NAISS 2024/22-1417
Type: NAISS Small Compute
Principal Investigator: Attila Simkó
Affiliation: Umeå universitet
Start Date: 2024-11-27
End Date: 2025-12-01
Primary Classification: 30199: Other Basic Medicine

Abstract

Our project aims to advance deep learning-based radiotherapy planning for cancer, specifically through automated segmentation models that demand high computational efficiency. Precise segmentation of tumors and surrounding organs at risk (OARs) is fundamental in radiotherapy: it maximizes therapeutic impact while sparing healthy tissue. Manual delineation of these structures, however, is labor-intensive and inconsistent, limiting both scope and accuracy in current clinical practice. Our objective is to build a high-resolution, automated framework for the simultaneous segmentation of 206 anatomical structures, which will expedite clinical workflows and improve the precision of treatment planning. Given the computational complexity, efficient model training on advanced GPU infrastructure is essential to meet these goals.

The project's incremental learning framework combines a 3D segmentation model with a large, diverse dataset, currently comprising over 13,000 patient CT scans. This dataset is orders of magnitude larger than those handled by existing publicly available models, making optimized GPU resources critical. Recent advances in loss functions and training on partially labeled datasets allow the model to integrate segmentations from multiple, incomplete datasets, significantly expanding its anatomical scope. This approach, however, requires substantial GPU power to manage data heterogeneity while maintaining high segmentation accuracy.

Training to date has yielded a mean Dice score of 32 ± 25 (on a 0-100 scale) across structures, with particularly promising results on critical organs such as the prostate. Further training is necessary to reach clinical-grade performance, requiring repeated model iterations for which additional GPU resources are indispensable.

GPU support would also expedite the expert review loop that underpins the model's continuous improvement cycle. Radiologists assess model-generated segmentations, and their feedback transforms the initial outputs into silver-standard datasets. GPU resources therefore not only accelerate training but also improve model robustness and segmentation quality by enabling faster iterations between training and expert evaluation. This feedback loop is integral to scaling up high-quality training data for underrepresented anatomical structures, improving the model's generalizability and clinical applicability.

Beyond segmentation quality, GPU-accelerated training would let us explore broader applications of the model in radiotherapy workflows. High-quality delineations support non-rigid image registration by applying rigidity constraints to specific structures, improving scan alignment and reducing registration error. GPU-accelerated models can also extend to retrospective analysis within the Swedish Medical Information Quality Archive (MIQA), where additional segmentations provide insights into dose distribution and treatment accuracy, advancing both clinical and research utility.

In conclusion, the project's success depends on efficient and sustained GPU resources to meet the computational demands of training large, anatomically comprehensive segmentation models. These resources will enable clinical-quality segmentation at scale, supporting automated, efficient radiotherapy planning that is adaptable to diverse clinical datasets and feasible for widespread clinical adoption.
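To make the training approach above concrete: a common way to train on partially labeled datasets is to compute the segmentation loss only over the structure channels that are actually annotated in each sample, so scans from incomplete datasets still contribute gradient signal. The following is a minimal, hypothetical PyTorch sketch of such a masked soft Dice loss; it assumes independent binary channels per structure, and all function and argument names are illustrative rather than the project's actual code.

```python
import torch

def partial_dice_loss(logits, target, labeled_mask, eps=1e-6):
    """Soft Dice loss computed only over annotated structure channels.

    logits:       (B, C, D, H, W) raw model outputs for C structures
    target:       (B, C, D, H, W) binary ground-truth masks
    labeled_mask: (B, C) 1 where structure c is annotated in sample b, else 0
    """
    # Assumption: structures are treated as independent binary channels,
    # so a per-channel sigmoid is used instead of a softmax over classes.
    probs = torch.sigmoid(logits)
    dims = (2, 3, 4)                                   # spatial dimensions
    intersection = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)    # (B, C)
    loss = (1 - dice) * labeled_mask                   # zero out unannotated channels
    return loss.sum() / labeled_mask.sum().clamp(min=1)
```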
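Similarly, the reported "mean Dice of 32 ± 25 across structures" implies per-structure Dice scores aggregated over the evaluated scans and then averaged across the 206 structures. A small NumPy sketch of that evaluation, again with hypothetical names:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient (in percent) between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return np.nan              # structure absent in both masks: undefined
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom

def mean_dice_across_structures(per_structure_scores):
    """per_structure_scores: dict mapping structure name -> list of Dice
    values (one per evaluated scan). Returns mean and SD across structures."""
    means = np.array([np.nanmean(v) for v in per_structure_scores.values()])
    return np.nanmean(means), np.nanstd(means)
```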
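Finally, the rigidity constraints mentioned for non-rigid registration are typically implemented as an extra penalty term that discourages deformation inside delineated structures. Below is a deliberately simplified sketch, not the project's method: it penalizes spatial gradients of the displacement field within a structure mask, permitting translation but not local deformation (full rigidity penalties additionally allow rotation, e.g. by penalizing the Jacobian's deviation from an orthogonal matrix).

```python
import torch

def rigidity_penalty(disp, mask):
    """Simplified rigidity term for a dense 3D displacement field.

    disp: (B, 3, D, H, W) displacement field predicted by a registration model
    mask: (B, 1, D, H, W) binary mask of the structure to keep (nearly) rigid
    """
    penalty = disp.new_zeros(())
    for dim in (2, 3, 4):                              # z, y, x axes
        grad = disp.diff(dim=dim)                      # finite differences
        m = mask.narrow(dim, 0, mask.size(dim) - 1)    # align mask to grad shape
        penalty = penalty + (m * grad.pow(2)).sum()    # deformation inside mask
    return penalty / mask.sum().clamp(min=1)
```

In practice such a term would be added, weighted by a hyperparameter, to the registration model's image-similarity and global smoothness losses, so the delineated organ stays rigid while the surrounding anatomy deforms freely.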