Our project aims to advance deep learning-based radiotherapy planning through the development of an end-to-end treatment planning model. This model will directly predict deliverable treatment plans from patient CT images and anatomical structures, automating and optimizing the radiotherapy workflow. Accurate identification of tumors and surrounding organs at risk (OARs) remains fundamental, as accurate segmentation is a prerequisite for high-quality treatment planning. Manual delineation, however, is labor-intensive and subject to inter-observer variability, limiting both the efficiency and the consistency of current clinical practice. Our objective is to build a high-resolution, automated framework capable of segmenting 206 anatomical structures, providing the robust anatomical input necessary for end-to-end treatment planning. This will expedite clinical workflows and improve treatment quality.
The project leverages a 3D segmentation model trained on a large, diverse dataset comprising over 13,000 patient CT scans. This dataset is orders of magnitude larger than those used in existing publicly available models, making well-provisioned GPU resources critical. Recent advances in loss functions for training on partially labeled data enable the model to integrate annotations from multiple, incompletely labeled datasets, expanding its anatomical coverage. This approach, however, requires substantial computational power to manage data heterogeneity while maintaining segmentation accuracy. Training to date has yielded a mean Dice score of 32 ± 25 across the 206 structures, with promising results for critical organs such as the prostate. Achieving clinical-grade segmentation performance is essential, because these segmentations serve as the foundational input for our downstream treatment planning model. Continued model training and the GPU resources it requires are therefore indispensable.
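To make the partial-label training strategy concrete, the sketch below shows one common way to formulate a Dice loss over incompletely annotated data: each sample carries a per-class mask indicating which structures are actually labeled, and only those (sample, class) pairs contribute to the loss, so scans from datasets with different label sets can be mixed in one batch. This is a minimal PyTorch sketch under assumed tensor names and shapes, not our exact implementation.

```python
# Minimal sketch of a Dice loss for partially labeled data (assumed shapes).
import torch

def partial_dice_loss(logits, targets, label_mask, eps=1e-6):
    """
    logits:     (B, C, D, H, W) raw network outputs for C structures
    targets:    (B, C, D, H, W) binary ground-truth masks
    label_mask: (B, C) 1 where class c is annotated for sample b, else 0
    """
    probs = torch.sigmoid(logits)                    # per-structure probabilities
    dims = (2, 3, 4)                                 # sum over spatial axes
    intersection = (probs * targets).sum(dims)
    denom = probs.sum(dims) + targets.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)  # (B, C) per-class Dice
    # Average only over (sample, class) pairs that carry labels, so
    # unannotated structures contribute no gradient.
    loss = (1 - dice) * label_mask
    return loss.sum() / label_mask.sum().clamp(min=1)
```

Because unlabeled classes are simply masked out, the same network can be trained jointly on all source datasets regardless of which structures each one annotates.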
Beyond segmentation, GPU support is crucial for training the end-to-end treatment planning model. This model will use the segmented structures, along with the CT image, to predict a complete, clinically deliverable treatment plan. By automating this process, we aim to reduce planning time from days to hours while ensuring consistency and precision in dose delivery. Training such a model requires vast computational resources, as it involves learning the complex spatial relationships between anatomical structures, dose constraints, and beam arrangements.
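As an illustration of the input/output structure this entails, the sketch below stacks the CT volume and the structure masks as input channels and regresses a voxel-wise dose volume. The tiny convolutional stack stands in for the real architecture, and all names, shapes, and the L1 training objective are illustrative assumptions rather than our final design.

```python
# Hedged sketch: CT + structure masks in, voxel-wise dose out.
import torch
import torch.nn as nn

class DosePredictor(nn.Module):
    def __init__(self, n_structures=206, width=32):
        super().__init__()
        in_ch = 1 + n_structures  # one CT channel + one mask per structure
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, 1),  # voxel-wise dose (e.g., in Gy)
        )

    def forward(self, ct, masks):
        # ct: (B, 1, D, H, W), masks: (B, n_structures, D, H, W)
        return self.net(torch.cat([ct, masks], dim=1))

# Training then reduces to dose regression against the clinically
# approved plan, e.g. with a voxel-wise L1 loss:
model = DosePredictor()
ct = torch.randn(1, 1, 32, 64, 64)
masks = torch.zeros(1, 206, 32, 64, 64)
pred = model(ct, masks)
loss = nn.functional.l1_loss(pred, torch.zeros_like(pred))
```

Even this toy version makes the memory pressure apparent: with 207 input channels per full-resolution 3D volume, batch sizes and model width are tightly bound by available GPU memory.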
Additionally, GPU-accelerated training allows us to explore complementary applications within radiotherapy workflows. Accurate segmentations facilitate non-rigid image registration by allowing rigidity constraints to be applied to specific structures, improving scan alignment and reducing registration errors; a minimal sketch of such a constraint follows below. Moreover, retrospective analysis of dose distributions and treatment accuracy within the Swedish Medical Information Quality Archive (MIQA) can further refine our planning model, improving both its clinical and research utility.
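As one example of how segmentations can constrain registration, the sketch below penalizes spatial variation of the displacement field inside a structure mask, pushing that region toward near-rigid, translation-like motion; a full rigidity term would additionally allow rotations. The simplified penalty and all names here are illustrative assumptions.

```python
# Hedged sketch of a mask-restricted rigidity penalty for deformable registration.
import torch

def rigidity_penalty(disp, mask):
    """
    disp: (B, 3, D, H, W) displacement field in voxels
    mask: (B, 1, D, H, W) binary mask of the structure to keep rigid
    """
    # Finite-difference gradients of each displacement component
    dz = disp[:, :, 1:, :, :] - disp[:, :, :-1, :, :]
    dy = disp[:, :, :, 1:, :] - disp[:, :, :, :-1, :]
    dx = disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]
    # Contributions are zeroed outside the structure, so only masked
    # voxels are penalized for local deformation.
    return (dz.pow(2) * mask[:, :, 1:, :, :]).mean() \
         + (dy.pow(2) * mask[:, :, :, 1:, :]).mean() \
         + (dx.pow(2) * mask[:, :, :, :, 1:]).mean()
```

Added with a weight to the image-similarity loss, a term like this discourages the optimizer from warping bone-like structures while leaving soft tissue free to deform.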
In summary, the success of our project hinges on sustained access to GPU resources to meet the computational demands of developing an end-to-end treatment planning model. High-quality, automated segmentation is a critical enabler of this goal, serving as the anatomical input that underpins accurate treatment planning. Efficient GPU support will allow us to reach clinical-grade performance, ensuring the model generalizes to diverse clinical datasets and is suitable for widespread clinical adoption.