Cancer is a leading cause of mortality, responsible for an estimated 10 million deaths annually. One of the most widely used imaging modalities for diagnosing cancer is x-ray computed tomography (CT), in which three-dimensional images of the human body are reconstructed from x-ray measurements. Current CT technology has limitations in diagnostic quality and quantitative accuracy that the emerging photon-counting CT technology can overcome through higher spatial resolution, lower image noise, and improved material-selective imaging. This is particularly true for cancer imaging, where developments in radiomics are hampered by the imperfect quantitative accuracy of today's CT technology.
Deep-learning-based image reconstruction, a new technology for CT image reconstruction, shows promise for substantial image-quality improvement and fast reconstruction. We are developing deep-learning-based CT image reconstruction methods especially suited for generating highly accurate photon-counting CT images together with maps of image uncertainty, by training deep neural networks to map measured x-ray imaging data into images with the highest possible accuracy and resolution. Another important application of combining photon-counting CT with AI is ultra-low-dose imaging, which is important, for example, for lung-cancer screening.
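The notion of producing an uncertainty map alongside the reconstructed image can be illustrated with a deliberately simplified ensemble sketch. Here an "ensemble of trained models" is mocked as a set of perturbed per-pixel scalings rather than actual neural networks; the per-pixel mean across the ensemble plays the role of the reconstruction, and the per-pixel spread plays the role of the uncertainty map. All array sizes and numbers are illustrative, not part of the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for measured x-ray data: a small 2-D array.
measured = rng.normal(loc=1.0, scale=0.1, size=(8, 8))

def reconstruct(data, weight):
    """Stand-in for one trained network: a per-pixel scaling."""
    return weight * data

# An ensemble of "models" (perturbed weights) applied to the same data
# yields both a reconstruction (ensemble mean) and a per-pixel
# uncertainty map (ensemble standard deviation).
ensemble_weights = 1.0 + rng.normal(scale=0.05, size=20)
recons = np.stack([reconstruct(measured, w) for w in ensemble_weights])

image = recons.mean(axis=0)       # reconstructed image
uncertainty = recons.std(axis=0)  # pixelwise uncertainty map

print(image.shape, uncertainty.shape)
```

In practice the spread would come from, e.g., an ensemble of independently trained networks or a Bayesian approximation, but the mean/spread structure of the output is the same.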
During the previous NAISS compute allocations (2023/22-85 and 2024/22-220), we developed a deep-learning-based method for correcting motion artifacts in the images. This is important for quantitatively accurate imaging of lung cancer, since artifacts from the beating heart and breathing motion may otherwise lead to errors in tumour measurements. We have also evaluated the performance of this method in simulation studies and with clinical data from an experimental photon-counting x-ray CT scanner. This proposal, focused on GPU-based training of AI models, complements project NAISS 2024/22-925, an allocation at PDC intended for CPU-heavy tasks such as running physics simulations of CT to generate training data.
During the proposed project, we will further improve the cancer-imaging capabilities of photon-counting CT by extending this work in three directions: 1) We will continue our investigations into deep-learning-based motion-artifact correction, in particular by investigating how large amounts of dynamic CT training data can be created using generative AI. 2) We will investigate the possibility of using deep learning to perform material-selective CT imaging, in order to quantify tissue composition and contrast-agent concentration accurately. 3) We will develop an ultra-low-dose imaging technique for cancer imaging that uses the available information optimally at doses as low as 10-1000 µSv, which can be used for lung-cancer screening.
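Material-selective imaging (direction 2) rests on the fact that a photon-counting detector measures in several energy bins, so basis-material line integrals can be recovered from the energy-resolved measurements. The following numpy sketch shows only the linear, noise-free two-material case per detector pixel; the attenuation coefficients are made-up illustrative values, and the proposed deep-learning approach would replace this direct matrix inversion with a learned, noise-robust estimator.

```python
import numpy as np

# Hypothetical mass-attenuation coefficients (cm^2/g) of two basis
# materials (e.g. soft tissue and iodine) in two photon-energy bins.
# The numbers are illustrative, not measured values.
M = np.array([[0.20, 4.00],   # low-energy bin
              [0.15, 1.50]])  # high-energy bin

# True basis-material line integrals (g/cm^2) for one detector pixel.
a_true = np.array([10.0, 0.05])

# Forward model: log-domain measurements in the two energy bins.
y = M @ a_true

# Material decomposition: invert the 2x2 system for this pixel.
a_est = np.linalg.solve(M, y)

print(np.allclose(a_est, a_true))  # True in this noise-free case
```

With measurement noise and more energy bins, the inversion becomes an ill-conditioned estimation problem, which is where a trained network can improve on per-pixel least squares.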
The image data used with NAISS compute resources will consist of images of test objects and anonymized datasets from internet databases, i.e., no protected health information will be stored on NAISS servers. After training, we will download the models, apply them to patient images acquired with a photon-counting CT prototype, and evaluate their clinical usefulness locally.