Cancer is a leading cause of mortality, with an expected 10 million deaths annually. A widely used imaging modality for diagnosing cancer is x-ray computed tomography (CT), in which three-dimensional images of the human body are reconstructed from x-ray measurements. An emerging technology within cancer imaging is radiomics, where quantitative metrics derived from measurements of cancer tumours are used to train machine-learning algorithms to predict the disease trajectory of the patient.
Current CT technology has limitations in diagnostic quality and quantitative accuracy, which the emerging photon-counting CT technology can overcome through its higher spatial resolution, lower image noise, and improved material-selective imaging. This is particularly true for cancer imaging, since developments in radiomics are hampered by the imperfect quantitative accuracy of today’s CT technology.
Deep-learning-based image reconstruction, a new technology for CT image reconstruction, shows promise for substantial image quality improvement and fast reconstruction. We are developing deep-learning-based CT image reconstruction methods especially suited for generating highly accurate photon-counting CT images together with maps of image uncertainty, by training deep neural networks to map measured x-ray imaging data into images with the highest possible accuracy and resolution. The image data used with NAISS compute resources will consist of images of test objects and anonymized datasets from internet databases, i.e. no protected health information will be stored on NAISS servers. After training, we will download the models to our in-house computers, apply them to patient images acquired with a photon-counting CT prototype, and evaluate their clinical usefulness.
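As a rough illustration of what training such an uncertainty-aware network involves, the sketch below pairs a small convolutional network with a heteroscedastic Gaussian loss, so that the model outputs both an image estimate and a per-pixel uncertainty map. This is a minimal PyTorch sketch with random stand-in tensors, not the actual architecture, loss, or data pipeline used in the project.

```python
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Toy network mapping a coarse input image to a refined image
    estimate plus a per-pixel log-variance (uncertainty) map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.image_head = nn.Conv2d(32, 1, 3, padding=1)
        self.logvar_head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        h = self.body(x)
        return self.image_head(h), self.logvar_head(h)

def heteroscedastic_loss(pred, logvar, target):
    # Gaussian negative log-likelihood: the network is rewarded for
    # reporting high variance where its prediction error is large.
    return torch.mean(0.5 * torch.exp(-logvar) * (pred - target) ** 2 + 0.5 * logvar)

# Stand-in tensors in place of real (coarse reconstruction, reference) pairs.
model = ReconNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
coarse = torch.randn(4, 1, 128, 128)
reference = torch.randn(4, 1, 128, 128)

for step in range(10):
    pred, logvar = model(coarse)
    loss = heteroscedastic_loss(pred, logvar, reference)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the log-variance output can be exponentiated to give the per-pixel uncertainty map delivered alongside the reconstructed image.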
During the previous NAISS compute allocation (2023/22-85) we developed a deep-learning-based method for correcting motion artifacts in the images. This is important for quantitatively accurate imaging of lung cancer, since artifacts from the beating heart and breathing motion may otherwise lead to errors in measurements of tumours. In future work, we will improve on this method by training on anatomically realistic data and incorporating information from the raw data (sinogram) domain.
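One way to obtain such training data is to simulate motion-corrupted scans from motion-free images. The sketch below is illustrative only: it uses scikit-image's radon/iradon with a hypothetical sinusoidal per-view displacement, whereas the project's anatomically realistic simulations may look quite different.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.transform import radon, iradon

def simulate_motion_pair(phantom, n_views=180, max_shift=3.0):
    """Create a (motion-corrupted, motion-free) image pair for training.

    Each projection view sees the object at a slightly different position,
    mimicking breathing/cardiac motion during the scan; the sinusoidal
    displacement model is purely illustrative.
    """
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    columns = []
    for i, angle in enumerate(angles):
        dy = max_shift * np.sin(2.0 * np.pi * i / n_views)  # displacement at this view
        moved = shift(phantom, (dy, 0.0), order=1, mode="constant")
        columns.append(radon(moved, theta=[angle], circle=True)[:, 0])
    sinogram = np.stack(columns, axis=1)

    corrupted = iradon(sinogram, theta=angles, circle=True)        # shows motion artifacts
    reference = iradon(radon(phantom, theta=angles, circle=True),
                       theta=angles, circle=True)                  # motion-free target
    return corrupted, reference

# Example: simulate_motion_pair(skimage.data.shepp_logan_phantom())
```

Because the per-view sinogram is built explicitly, the same simulation also provides paired raw data for the planned sinogram-domain processing.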
In addition to motion compensation, we will use deep learning to improve the calibration accuracy of the imaging system: from CT scans of objects of known composition, we will train deep neural networks to correct for artifacts caused by miscalibrations in the measured data, thereby improving the quantitative accuracy of the images.
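As a hedged illustration of how supervised training pairs for such a correction network could be constructed (the actual calibration procedure with known-composition phantoms is more involved), the snippet below simulates fixed per-detector gain errors, which appear as ring artifacts after reconstruction, and pairs the miscalibrated sinogram with its ideal counterpart.

```python
import numpy as np

def make_calibration_pair(ideal_sinogram, gain_std=0.02, seed=None):
    """Create a (miscalibrated, ideal) sinogram pair for supervised training.

    A fixed multiplicative gain error per detector channel mimics an
    imperfect detector calibration; the gain model is illustrative only.
    """
    rng = np.random.default_rng(seed)
    gains = 1.0 + gain_std * rng.standard_normal(ideal_sinogram.shape[0])
    miscalibrated = ideal_sinogram * gains[:, None]  # same error repeated in every view
    return miscalibrated, ideal_sinogram
```

Pairs of this kind could then be fed into a training loop analogous to the one sketched above, with the network learning to remove the miscalibration signature from measured data.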
This project will complement a related project, “Deep-learning data processing for photon-counting CT”, that our lab is running on the Berzelius cluster, but has a different focus: developing a quantitatively accurate reconstruction algorithm for cancer radiomics. The anticipated outcome is that photon-counting spectral CT with deep-learning reconstruction can deliver drastically improved diagnostic quality and radiomic measurement accuracy without additional radiation dose. This could save lives and open new research avenues in the field of data-driven cancer diagnosis.