NAISS
SUPR
Data-Driven Gating of Dynamic PET Using Learned Representations and Topological Data Analysis
Dnr: NAISS 2026/4-172

Type: NAISS Small

Principal Investigator: Nicolas De Bie

Affiliation: Kungliga Tekniska högskolan

Start Date: 2026-03-19

End Date: 2027-04-01

Primary Classification: 20603: Medical Imaging

Webpage:

Abstract

Supervisor: Massimiliano Colarieti-Tosti, Division of Medical Imaging, KTH Royal Institute of Technology

Positron Emission Tomography (PET) is a functional imaging modality used extensively in oncology, cardiology, and neurology. Respiratory and cardiac motion during acquisition degrades image quality through blurring, reducing both diagnostic accuracy and quantitative reliability. Data-driven gating (DDG) aims to extract motion signals directly from the acquired list-mode PET data, eliminating the need for external hardware such as respiratory bellows or optical tracking systems. However, existing DDG methods (typically based on centre-of-mass tracking or principal component analysis of sinogram data) rely on hand-crafted features that are sensitive to tracer dynamics, count rates, and scanner geometry, limiting their robustness and generalisability.

This project develops a learned DDG approach based on graph neural networks (GNNs) and topological data analysis (TDA). The central idea is to learn a low-dimensional representation of PET list-mode data that is amenable to topological characterisation of quasi-periodic physiological motion. Each temporal frame of list-mode data is represented as a set of lines of response (LORs). We have established that representing LORs as points in any fixed coordinate space fails to preserve physically meaningful proximity between detector pairs, motivating a learned approach. We employ a graph autoencoder architecture that compresses N LOR events per frame into M learned points. This compression enables downstream TDA on the learned representation.

The pipeline is validated using the XCAT digital phantom with configurable respiratory motion patterns. Forward projection is performed with parallelproj using the Siemens Biograph mCT scanner geometry, generating realistic list-mode PET data with time-of-flight information. Gating performance is benchmarked against centre-of-mass and PCA-DDG baselines.
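For readers unfamiliar with the baselines, the following is a minimal numpy sketch of the two conventional DDG surrogates used as benchmarks (centre-of-mass tracking and PCA). The simulated axial COM trace, the frame timing, the delay-embedded feature matrix standing in for binned sinogram features, and the 4-bin amplitude gating scheme are all illustrative assumptions, not the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an axial centre-of-mass trace over T short time frames:
# a quasi-periodic respiratory oscillation plus counting noise.
T = 600                       # number of 100 ms frames (one minute of data)
t = np.arange(T) * 0.1        # frame times in seconds
resp = np.sin(2 * np.pi * 0.25 * t)           # ~15 breaths per minute
com_z = 2.0 * resp + rng.normal(0.0, 0.3, T)  # axial COM in mm, with noise

# --- Centre-of-mass DDG baseline ---
# The motion surrogate is simply the (smoothed) axial COM trace.
kernel = np.ones(5) / 5.0
com_signal = np.convolve(com_z, kernel, mode="same")

# --- PCA-DDG baseline ---
# Build a frame-by-feature matrix (here: delayed copies of the COM trace
# as a stand-in for binned sinogram features) and take the projection
# onto the first principal component as the respiratory surrogate.
delays = np.stack([np.roll(com_z, d) for d in range(8)], axis=1)
X = delays - delays.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_signal = X @ Vt[0]

# Amplitude gating: assign each frame to one of G bins of the surrogate,
# with quantile edges so every gate receives a similar number of counts.
G = 4
edges = np.quantile(pca_signal, np.linspace(0, 1, G + 1))
gates = np.clip(np.searchsorted(edges, pca_signal, side="right") - 1, 0, G - 1)
print("frames per gate:", np.bincount(gates, minlength=G))
```

In practice both surrogates would be extracted from sinogram or list-mode counts rather than a known COM trace; the sketch only shows how the surrogate, once obtained, drives amplitude gating.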
GPU resources are required for training the graph autoencoder on simulated list-mode datasets of varying count rates, motion patterns, and tracer distributions. The self-supervised training procedure uses contrastive learning and graph reconstruction objectives. Hyperparameter sweeps over graph structure, pooling ratios, and latent dimensionality necessitate repeated training runs. Additionally, generating the training data itself by forward projection of 4D XCAT volumes across multiple respiratory cycles benefits from GPU-accelerated projection operations.

This work is part of a Digital Futures Flagship Project at KTH and contributes to the broader goal of enabling robust, hardware-free motion compensation in clinical PET imaging.
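The proposal names the contrastive objective without detail; as an illustration only, here is a toy numpy version of an InfoNCE-style contrastive loss between two augmented views of the same frames. The augmentation scheme (noisy copies of a shared embedding, a stand-in for, e.g., two random subsamplings of one frame's LOR events), the batch size, the embedding dimension, and the temperature are hypothetical choices, not the project's settings.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Toy InfoNCE/NT-Xent-style loss between two augmented views.

    z1, z2: (B, D) arrays of per-frame embeddings from two augmentations.
    Matching rows are positive pairs; all other rows act as negatives.
    """
    # L2-normalise embeddings so the similarity is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal (matching pairs) as targets.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
base = rng.normal(size=(32, 16))                  # shared "true" embeddings
view1 = base + 0.05 * rng.normal(size=base.shape)  # augmented view 1
view2 = base + 0.05 * rng.normal(size=base.shape)  # augmented view 2

aligned = info_nce_loss(view1, view2)
shuffled = info_nce_loss(view1, rng.permutation(view2))
# Correctly paired views should yield a lower loss than mismatched ones.
print(aligned < shuffled)
```

Minimising such a loss pulls embeddings of the same frame together while pushing apart embeddings of different frames, which is what makes the learned representation usable for downstream motion characterisation.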