We will develop and evaluate generative data augmentation for PET/CT under small-sample and multi-centre conditions. The project compares latent-diffusion and VAE-based augmentations against classical transforms and mixup-style baselines, and measures whether such augmentation improves robustness and calibration of downstream models. Two tasks are considered: (1) PET/CT image quality assessment (AUROC, AUPRC, ECE) and (2) lesion segmentation (Dice, HD95). We will perform cross-site experiments to quantify generalisation under domain shift and label-limited regimes, and run corruption/stress tests for sensitivity analyses.
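As a concrete illustration of the mixup-style baseline mentioned above, the sketch below shows the standard formulation: a convex combination of two samples and their labels with a Beta(α, α) mixing coefficient. This is a minimal NumPy sketch; the function name, default α, and RNG handling are illustrative assumptions, not the project's final implementation.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup baseline (illustrative): convex combination of two
    samples and their (soft) labels with lam ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix
```

In practice the same interpolation would be applied batch-wise to image volumes and one-hot labels; generative (diffusion/VAE) augmentation would be compared against this baseline under identical training budgets.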
Data are de-identified and covered by KI ethical approvals; only authorised members can access the project space, and only aggregated metrics will be shared publicly. The software stack is PyTorch/MONAI, run in containerised environments for full reproducibility. Expected outputs include reusable augmentation pipelines, trained baselines with and without generative augmentation, and a short technical report acknowledging NAISS. This small-compute allocation will enable systematic ablations and efficient benchmarking without large-scale pretraining.