SUPR
Diffusion Models for Manipulated Image Generation
Dnr:

NAISS 2024/22-1298

Type:

NAISS Small Compute

Principal Investigator:

Shizhen Chang

Affiliation:

Linköpings universitet

Start Date:

2024-10-10

End Date:

2025-11-01

Primary Classification:

30501: Forensic Science

Webpage:

Allocation

Abstract

The rapid advancement of artificial intelligence and digital media has made image manipulation increasingly sophisticated and accessible, presenting serious challenges for verifying the authenticity of visual content. Digital forgery techniques such as deepfakes, splicing, copy-move, and object removal are now widespread in social media, journalism, and even legal contexts. These manipulations can have significant consequences for public trust, security, and political stability. Current detection methods, while effective to some extent, often struggle to keep pace with the growing complexity and subtlety of manipulations, particularly those generated by modern AI techniques. As a result, there is an urgent need for more robust, scalable, and efficient solutions for detecting and mitigating image manipulation.

This project seeks to address these challenges by leveraging diffusion models, a powerful class of generative models, to advance the task of image manipulation detection. Diffusion models, particularly Denoising Diffusion Probabilistic Models (DDPMs) and Latent Diffusion Models (LDMs), have shown remarkable success in generating high-quality, diverse visual data, making them well suited to simulating a wide range of manipulations. By applying these models, we aim to create a comprehensive, high-quality dataset of synthetically manipulated images that accurately represents real-world forgery techniques. This dataset will serve as a robust foundation for training and evaluating cutting-edge image manipulation detection algorithms.

The project will be executed in three primary phases: dataset generation, algorithm development, and evaluation in real-world scenarios. In the first phase, diffusion models will be employed to generate a diverse set of manipulated images. These images will mimic various forgery techniques, such as splicing, object removal, and copy-move, across a wide range of environments and manipulation types.
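As background on the DDPMs mentioned above: their forward (noising) process admits a closed-form sample at any timestep t, x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, and the model is trained to predict eps so the process can be reversed to generate or inpaint content. A minimal NumPy sketch of this forward step, assuming a cosine noise schedule (function names and the schedule choice are illustrative, not part of the project's actual pipeline):

```python
import numpy as np

def alpha_bar(t, T):
    """Cumulative signal level abar_t under a cosine noise schedule
    (illustrative choice; near 1 at t=0, near 0 at t=T)."""
    return np.cos((t / T + 0.008) / 1.008 * np.pi / 2) ** 2

def forward_diffuse(x0, t, T, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    abar = alpha_bar(t, T)
    eps = rng.standard_normal(x0.shape)  # the noise a DDPM learns to predict
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
    return xt, eps
```

Running the learned reverse of this corruption step by step is what produces new image content; LDMs apply the same idea in a compressed latent space, which is what makes large-scale manipulation synthesis tractable.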
The generated dataset will be annotated with detailed information on the types, locations, and nature of the manipulations, making it suitable for rigorous training and evaluation of detection algorithms. In the second phase, novel detection algorithms will be developed by integrating state-of-the-art deep learning techniques, such as convolutional neural networks (CNNs), transformers, and generative adversarial networks (GANs), with diffusion models. These algorithms will be designed to detect subtle image manipulations by learning the unique signatures left by different manipulation methods. Special emphasis will be placed on improving detection accuracy, robustness, and computational efficiency. In the third and final phase, the effectiveness of the developed algorithms will be tested and validated in real-world scenarios. This will include assessing their performance on low-quality images, compressed videos, and images manipulated using techniques not encountered during training. The evaluation will also involve collaboration with social media platforms, law enforcement agencies, and digital forensics teams to deploy and refine the algorithms in real-time environments.
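As a toy illustration of what a single annotated dataset entry could look like for the copy-move case described above (a forged image paired with a pixel-level tamper mask and a structured record of the manipulation's type and location), consider the following NumPy sketch; the function name and annotation fields are hypothetical, not the project's actual data format:

```python
import numpy as np

def copy_move_forge(image, src_yx, dst_yx, size):
    """Paste a size x size patch from src_yx onto dst_yx. Returns the
    forged image, a binary tamper mask (1 = manipulated pixel), and a
    machine-readable annotation of the manipulation."""
    sy, sx = src_yx
    dy, dx = dst_yx
    forged = image.copy()
    forged[dy:dy + size, dx:dx + size] = image[sy:sy + size, sx:sx + size]
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    mask[dy:dy + size, dx:dx + size] = 1
    annotation = {"type": "copy-move", "source": src_yx,
                  "target": dst_yx, "patch_size": size}
    return forged, mask, annotation
```

Pairing each forged image with such a mask and annotation is what enables pixel-level (localization) as well as image-level training and evaluation of detection models; diffusion-based manipulations like inpainting-driven object removal would be recorded with the same mask-plus-metadata structure.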