Towards Poisoning Attacks on Distributed Federated Learning
Dnr: NAISS 2025/22-278
Type: NAISS Small Compute
Principal Investigator: Jingjing Zheng
Affiliation: Chalmers tekniska högskola
Start Date: 2025-03-06
End Date: 2026-04-01
Primary Classification: 10211: Security, Privacy and Cryptography
Webpage:

Abstract

Federated Learning (FL) has emerged as a decentralized machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their private data. This distributed nature, however, also exposes FL to security threats, most notably poisoning attacks. In this work, we explore the vulnerability of FL to poisoning attacks, in which adversarial clients manipulate their contributions to degrade global model performance or implant hidden backdoors. We categorize these attacks into two main types: data poisoning, where adversaries tamper with local training data, and model poisoning, where malicious updates are injected directly into the training process. We analyze the impact of both attack types under different aggregation strategies and adversary models, highlighting their effectiveness at compromising model integrity. Finally, we discuss potential defenses and mitigation strategies that strengthen the robustness of FL systems. Our findings underscore the urgent need for secure and resilient federated learning frameworks to mitigate poisoning threats.
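To make the threat model concrete, the toy sketch below simulates a single FL aggregation round in which one of ten clients performs model poisoning by submitting a scaled update aimed at an adversarial target, and contrasts plain FedAvg with a coordinate-wise median aggregator. This is an illustrative assumption of ours, not the proposal's method; all names, parameters, and the toy "model" (a single weight vector) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 5
    global_model = np.zeros(dim)   # toy global model: one weight vector

    def honest_update(model):
        # Honest clients nudge the model toward the true optimum (all ones),
        # with a little noise standing in for local data variation.
        return 0.1 * (np.ones(dim) - model) + rng.normal(0.0, 0.01, dim)

    def poisoned_update(model, scale=10.0):
        # Model poisoning: the adversary crafts its update directly,
        # scaling a step toward an adversarial target (all minus-ones).
        return 0.1 * scale * (-np.ones(dim) - model)

    # One round: nine honest clients, one malicious client.
    updates = np.stack([honest_update(global_model) for _ in range(9)]
                       + [poisoned_update(global_model)])

    # FedAvg is an unweighted mean here, so a single scaled update skews it.
    fedavg = updates.mean(axis=0)

    # Coordinate-wise median: one standard robust-aggregation baseline.
    median = np.median(updates, axis=0)

    print("FedAvg aggregate:", np.round(fedavg, 3))  # dragged toward zero/negative
    print("Median aggregate:", np.round(median, 3))  # stays near the honest step

Running this, the mean-based aggregate is pulled off the honest direction by the single malicious client, while the median tracks the honest majority, which is the basic tension between aggregation strategies and adversary models that the project studies.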