Resilient, Efficient, and Trustworthy Federated Intelligence
Dnr: NAISS 2026/4-192
Type: NAISS Small
Principal Investigator: Shudi Weng
Affiliation: Kungliga Tekniska högskolan
Start Date: 2026-02-05
End Date: 2026-10-01
Primary Classification: 10210: Artificial Intelligence

Abstract

Federated learning (FL) has emerged as a promising paradigm for collaborative machine learning across distributed data sources without sharing raw data, thereby improving communication efficiency and preserving privacy. FL has seen success in real-world applications such as healthcare, finance, and edge computing. However, practical deployment of FL systems still faces major challenges, including vulnerability to adversarial attacks, unreliable participating clients, and communication inefficiencies.

The project Resilient, Efficient, and Trustworthy Federated Intelligence aims to develop federated intelligence that is robust, scalable, and secure under practical deployment conditions, through both algorithm design and theoretical analysis. The project focuses on three core objectives:

- Resilience: designing federated learning methods that withstand real-world imperfections and system heterogeneity, ensuring stable and optimal performance even in dynamic and non-ideal environments.
- Efficiency: improving communication and computational efficiency through optimized aggregation, low-rank methods with distributed collaboration, and resource-aware training, enabling federated learning at scale across edge and distributed infrastructures.
- Trustworthiness: protecting the privacy and reliability of federated models by integrating mechanisms for secure aggregation, multi-party computation, and homomorphic encryption, covering the data center, edge clients, and any third party.

The project will validate the proposed approaches and theoretical frameworks on widely used benchmark datasets across multiple learning tasks, e.g., image classification and natural language processing. By advancing the foundations of secure and robust federated intelligence, this work will contribute to trustworthy distributed AI systems that can be safely deployed in critical real-world settings.

Main supervisor: Prof. Mikael Skoglund, KTH