NAISS Projects
Federated Learning for Large Language Models for KEEPER project
Dnr: NAISS 2026/4-155

Type: NAISS Small

Principal Investigator: Monik Raj Behera

Affiliation: Högskolan i Halmstad

Start Date: 2026-01-27

End Date: 2027-02-01

Primary Classification: 10208: Natural Language Processing

Allocation

Abstract

This research investigates the application of federated learning architectures to enhance the analytical capabilities of large-scale models within decentralized industrial environments. By simulating a multi-node network, the study evaluates the efficacy of parameter-efficient fine-tuning in balancing data sovereignty with the development of a collective intelligence framework. The simulation focuses on critical performance trade-offs, such as communication overhead, convergence stability, and predictive accuracy, within resource-constrained settings and across varying data distributions. A primary objective is to establish robust protocols for collaborative knowledge synthesis that ensure global model performance remains highly relevant to local operational contexts. Ultimately, this work seeks to define a scalable framework for decentralized model optimization, facilitating the transition toward autonomous, data-driven industrial operations without compromising privacy or architectural efficiency.

Main Supervisor: Thorsteinn Rögnvaldsson, Professor at Halmstad University
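The core mechanism described above, combining federated learning with parameter-efficient fine-tuning, can be illustrated with a minimal sketch. In this setting each simulated node trains only small adapter matrices (e.g. LoRA-style low-rank factors) on its private data, and a coordinating server merges them by data-weighted averaging (FedAvg). All names, shapes, and node counts below are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch: FedAvg aggregation over parameter-efficient adapters.
# Hypothetical shapes and parameter names; only the adapter weights are
# exchanged, so raw data never leaves a node (data sovereignty) and the
# communication payload stays small (architectural efficiency).
import numpy as np

def fedavg(adapters, num_examples):
    """Data-weighted average of per-node adapter parameter dicts."""
    weights = np.array(num_examples, dtype=float)
    weights /= weights.sum()  # weight each node by its share of the data
    return {
        key: sum(w * a[key] for w, a in zip(weights, adapters))
        for key in adapters[0]
    }

# Three simulated nodes, each holding low-rank adapter factors A and B.
rng = np.random.default_rng(0)
nodes = [
    {"lora_A": rng.normal(size=(8, 64)), "lora_B": np.zeros((64, 8))}
    for _ in range(3)
]

# Nodes hold different amounts of data (a simple non-IID-style imbalance).
global_adapter = fedavg(nodes, num_examples=[100, 300, 600])
print(global_adapter["lora_A"].shape)  # (8, 64)
```

Communication overhead here scales with the adapter size (8x64 plus 64x8 values per round) rather than the full model size, which is the trade-off the simulation is designed to quantify.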