This research investigates the application of federated learning architectures to enhance the analytical capabilities of large-scale models in decentralized industrial environments. By simulating a multi-node network, the study evaluates how effectively parameter-efficient fine-tuning balances data sovereignty against the development of a collective intelligence framework. The simulation focuses on critical performance trade-offs, such as communication overhead, convergence stability, and predictive accuracy, in resource-constrained settings and across varying data distributions. A primary objective is to establish robust protocols for collaborative knowledge synthesis that keep the global model effective in each node's local operational context. Ultimately, this work seeks to define a scalable framework for decentralized model optimization, facilitating the transition toward autonomous, data-driven industrial operations without compromising privacy or architectural efficiency.
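The collaborative setup described above can be illustrated with a minimal sketch of weighted federated averaging (FedAvg-style) applied to parameter-efficient adapter weights. The node count, parameter names (e.g. `lora_A`), and size-proportional weighting are assumptions for illustration, not the study's actual protocol:

```python
# Minimal sketch: FedAvg-style aggregation of adapter weights across
# simulated client nodes. Only the small adapter deltas are exchanged,
# which keeps communication overhead low. All names and shapes here
# are illustrative assumptions.
import numpy as np

def federated_average(client_updates, client_sizes):
    """Aggregate per-client adapter weights, weighted by local data size.

    client_updates: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   list of local dataset sizes (aggregation weights)
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_updates[0]:
        aggregated[name] = sum(
            (n / total) * update[name]
            for update, n in zip(client_updates, client_sizes)
        )
    return aggregated

# Three simulated nodes with unequal data volumes, a simple proxy for
# the varying data distributions the study examines.
rng = np.random.default_rng(0)
updates = [{"lora_A": rng.normal(size=(4, 2))} for _ in range(3)]
sizes = [100, 300, 600]
global_update = federated_average(updates, sizes)
print(global_update["lora_A"].shape)  # (4, 2)
```

In this weighting scheme, nodes holding more local data contribute proportionally more to the aggregated update, one common way to trade off global convergence against local relevance.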
Main Supervisor: Thorsteinn Rögnvaldsson, Professor at Halmstad University