Federated Fine-Tuning of Large Language Models (LLMs) for Domain-Specific Applications
Dnr: NAISS 2025/5-128
Type: NAISS Medium Compute
Principal Investigator: Addi Ait-Mlouk
Affiliation: Högskolan i Skövde
Start Date: 2025-03-28
End Date: 2026-04-01
Primary Classification: 10201: Computer Sciences

Abstract

Large Language Models (LLMs), such as GPT-3, have demonstrated state-of-the-art performance across a wide range of natural language processing (NLP) tasks. Training and fine-tuning them, however, demands substantial computational resources, particularly GPUs, because of their high memory and processing requirements. Fine-tuning LLMs on task-specific datasets involves intensive forward and backward passes, making CPU-only processing impractical due to excessive run times and limited scalability. In federated learning (FL) settings, training across distributed nodes further amplifies these computational demands and requires efficient resource allocation on high-performance computing (HPC) infrastructure. Access to adequate GPU resources is therefore essential for accelerating experimentation, enabling efficient fine-tuning, and advancing NLP research. This proposal requests HPC resources to support large-scale LLM training and federated learning experiments at the required level of performance and scalability.
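
To illustrate the kind of workload described above, the sketch below shows a weighted FedAvg-style aggregation step, one common way to combine per-client updates in federated fine-tuning. It is a minimal, hypothetical example: the NumPy arrays stand in for parameter-efficient adapter tensors (e.g., LoRA matrices) that would in practice come from an LLM, and the key and size values are invented for illustration only; they are not part of the proposed experimental setup.

    # Minimal sketch (assumptions noted above): weighted averaging of
    # per-client parameter dictionaries, weighted by local dataset size.
    import numpy as np

    def fedavg(client_weights, client_sizes):
        """Average per-client parameter dicts, weighted by local dataset size."""
        total = sum(client_sizes)
        keys = client_weights[0].keys()
        return {
            k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in keys
        }

    # Toy example: three clients, each holding one small adapter matrix.
    rng = np.random.default_rng(0)
    clients = [{"adapter.lora_A": rng.normal(size=(4, 8))} for _ in range(3)]
    sizes = [1200, 800, 2000]  # hypothetical local dataset sizes
    global_update = fedavg(clients, sizes)
    print(global_update["adapter.lora_A"].shape)  # (4, 8)

In a real federated fine-tuning run, each aggregation round repeats this step over updates computed on distributed nodes, which is why per-node GPU capacity and fast iteration on HPC infrastructure dominate the overall cost.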