SUPR
LLM applications to network automation
Dnr:

NAISS 2025/5-426

Type:

NAISS Medium Compute

Principal Investigator:

Carlos Da Silva

Affiliation:

Chalmers tekniska högskola

Start Date:

2025-08-25

End Date:

2026-03-01

Primary Classification:

20203: Communication Systems

Secondary Classification:

10210: Artificial Intelligence

Tertiary Classification:

10208: Natural Language Processing

Webpage:

Allocation

Abstract

The rollout of 6G networks presents significant challenges due to their disaggregated nature, requiring the deployment and meticulous configuration of heterogeneous software entities and applications across diverse operator and edge-cloud environments. Current methods, reliant on fixed rules, struggle with this complexity and with the difficulty of fully understanding target environments in advance. This makes traditional, human-centric deployment too slow and costly for the scale of 6G.

This project proposes an innovative agentic AI system capable of autonomously deploying these 6G software entities and applications in edge-cloud environments. The system will feature AI agents, powered by locally run Large Language Models (LLMs), designed to discover current environmental conditions, probe on demand for more information, and execute the necessary deployment actions (e.g., software installation). The agents will operate online or offline as the setting requires, which is crucial for private operator environments. A knowledge base will provide details about the 6G entities and applications, while a comprehensive test suite will validate successful deployment. By intelligently navigating unknown environments and automating complex configurations and optimization objectives, this solution enables faster, more reliable, performant, and cost-effective 6G rollouts, significantly reducing human intervention and paving the way for self-deploying networks and applications.

To equip these AI agents with the intelligence needed for autonomous deployment, access to high-performance Graphics Processing Units (GPUs) is not merely beneficial but essential. The core of our agentic system relies on locally run LLMs, and performing LLM inference, especially for the complex reasoning and decision-making required in this project, demands substantial computational power that CPUs and conventional consumer-grade GPUs cannot provide.
High-performance GPUs will enable our agents to rapidly process vast amounts of contextual data, understand nuanced network configurations, and output suitable responses. Without robust GPU acceleration, the LLM-powered agents would be severely bottlenecked, undermining the project's ability to deliver a truly autonomous and efficient deployment solution.
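The discover–probe–act loop the abstract describes can be illustrated with a minimal sketch. All names here (`KNOWLEDGE_BASE`, `query_llm`, `discover_environment`, the `ran-cu` entity) are hypothetical placeholders, not the project's actual API; the LLM call is stubbed with a trivial rule.

```python
def query_llm(prompt: str) -> str:
    """Stand-in for a locally run LLM; a trivial rule-based stub here."""
    if "container runtime" in prompt:
        return "install_containerd"
    return "noop"

# Knowledge base: per-entity deployment facts the agents consult before acting.
KNOWLEDGE_BASE = {
    "ran-cu": {"requires": ["container runtime"], "image": "ran-cu:latest"},
}

def discover_environment() -> dict:
    """Initial snapshot of the target edge-cloud host (stubbed)."""
    return {"container runtime": False}

def deploy(entity: str) -> list:
    """Discover the environment, remediate missing requirements, then start."""
    env = discover_environment()
    actions = []
    for req in KNOWLEDGE_BASE[entity]["requires"]:
        if not env.get(req):
            # Probe step: ask the LLM which action satisfies the requirement.
            action = query_llm(f"Missing {req}; choose a remediation action.")
            actions.append(action)
            env[req] = True  # assume the remediation succeeded
    actions.append(f"start {KNOWLEDGE_BASE[entity]['image']}")
    return actions

# Test-suite step: the plan must remediate first, then start the entity.
plan = deploy("ran-cu")
assert plan == ["install_containerd", "start ran-cu:latest"]
```

In the real system the stubbed pieces would be replaced by LLM inference on local GPUs, live environment probes, and the project's knowledge base and validation suite.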