Using LLMs to Improve Software Test Readability
Dnr:

NAISS 2026/3-104

Type:

NAISS Medium

Principal Investigator:

Gregory Gay

Affiliation:

Chalmers tekniska högskola

Start Date:

2026-03-01

End Date:

2027-01-01

Primary Classification:

10205: Software Engineering

Webpage:


Abstract

Large language models (LLMs) - machine learning models trained on massive corpora of text, including natural language and source code - are an emerging technology with great potential for language analysis and transformation tasks such as translation, summarization, and decision support. We propose that software test cases can be automatically improved through the use of LLMs. For example, LLMs could:

- Improve test readability (e.g., by adding comments or renaming variables)
- Remove redundant or unnecessary test steps or assertions
- Suggest alternative input values that maintain coverage while increasing the likelihood of triggering a fault
- Improve coverage by suggesting method calls to add to the test case

In this project, we will explore and evaluate the capabilities of LLMs with regard to test improvements like those listed above. We currently focus primarily on the first two bullet points. In the initial phase of the project, we developed a framework for improving test readability. We now plan to conclude this project by extending the framework, upgrading to more capable models, and running larger-scale experiments. After this phase, we plan to explore the remaining bullet points.
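As an illustration of the first improvement above, the sketch below shows one way a framework might construct a readability-improvement request for an LLM. The function name `improve_readability_prompt`, the prompt wording, and the example test are hypothetical, not the project's actual framework; a real pipeline would send the prompt to a model API and then verify that the rewritten test still compiles and passes.

```python
def improve_readability_prompt(test_source: str) -> str:
    """Build an LLM prompt asking for readability-only edits
    (descriptive names, brief comments) that preserve behavior.
    Hypothetical sketch, not the project's actual implementation."""
    return (
        "Improve the readability of the following unit test.\n"
        "Rename variables descriptively and add brief comments,\n"
        "but do not change any assertions or tested behavior.\n\n"
        + test_source
    )

# Example: an auto-generated test with opaque names, typical of
# search-based test generators.
generated_test = """def test0():
    v0 = Stack()
    v0.push(5)
    v1 = v0.pop()
    assert v1 == 5
"""

prompt = improve_readability_prompt(generated_test)
print(prompt)
```

The key constraint is in the prompt itself: the model is asked to change only surface-level properties (names, comments), so the improved test can be checked against the original simply by re-running it.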