SUPR
Using LLMs to Improve Software Test Cases
Dnr:

NAISS 2024/23-146

Type:

NAISS Small Storage

Principal Investigator:

Gregory Gay

Affiliation:

Chalmers tekniska högskola

Start Date:

2024-02-28

End Date:

2025-03-01

Primary Classification:

10205: Software Engineering

Webpage:

Allocation

Abstract

Large language models (LLMs), machine learning models trained on massive corpora of text - including natural language and source code - are an emerging technology with great potential for language analysis and transformation tasks such as translation, summarization, and decision support. We propose that software test cases can be automatically improved through the use of LLMs. For example, LLMs could:

- Improve test readability (e.g., by adding comments or renaming variables)
- Remove redundant or unnecessary test steps or assertions
- Suggest alternative input values that maintain coverage while increasing the likelihood of triggering a fault
- Improve coverage by suggesting method calls to add to the test case

In this project, we will explore and evaluate the capabilities of LLMs with regard to test improvements like those listed above.
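As a hypothetical illustration of the first two improvement types (readability and redundancy removal), the sketch below shows a "before" test and the kind of "after" test an LLM might produce. The test names, variable names, and the function under test (Python's built-in sorted) are illustrative assumptions, not artifacts of the project itself.

```python
import unittest

# Hypothetical "before" test: opaque names and a redundant assertion.
class TestBefore(unittest.TestCase):
    def test_1(self):
        x = [3, 1, 2]
        y = sorted(x)
        self.assertEqual(y, [1, 2, 3])
        self.assertEqual(len(y), 3)  # redundant: implied by the assertion above

# Hypothetical "after" test, as an LLM might rewrite it: descriptive
# names, a clarifying comment, and the redundant assertion removed.
class TestSortedOrdersAscending(unittest.TestCase):
    def test_sorted_orders_elements_ascending(self):
        unsorted_values = [3, 1, 2]
        sorted_values = sorted(unsorted_values)
        # A single assertion fully specifies the expected result.
        self.assertEqual(sorted_values, [1, 2, 3])

if __name__ == "__main__":
    unittest.main()
```

Both versions exercise the same behavior; the rewrite changes only how clearly the test communicates its intent, which is the kind of semantics-preserving transformation the project would ask an LLM to perform and then evaluate.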