Parameter-Efficient Language Adaptation of LLMs
Dnr:

NAISS 2024/22-1172

Type:

NAISS Small Compute

Principal Investigator:

Jenny Kunz

Affiliation:

Linköpings universitet

Start Date:

2024-09-12

End Date:

2025-10-01

Primary Classification:

10208: Language Technology (Computational Linguistics)

Webpage:


Abstract

Previous work on language adaptation has primarily focused on encoder-based Transformer models and classification tasks. Furthermore, many parameter-efficient fine-tuning (PEFT) methods have been proposed for training large language models (LLMs), but only a few have been applied to language adaptation. In this project, we perform a comparative study of a broad set of PEFT methods for adapting and specializing both encoder- and decoder-based Transformer LLMs to specific languages, including bottleneck adapters, prompt tuning, LoRA, sparse fine-tuning, and BitFit. Based on our findings, we plan to develop specialized language adaptation methods and to give recommendations on which method to use based on the amount and quality of available data.
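
As a rough illustration of one of the PEFT methods named above (LoRA), the sketch below shows how a decoder-based LLM could be wrapped for parameter-efficient language adaptation using the Hugging Face transformers and peft libraries. The base model, rank, scaling factor, and target modules are placeholder assumptions for illustration only, not the project's actual experimental setup.

# Minimal sketch (assumed setup, not the project's code): LoRA adaptation of a
# decoder-only LLM for continued pre-training on target-language text.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA adds small trainable low-rank matrices to selected weight matrices
# while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                         # rank of the low-rank update (assumed value)
    lora_alpha=16,               # scaling factor (assumed value)
    lora_dropout=0.05,
    target_modules=["c_attn"],   # attention projection module in GPT-2
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights is trainable

# Language adaptation would then continue pre-training this wrapped model on
# monolingual data in the target language, e.g. with a standard causal-LM
# training loop or the transformers Trainer API.

Other methods compared in the project (bottleneck adapters, prompt tuning, sparse fine-tuning, BitFit) differ mainly in which extra parameters are introduced or which existing parameters are unfrozen, but follow the same pattern of keeping most of the base model fixed.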