Previous work on language adaptation has primarily focused on encoder-based Transformer models and classification tasks. Furthermore, while many parameter-efficient fine-tuning (PEFT) methods have been proposed for training large language models (LLMs), only a few have been applied to language adaptation. In this project, we conduct a comparative study of a broad set of PEFT methods for adapting and specializing both encoder- and decoder-based Transformer LLMs to specific languages, including bottleneck adapters, prompt tuning, LoRA, sparse fine-tuning, and BitFit. Based on our findings, we plan to develop specialized language adaptation methods and to give recommendations on which method to use depending on the amount and quality of available data.
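
As a rough illustration of the kind of PEFT setup compared in the study, the sketch below attaches LoRA adapters to a decoder-based LLM using the HuggingFace peft library; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch (assumes HuggingFace `transformers` and `peft`; checkpoint
# and hyperparameters are placeholders, not the project's actual setup):
# LoRA adapters on a decoder-based LLM for continued training on
# target-language text, with the base weights kept frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "bigscience/bloom-560m"  # placeholder multilingual checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Low-rank updates on the attention projections; only these are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank of the low-rank update
    lora_alpha=16,            # scaling factor
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # module names depend on the architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# The wrapped model can then be trained with a standard causal-LM objective
# (e.g. via transformers.Trainer) on monolingual data in the target language.
```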