SUPR
Exploring large language models
Dnr:

NAISS 2024/22-1097

Type:

NAISS Small Compute

Principal Investigator:

Denitsa Saynova

Affiliation:

Chalmers tekniska högskola

Start Date:

2024-09-01

End Date:

2025-09-01

Primary Classification:

10208: Language Technology (Computational Linguistics)

Webpage:

Allocation

Abstract

This project will focus on how large language models represent different aspects of language, e.g. facts, form, and ideology. One aspect to be investigated is the associations learned and used by language models relating to attitudes and views studied in the behavioural and social sciences. We plan to investigate these mainly by prompting pre-trained models. From a technical perspective, we wish to examine models' sensitivity to prompts, refusal behaviour, topics, personas, and other phenomena that may influence whether, how, and which learned associations a language model presents during inference. We aim to establish whether language models exhibit consistent and robust viewpoints and which factors may contribute to their expression.