This project will focus on how large language models represent different aspects of language, such as facts, form, and ideology. One aspect to be investigated is factual knowledge representation and recall in semi-parametric and fully parametric models. This will involve querying pre-trained models for factual knowledge and examining their performance across different settings through interventions and, possibly, explainability methods. Models that may be explored include Atlas (Izacard et al., 2022), Llama (Touvron et al., 2023), and TIARA (Shu et al., 2022), as well as other state-of-the-art generative or knowledge-base question answering systems. The project will also explore model explainability, investigating the utility of current approaches and developing new ones that improve the understanding of model predictions.
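As a rough illustration of what such factual probing could look like, the sketch below builds cloze-style prompts (in the spirit of knowledge-probing benchmarks such as LAMA) for a small set of facts and applies a simple paraphrase intervention. The fact triples, templates, and the `query_model` stub are all hypothetical placeholders; in the actual project, `query_model` would call a pre-trained model such as those named above.

```python
# Hypothetical sketch of cloze-style factual probing with a paraphrase
# intervention. The facts, templates, and query_model stub are
# illustrative placeholders, not part of any specific benchmark.

FACTS = [
    {"subject": "Paris", "relation": "capital_of", "object": "France"},
    {"subject": "Dante", "relation": "born_in", "object": "Florence"},
]

# One canonical template plus a paraphrase per relation; comparing model
# answers across the two variants is the simplest form of intervention.
TEMPLATES = {
    "capital_of": [
        "[S] is the capital of [MASK].",
        "The country whose capital is [S] is [MASK].",
    ],
    "born_in": [
        "[S] was born in [MASK].",
        "The birthplace of [S] is [MASK].",
    ],
}

def build_prompts(fact):
    """Instantiate every template variant for one fact."""
    return [t.replace("[S]", fact["subject"]) for t in TEMPLATES[fact["relation"]]]

def query_model(prompt):
    """Stub: a real probe would ask a pre-trained LM to fill [MASK]
    and return its top prediction; here we only echo a placeholder."""
    return "<model prediction for: %s>" % prompt

def probe(facts):
    """Query each prompt variant so that the gold object can be compared
    against predictions across paraphrases of the same fact."""
    results = []
    for fact in facts:
        for prompt in build_prompts(fact):
            results.append({
                "prompt": prompt,
                "gold": fact["object"],
                "prediction": query_model(prompt),
            })
    return results

if __name__ == "__main__":
    for row in probe(FACTS):
        print(row["prompt"], "->", row["gold"])
```

Disagreement between a model's answers to the two paraphrases of the same fact is one signal that recall depends on surface form rather than stored knowledge, which is exactly the kind of behaviour the planned interventions aim to expose.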