Efficient Knowledge Distillation via REFLECT for Scalable Language Model Adaptation
Dnr: NAISS 2025/22-598
Type: NAISS Small Compute
Principal Investigator: Hoda FakharzadehJahromy
Affiliation: Linköpings universitet
Start Date: 2025-04-16
End Date: 2026-05-01
Primary Classification: 10206: Computer Engineering
Webpage:


Abstract

This project explores a new way of making large language models more efficient through a method we call REFLECT (Reverse-Enhanced Framework for Learning Embeddings with Consistent Transfer). The goal is to help smaller “student” models learn from larger “teacher” models by aligning their internal representations through embedding-space transformations and cross-attention. We are still developing and testing the approach; the plan is to evaluate it on reasoning tasks and compare it against standard fine-tuning. If it works as expected, REFLECT could offer a lighter, more targeted way to update and adapt language models without the heavy cost of full retraining.
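
For illustration only, the following is a minimal sketch of the kind of alignment the abstract describes: student hidden states are projected into the teacher's embedding space and matched to teacher hidden states through cross-attention, with a simple distillation loss on the aligned representations. All module names, dimensions, and the loss choice are assumptions made for this sketch; they are not the REFLECT implementation, which is still under development.

# Hypothetical sketch of teacher-student representation alignment
# (illustrative only; not the project's actual REFLECT code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    """Project student hidden states into the teacher's embedding space
    and let them attend over teacher hidden states via cross-attention."""
    def __init__(self, student_dim: int, teacher_dim: int, n_heads: int = 8):
        super().__init__()
        # Embedding-space transformation: student dim -> teacher dim.
        self.proj = nn.Linear(student_dim, teacher_dim)
        # Cross-attention: projected student states query the teacher states.
        self.cross_attn = nn.MultiheadAttention(teacher_dim, n_heads, batch_first=True)

    def forward(self, student_h: torch.Tensor, teacher_h: torch.Tensor) -> torch.Tensor:
        q = self.proj(student_h)                               # (batch, seq, teacher_dim)
        aligned, _ = self.cross_attn(q, teacher_h, teacher_h)  # attend to teacher representations
        return aligned

def distillation_loss(aligned: torch.Tensor, teacher_h: torch.Tensor) -> torch.Tensor:
    # One simple choice of consistency objective: pull the aligned student
    # representations toward the frozen teacher representations.
    return F.mse_loss(aligned, teacher_h)

# Usage sketch with random tensors standing in for real model activations.
student_h = torch.randn(2, 16, 512)    # assumed smaller student hidden size
teacher_h = torch.randn(2, 16, 1024)   # assumed larger teacher hidden size
head = AlignmentHead(student_dim=512, teacher_dim=1024)
loss = distillation_loss(head(student_h, teacher_h), teacher_h)
loss.backward()

In practice the teacher would be kept frozen and only the student and the alignment modules updated, which is what makes this style of distillation lighter than full retraining; the specific objective and attention placement are among the design choices the project plans to evaluate.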