Context-Aware Time Series Foundational Modeling
Dnr: NAISS 2025/22-1603
Type: NAISS Small Compute
Principal Investigator: Niclas Wölner-Hanssen
Affiliation: Högskolan i Halmstad
Start Date: 2025-11-26
End Date: 2026-12-01
Primary Classification: 10210 (Artificial Intelligence)

Abstract

Time-series data, characterized by their sequential order and temporal dependencies, appear in virtually every scientific and industrial domain, including retail, finance, manufacturing, healthcare, and the natural sciences. Inspired by the success of large language models (LLMs) in natural language processing, recent research has increasingly explored Time-Series Foundation Models (TSFMs), with contributions from leading research labs such as Google Research, IBM Research, and AWS AI Labs. Much like LLMs, these models are trained on extensive corpora of heterogeneous time-series data, with the ambition that a single pre-trained model will generalize to new, out-of-distribution (OOD) forecasting or classification tasks in a zero- or few-shot setting.

However, time-series data differ fundamentally from language. Human language is governed by a shared grammar, vocabulary, and semantic structure, which allows knowledge to transfer naturally across both domains and tasks: from news articles to product reviews, for instance, or from translation to summarization. These forms of transfer are possible because the basic rules of linguistic composition remain consistent across contexts. Time-series domains, by contrast, arise from heterogeneous causal systems: the processes that generate energy demand, stock prices, or humidity levels differ not only in magnitude or variability but in their underlying causal mechanisms. A model that treats such systems as statistically interchangeable risks conflating their causal logic. Before attempting to generalize across domains, a TSFM must therefore recognize that each domain operates under its own structural and causal assumptions.

This project aims to develop a new class of context-aware TSFMs capable of generalizing across heterogeneous domains while adapting to local temporal dynamics. The central insight is that foundation models for time series must learn not only to generalize but also to discern: to identify when knowledge is transferable, when it must be adapted, and when it should be ignored. Achieving this requires mechanisms that separate domain-specific structure from universal temporal reasoning. To address these challenges, the project will explore a hypernetwork-based approach that allows the model to adapt flexibly across domains and temporal conditions. The core idea is to separate general temporal reasoning from domain-specific behavior through conditioning mechanisms that adjust the model based on contextual information. One component captures cross-domain differences using metadata and statistical characteristics, enabling the model to account for structural distinctions between datasets. Another adapts the model within a domain by responding to local temporal context, supporting robustness to non-stationarity, structural changes, and variable lag relationships. Together, these adaptive mechanisms aim to improve generalization across heterogeneous time-series domains while maintaining sensitivity to domain-specific dynamics.
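To make the conditioning idea concrete, the minimal PyTorch sketch below shows one way a hypernetwork can modulate a shared temporal backbone: one conditioning path driven by a domain-level context vector (standing in for metadata and dataset statistics) and one driven by a summary of the local input window. The abstract does not specify an architecture, so the GRU backbone, FiLM-style modulation, and all names and dimensions here are illustrative assumptions, not the project's actual design.

import torch
import torch.nn as nn

class HyperFiLM(nn.Module):
    """Hypothetical sketch: a hypernetwork mapping a context vector to
    FiLM-style scale/shift parameters that modulate shared features."""
    def __init__(self, context_dim: int, hidden_dim: int):
        super().__init__()
        # Hypernetwork: context vector -> per-channel (gamma, beta).
        self.hyper = nn.Sequential(
            nn.Linear(context_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * hidden_dim),
        )

    def forward(self, h: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, hidden_dim); context: (batch, context_dim).
        gamma, beta = self.hyper(context).chunk(2, dim=-1)
        # Broadcast over the time axis: each context rescales and shifts
        # the shared representation differently.
        return gamma.unsqueeze(1) * h + beta.unsqueeze(1)

class ContextAwareForecaster(nn.Module):
    """Shared temporal backbone with two conditioning paths:
    (1) domain-level context (metadata / dataset statistics),
    (2) local temporal context (summary of the recent window)."""
    def __init__(self, input_dim=1, hidden_dim=64, domain_ctx_dim=16, horizon=24):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.domain_film = HyperFiLM(domain_ctx_dim, hidden_dim)
        self.local_film = HyperFiLM(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, horizon)

    def forward(self, x: torch.Tensor, domain_ctx: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_dim); domain_ctx: (batch, domain_ctx_dim).
        h, _ = self.encoder(x)
        h = self.domain_film(h, domain_ctx)   # adapt across domains
        local_ctx = h.mean(dim=1)             # crude local-context summary
        h = self.local_film(h, local_ctx)     # adapt to local dynamics
        return self.head(h[:, -1])            # forecast the next `horizon` steps

# Usage: 8 univariate series of length 96, 16-dimensional domain embedding.
model = ContextAwareForecaster()
y_hat = model(torch.randn(8, 96, 1), torch.randn(8, 16))  # shape: (8, 24)

In this sketch the domain path conditions the whole representation once per series, while the local path reconditions it from the window itself, mirroring the abstract's separation of cross-domain structure from within-domain temporal adaptation.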