This project develops an explainable artificial intelligence (xAI) framework for data-driven disease surveillance, with a focus on fairness and transparency. Using the unique Swedish Covid-19 Research (SWECOV) infrastructure, which integrates extensive register data from diverse health sources, the project seeks to enhance the interpretability and legal compliance of AI tools used for early outbreak detection. Recent advances in Sweden have enabled large-scale analysis of healthcare data to uncover seasonal trends, geographic hotspots, and demographic disparities, supporting targeted public health interventions. However, the black-box nature of many AI models undermines user trust and complicates regulatory compliance. To address this, we will combine interpretable AI modeling techniques with large language models (LLMs) that provide human-readable explanations of model outputs. This interdisciplinary initiative, spanning epidemiology, computer science, law, and infectious disease research, pursues three aims: (1) developing xAI models for detecting unusual patterns in health counseling data across spatial dimensions, (2) using LLMs to improve the interpretation and explanation of detected anomalies, and (3) analyzing the legal and governance frameworks required for the ethical and transparent deployment of such surveillance systems in Sweden.
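
To illustrate how aims (1) and (2) could fit together, the following is a minimal sketch under assumed conditions: the column names (region, week, calls), the rolling-baseline detector, and the z-score threshold are hypothetical placeholders, not the project's eventual models or data schema. It flags weeks in which counseling contacts deviate sharply from a regional baseline and packages the flagged evidence into a structured prompt that an LLM could rewrite as a plain-language explanation.

```python
# Hypothetical sketch: per-region anomaly flagging on weekly counseling-contact
# counts, followed by prompt construction for LLM-based explanation.
import pandas as pd

def flag_anomalies(df: pd.DataFrame, z_thresh: float = 3.0) -> pd.DataFrame:
    """df is assumed to have columns: region, week (datetime), calls (int)."""
    flagged_parts = []
    for region, g in df.sort_values("week").groupby("region"):
        # Simple rolling baseline stands in for whatever interpretable model
        # the project ultimately uses (e.g., seasonal regression).
        baseline = g["calls"].rolling(window=8, min_periods=4).mean()
        spread = g["calls"].rolling(window=8, min_periods=4).std()
        z = (g["calls"] - baseline) / spread
        flagged = g.assign(expected=baseline, z_score=z).loc[z > z_thresh]
        flagged_parts.append(flagged)
    return pd.concat(flagged_parts) if flagged_parts else pd.DataFrame()

def explanation_prompt(row: pd.Series) -> str:
    # Structured evidence handed to a downstream LLM; the model call itself is
    # out of scope here and would depend on the governance analysis in aim (3).
    return (
        f"In {row['region']} during the week of {row['week']:%Y-%m-%d}, "
        f"{row['calls']} counseling contacts were recorded, versus an expected "
        f"{row['expected']:.0f} (z = {row['z_score']:.1f}). "
        "Explain in plain language why this week was flagged and what caveats apply."
    )
```

The point of the sketch is the separation of concerns: a transparent statistical detector produces the quantitative evidence, and the LLM only verbalizes that evidence rather than making the detection decision itself.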