Achieving the UN Sustainable Development Goals (SDGs) requires integrating knowledge across a variety of disciplines such as climate, energy, health, and social sciences. However, the available information remains fragmented across these disciplines, limiting evidence-based policymaking as the 2030 deadline approaches.
Foundation models offer new opportunities to synthesize such knowledge at scale. However, existing Large Language Models (LLMs) are general-purpose: trained on broad internet text, they are not optimized for sustainability research or policy analysis. As a result, they may fail to capture the complexity of SDG synergies, trade-offs, and regional contexts. In this project, we will explore different approaches for the efficient and responsible use of LLMs in sustainable development research. This includes an experimental evaluation of the performance of different open-source Small Language Models (SLMs) on related tasks such as SDG allocation, i.e., assigning texts to the goals they address.
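To make the SDG allocation task concrete, the sketch below shows a minimal keyword-overlap baseline of the kind an SLM classifier would be evaluated against. The keyword lists are illustrative assumptions, not an official SDG taxonomy:

```python
# Illustrative baseline for SDG allocation: match a document to SDGs by
# keyword overlap. Keyword sets here are hypothetical examples; in the
# project, an SLM classifier would replace this scoring step.

SDG_KEYWORDS = {
    "SDG 3 (Good Health and Well-being)": {"health", "disease", "vaccine"},
    "SDG 7 (Affordable and Clean Energy)": {"energy", "solar", "renewable"},
    "SDG 13 (Climate Action)": {"climate", "emissions", "warming"},
}

def allocate_sdgs(text: str) -> list[str]:
    """Return SDG labels whose keywords appear in the text, best match first."""
    tokens = set(text.lower().split())
    scores = {sdg: len(kw & tokens) for sdg, kw in SDG_KEYWORDS.items()}
    return [sdg for sdg, s in sorted(scores.items(), key=lambda x: -x[1]) if s > 0]

print(allocate_sdgs("Solar energy adoption reduces emissions and improves health"))
```

A baseline like this also highlights why language models are needed: keyword matching cannot capture synergies or trade-offs between goals, which is exactly where SLM performance would be measured.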