SUPR
Deep Learning Models for Burned Area Mapping
Dnr:

NAISS 2025/22-671

Type:

NAISS Small Compute

Principal Investigator:

Eric Brune

Affiliation:

Kungliga Tekniska högskolan

Start Date:

2025-04-26

End Date:

2026-05-01

Primary Classification:

10505: Geophysics (Applications with Earth Observation at 20703)

Webpage:

Abstract

Accurate and timely burned area mapping is crucial for assessing wildfire impacts, guiding response efforts, and understanding ecological recovery and climate change effects. However, current satellite-based mapping methods face significant challenges. Optical sensors such as Sentinel-2 MSI provide valuable spectral information but are frequently hindered by cloud cover and smoke, especially during and immediately after fire events. Synthetic Aperture Radar (SAR) sensors, such as Sentinel-1, penetrate these obstructions but produce data that is inherently affected by speckle noise and difficult to interpret directly for accurate boundary delineation.

This project aims to develop and evaluate advanced deep learning models that overcome these limitations and enhance burned area mapping capabilities. We will investigate two complementary approaches leveraging multi-modal and multi-temporal satellite data.

The first approach focuses on SAR-to-optical (S2O) image translation using conditional diffusion models. Specifically, we will develop architectures such as Swin-U-DiT, which integrate efficient Transformer modules within a U-Net backbone, to generate high-fidelity, cloud-free optical-like imagery from multi-temporal Sentinel-1 SAR and pre-fire Sentinel-2 MSI data. The utility of the translated images will be assessed through downstream burned area segmentation tasks.

The second approach explores multi-task learning with context-aware diffusion models. This involves designing models, such as FireSR-DDPM, that simultaneously perform super-resolution of moderate-resolution imagery (e.g., MODIS) and burned area segmentation, conditioned on high-resolution pre-fire context (e.g., Sentinel-2). The goal is daily, high-resolution monitoring that fuses the temporal frequency of MODIS with the fine spatial detail available from Sentinel-2.
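For context, conditional diffusion models of the kind described above are typically trained with the standard DDPM epsilon-prediction objective: corrupt a clean image with scheduled Gaussian noise, then train a network to predict that noise given the noisy image and the conditioning input (here, the SAR and pre-fire imagery). The sketch below is illustrative only and does not reflect the project's actual code; the names `eps_model` and `cond` are assumptions.

```python
import numpy as np

def make_noise_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns alpha_bar[t] = prod_{s<=t}(1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, noise):
    """Forward diffusion: sample the noisy image x_t from clean x0 at step t."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

def ddpm_loss(eps_model, x0, cond, t, alpha_bar, rng):
    """Epsilon-prediction MSE loss; cond carries the SAR / pre-fire context."""
    noise = rng.standard_normal(x0.shape)
    x_t = q_sample(x0, t, alpha_bar, noise)
    eps_hat = eps_model(x_t, cond, t)  # network predicts the injected noise
    return np.mean((eps_hat - noise) ** 2)

# Illustration: a dummy model that predicts zero noise gives a loss near
# E[noise^2] = 1 for standard Gaussian noise.
rng = np.random.default_rng(0)
alpha_bar = make_noise_schedule()
x0 = rng.standard_normal((64, 64))
loss = ddpm_loss(lambda x, c, t: np.zeros_like(x), x0, None, 500, alpha_bar, rng)
```

Conditioning (e.g., on multi-temporal Sentinel-1 patches) only changes what `eps_model` receives as input; the loss itself is unchanged.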
Models will be trained and rigorously evaluated on extensive datasets comprising thousands of wildfire events, primarily from Canada, using established ground-truth datasets such as the National Burned Area Composite (NBAC). Performance will be measured with standard image-generation metrics (FID, LPIPS) and segmentation metrics (IoU, F1-score) and compared against baseline methods. The project's outcomes will include validated deep learning frameworks and insights into their effectiveness and computational feasibility, contributing robust tools for improved wildfire monitoring and management.
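For reference, the segmentation metrics named above are computed from pixel-wise overlap between a predicted burn mask and the ground-truth mask. A minimal sketch (the function name is illustrative, not project code):

```python
import numpy as np

def iou_f1(pred, target):
    """IoU (Jaccard) and F1 (Dice) for binary burned-area masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.logical_and(pred, target).sum()   # burned in both
    fp = np.logical_and(pred, ~target).sum()  # predicted burned, actually not
    fn = np.logical_and(~pred, target).sum()  # missed burned pixels
    union = tp + fp + fn
    iou = tp / union if union else 1.0        # empty masks agree perfectly
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return float(iou), float(f1)
```

Note that F1 (equivalently Dice) always weights the intersection more generously than IoU: F1 = 2·IoU / (1 + IoU), so the two metrics rank models identically but differ in scale.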