AI-driven analysis of multiplex immunofluorescence microscopy for predictive and prognostic biomarker research in non-small cell lung cancer.
Dnr: NAISS 2024/23-494
Type: NAISS Small Storage
Principal Investigator: Patrick Micke
Affiliation: Uppsala universitet
Start Date: 2024-08-27
End Date: 2025-09-01
Primary Classification: 10610: Bioinformatics and Systems Biology (methods development to be 10203)

Abstract

In recent years, we have accumulated multiplex immunofluorescence images from over 7000 cancer patients, comprising multilayered images in which various markers delineate cell types and their locations within the cancerous tissue. Additionally, we have incorporated markers for non-cellular structures, providing insights into their spatial relationships with cellular elements. Our objective is to leverage these images to develop advanced AI methods for identifying both cellular and acellular objects, recognizing patterns, and proposing relevant classes. Furthermore, we aim to use these images in interpretable AI methods to reliably predict patients' therapy responses. The project unfolds across three primary dimensions:

Automated Artifact Removal: As a pre-processing step to our current conventional analysis, we manually annotated hundreds of tissue samples, eliminating patterns representing necrotic areas and various artifacts arising from staining, sectioning, and fixation. We plan to develop supervised learning-based approaches that differentiate between diagnosis-relevant and irrelevant tissue regions to improve efficiency and reliability (a minimal sketch follows the abstract).

Multi-Task Learning for Discriminative Representations: Employing Siamese neural networks, a metric-learning-based clustering approach, we aim to discover a set of descriptors that groups tissue samples by cancer types linked to specific clinical features (survival, therapy response, tumor stage, etc.). This will yield a feature representation of tumor types, facilitating the identification of distinct cancer types that share similar morphologies (see the second sketch below).

Confidence Metrics and Explainable AI for Interpretability: Interpretability and visualization of neural networks are crucial for the usability of AI-based tools. By applying approaches that reveal what the networks are learning, we can optimize their performance by identifying 'wrong behaviors' at an early stage. Standard learning-based methods yield deterministic point predictions without any notion of uncertainty, which is a challenge in cancer diagnostics; the recent integration of Bayesian learning introduces confidence into AI-based predictions, providing estimates of the reliability of network outputs (see the third sketch below).
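
To make the first dimension concrete, below is a minimal sketch, in PyTorch, of the kind of supervised patch classifier an artifact-removal step could build on. The class name, channel count, patch size, and labels are illustrative assumptions, not the project's actual pipeline.

```python
import torch
import torch.nn as nn

class ArtifactClassifier(nn.Module):
    """Binary patch classifier: diagnosis-relevant tissue vs. artifact/necrosis."""
    def __init__(self, in_channels: int = 8):
        # in_channels is a placeholder for the number of fluorescence
        # marker layers in a multiplex image stack.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: P(artifact)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)

# One supervised training step on a batch of annotated patches.
model = ArtifactClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

patches = torch.randn(16, 8, 128, 128)          # stand-in for multiplex patches
labels = torch.randint(0, 2, (16, 1)).float()   # 1 = artifact/necrosis, 0 = relevant

optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```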
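
For the second dimension, the following sketch shows the standard metric-learning formulation behind Siamese networks: a shared encoder and a contrastive loss that pulls same-class pairs together and pushes different-class pairs apart. The encoder architecture, embedding size, margin, and pair labels are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared encoder mapping a tissue patch to an embedding vector."""
    def __init__(self, in_channels: int = 8, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalise so pairwise distances live on a common scale.
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Pull embeddings of same-class pairs together; push different-class
    pairs at least `margin` apart (contrastive-loss formulation)."""
    d = F.pairwise_distance(z1, z2)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

encoder = SiameseEncoder()
x1 = torch.randn(16, 8, 128, 128)
x2 = torch.randn(16, 8, 128, 128)
same_class = torch.randint(0, 2, (16,)).float()  # 1 = same clinical group
loss = contrastive_loss(encoder(x1), encoder(x2), same_class)
loss.backward()
```

Clustering the resulting embeddings then groups samples by the clinical features used to define the pairs, which is the descriptor discovery described above.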
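
For the third dimension, one common approximation to the Bayesian confidence estimates mentioned above is Monte Carlo dropout: keeping dropout active at test time and aggregating several stochastic forward passes. The abstract does not specify which Bayesian technique is used, so this sketch is illustrative only; the head architecture and feature inputs are assumptions.

```python
import torch
import torch.nn as nn

class MCDropoutHead(nn.Module):
    """Classifier whose dropout stays active at prediction time,
    approximating Bayesian inference over the weights."""
    def __init__(self, in_features: int = 64, n_classes: int = 2, p: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(inplace=True),
            nn.Dropout(p),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def predict_with_confidence(model, x, n_samples: int = 50):
    """Run several stochastic forward passes; the mean is the prediction,
    the standard deviation a per-class reliability estimate."""
    model.train()  # keep dropout active during sampling
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    return probs.mean(0), probs.std(0)

model = MCDropoutHead()
features = torch.randn(16, 64)  # stand-in for encoder feature vectors
mean_prob, uncertainty = predict_with_confidence(model, features)
```

A high standard deviation flags predictions the network is unsure about, which is exactly the reliability estimate the abstract calls for before such tools can be trusted in diagnostics.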