AI-driven analysis of multiplex immunofluorescence and HE staining for predictive biomarker research in non-small cell lung cancer
Dnr:

NAISS 2024/22-421

Type:

NAISS Small Compute

Principal Investigator:

Hui Yu

Affiliation:

Uppsala universitet

Start Date:

2024-03-21

End Date:

2025-04-01

Primary Classification:

40302: Pathobiology

Webpage:

Allocation

Abstract

In recent years, we have accumulated multiplex immunofluorescence images from over 7000 cancer patients: multilayered images in which different markers delineate cell types and their locations within the cancerous tissue. We have additionally included markers for non-cellular structures, providing insight into their spatial relationships with cellular elements. Our objective is to leverage these images to develop advanced AI methods that identify both cellular and non-cellular objects, recognize patterns, and propose relevant classes. Furthermore, we aim to use these images in interpretable AI methods to reliably predict patients' therapy responses.

The project unfolds along three primary dimensions:

1. Automated Artifact Removal: As a pre-processing step for our current conventional analysis, we have manually annotated hundreds of tissue samples, eliminating necrotic areas and various artifacts arising from staining, sectioning, and fixation. To make this step more efficient and reliable, we plan to develop supervised learning approaches that differentiate diagnosis-relevant from irrelevant tissue regions (a minimal classifier sketch follows this list).

2. Multi-Task Learning for Discriminative Representations: Employing Siamese neural networks, a metric-learning approach, we aim to discover a set of descriptors that group tissue samples by cancer types linked to specific clinical features (survival, therapy response, tumor stage, etc.). This will yield a feature representation of tumor types, facilitating the identification of distinct cancer types that share similar morphologies (see the Siamese sketch below).

3. Confidence Metrics and Explainable AI for Interpretability: Interpretability and visualization of neural networks are crucial for the usability of AI-based tools. Approaches that reveal what the networks are learning allow us to identify 'wrong behaviors' at an early stage and optimize performance accordingly. A challenge in cancer diagnostics is that standard learning-based methods produce deterministic point predictions without uncertainty estimates; the recent integration of Bayesian learning addresses this by attaching confidence estimates to AI-based predictions, indicating how reliable a given network output is (see the uncertainty sketch below).
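
To make the artifact-removal dimension concrete, here is a minimal sketch of a supervised patch classifier, assuming a PyTorch/torchvision stack; the class name, patch size, and label scheme are illustrative placeholders, not the project's actual pipeline.

```python
# Minimal sketch (assumed PyTorch stack): a binary patch classifier separating
# diagnosis-relevant tissue from necrosis/artifacts. Names and sizes are
# illustrative, not the project's actual pipeline.
import torch
import torch.nn as nn
from torchvision import models

class ArtifactClassifier(nn.Module):
    """Classifies patches as relevant tissue (0) or artifact/necrosis (1)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Reuse an ImageNet-pretrained backbone; swap in a new output layer.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

model = ArtifactClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch of 224x224 RGB patches.
patches = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(patches), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Fine-tuning a pretrained backbone on annotated patches is one plausible route; the manually annotated samples described above would supply the labels.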
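For the metric-learning dimension, the following sketch shows a Siamese-style setup trained with a contrastive loss, so that tissue samples sharing a clinical label land close together in descriptor space. Again this assumes PyTorch; the tiny embedding network and margin value are hypothetical illustration choices.

```python
# Minimal sketch (assumed PyTorch stack): a Siamese embedding network with a
# contrastive loss. Pairs with the same clinical label (e.g. therapy response)
# are pulled together; differing pairs are pushed apart by a margin.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps an image patch to a low-dimensional descriptor."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Pull same-label pairs together; push different-label pairs apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_label * d.pow(2)
                      + (1 - same_label) * F.relu(margin - d).pow(2))

net = EmbeddingNet()
x1, x2 = torch.randn(8, 3, 128, 128), torch.randn(8, 3, 128, 128)
same = torch.randint(0, 2, (8,)).float()  # 1 if pair shares a clinical label
loss = contrastive_loss(net(x1), net(x2), same)
loss.backward()
```

The same embedding network is applied to both inputs (weight sharing is what makes it Siamese), and the learned descriptors can then be clustered to surface tumor types with similar morphologies.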
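For the confidence-metrics dimension, one widely used approximation to Bayesian prediction is Monte Carlo dropout: keeping dropout active at inference time and averaging several stochastic forward passes yields a mean prediction plus a variance that can serve as a confidence estimate. The sketch below assumes PyTorch, and the small classifier is only a placeholder.

```python
# Minimal sketch (assumed PyTorch stack): Monte Carlo dropout as a practical
# approximation to Bayesian inference. The predictive variance across T
# stochastic passes provides a reliability estimate for each prediction.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 2),
)

def mc_dropout_predict(model, x, T: int = 50):
    """Return the mean softmax prediction and its per-class variance."""
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs.var(dim=0)

features = torch.randn(4, 64)   # e.g. descriptors from an upstream network
mean_pred, variance = mc_dropout_predict(model, features)
print(mean_pred, variance)      # low variance ~ higher confidence
```

Predictions whose variance exceeds some threshold could be flagged for human review, which is the sense in which such confidence estimates support clinical usability.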