SUPR
Trustworthy AI-based decision support in cancer diagnostics
Dnr: NAISS 2024/22-1260
Type: NAISS Small Compute
Principal Investigator: Joakim Lindblad
Affiliation: Uppsala universitet
Start Date: 2024-10-02
End Date: 2025-10-01
Primary Classification: 20603: Medical Image Processing

Abstract

To achieve successful implementation of AI-based decision support in healthcare, it is of the highest priority to enhance trust in system outputs. One reason for the lack of trust is the limited interpretability of the complex, non-linear decision-making process. A way to build trust is thus to improve humans’ understanding of that process, which drives research within the field of Explainable AI. Another reason for reduced trust is the typically poor handling of new and unseen data by today’s AI systems. An important path toward increased trust is, therefore, to enable AI systems to assess their own hesitation. Understanding what a model “knows” and what it “does not know” is a critical part of a machine learning system. For a successful implementation of AI in healthcare and the life sciences, it is imperative to acknowledge the need for cooperation between human experts and AI-based decision-making systems: deep learning methods, and AI systems in general, should not replace, but rather augment, clinicians and researchers. This project aims to facilitate understandable, reliable and trustworthy utilization of AI in healthcare, empowering human medical professionals to interpret and interact with the AI-based decision support system.