Integrating AI into clinical workflows, particularly in medical image analysis, presents challenges in reliability and interpretability. While AI can improve diagnostic accuracy, the true potential lies in creating joint human-AI systems that outperform either alone. To realize this, we must develop stronger AI models while addressing key issues such as model uncertainty and explainability to ensure effective clinician-AI collaboration. This project aims to advance such joint systems, with the ultimate goal of improving patient outcomes.
On the one hand, there is a need to develop stronger AI models tailored to medical image analysis. On the other hand, strong models alone are insufficient: effective integration into clinical practice requires systems that provide insight into AI decision-making and patient routing. The AI must communicate its uncertainty, so that clinicians can concentrate on ambiguous cases, and it must offer explainable predictions. By combining these elements, we aim to improve the efficiency and accuracy of medical workflows.
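To make uncertainty-aware routing concrete, the sketch below thresholds the predictive entropy of a classifier's per-case probabilities to decide which cases are deferred to a clinician. It is a minimal illustration under assumed names (predictive_entropy, route_cases) and a toy threshold, not a specification of the system we will build.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of per-case class probabilities (shape: [n_cases, n_classes])."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def route_cases(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Boolean mask per case: True = defer to clinician, False = handle automatically."""
    return predictive_entropy(probs) > threshold

# Toy example: three cases with increasingly ambiguous predictions.
probs = np.array([
    [0.98, 0.02],  # confident -> handled by AI
    [0.90, 0.10],  # still below the (assumed) entropy threshold
    [0.55, 0.45],  # ambiguous -> deferred to clinician
])
print(route_cases(probs, threshold=0.5))  # [False False  True]
```

In practice the scores would come from a calibrated model (e.g. an ensemble or Monte Carlo dropout) and the threshold would be set against a target clinician workload rather than fixed by hand.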
In this project, we aim to tackle these challenges with the ultimate goal of creating joint human-AI systems that combine advanced AI capabilities with human expertise, resulting in more reliable, trustworthy, and effective solutions for medical image analysis. On the AI development side, we will develop stronger models tailored to medical imaging tasks, leveraging foundation models trained with self-supervised learning (SSL). In addition, we will enhance human-AI collaboration by improving model explainability and uncertainty estimation, alongside intelligent patient routing systems. These systems will ensure that cases requiring human expertise are directed to clinicians, while routine cases are handled efficiently by AI.
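As an illustration of the kind of SSL pretext task we have in mind, the sketch below implements a SimCLR-style contrastive (NT-Xent) objective over embeddings of two augmented views of the same images. The function name nt_xent_loss, the temperature value, and the toy embeddings are assumptions made for illustration; the actual pretraining objective will be chosen and validated as part of the project.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """SimCLR-style contrastive loss over two augmented views of the same images.

    z1, z2: encoder embeddings of shape [batch, dim], one per augmented view.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2B, D], unit-norm rows
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))            # exclude self-similarity
    # The positive for sample i is its other view, located n positions away.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

In a full pipeline, z1 and z2 would be produced by an image encoder applied to two augmentations of each (unlabeled) medical image, and the pretrained encoder would then be fine-tuned on the downstream diagnostic task.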