Interpretable Artificial Intelligence across Scales for Next-Generation Cancer Prognostics

Background

Cancer is a highly complex disease, and a deep understanding of the mechanisms that underpin its development may lie beyond the capacity of unaided human analysis. Applying image analysis coupled with artificial intelligence to digital pathology slides could help untangle this complexity by integrating 'sub-visual' image features that elude the pathologist's naked eye. This could lead to a finer understanding of how cancer manifests morphologically, ultimately improving the prediction of patient outcomes and response to treatment.

Aim

This project aims to move beyond traditional manual grading systems developed by pathologists toward machine learning–based prognostication models trained on real-world clinical outcomes such as survival, recurrence, and treatment response. Traditional grading, designed to replicate pathologists' assessments, is inherently capped at human-level accuracy. Instead, we propose to discover ML-driven biomarkers that may reveal novel prognostic patterns and potentially exceed expert-level performance.
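Training on real-world survival outcomes, rather than on pathologist labels, typically means optimizing a survival objective over slide-level risk scores. As a minimal illustration (not the project's actual method), the sketch below implements the negative Cox partial log-likelihood, the standard loss for outcome-based risk prediction; the function name and the Breslow-style handling without ties are our own simplifications.

```python
import numpy as np

def cox_partial_nll(risk_scores, times, events):
    """Negative Cox partial log-likelihood (Breslow form, ties ignored).

    risk_scores : predicted log-risk per patient (higher = worse prognosis)
    times       : observed survival or censoring times
    events      : 1 if the event (e.g. death) was observed, 0 if censored
    """
    order = np.argsort(-times)          # sort patients by descending time
    r = risk_scores[order]
    e = events[order]
    # Risk set for patient i = everyone with time >= t_i, i.e. prefix of the
    # descending-sorted array; log-sum of exp(risk) over each risk set:
    log_risk_set = np.log(np.cumsum(np.exp(r)))
    # Only uncensored patients contribute a term to the partial likelihood.
    return -np.sum((r - log_risk_set) * e) / max(e.sum(), 1)

# Toy check: scoring short-lived patients as high risk should give a
# lower loss than the reversed (wrong) ranking.
times = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
events = np.ones(5)
good = cox_partial_nll(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]), times, events)
bad = cox_partial_nll(np.array([2.0, 1.0, 0.0, -1.0, -2.0]), times, events)
```

In practice this loss would be minimized over the parameters of a deep network producing `risk_scores` from whole-slide image features.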

Additionally, we aim to develop broader models capable of learning both cancer-specific and pan-cancer features, moving beyond narrow cancer-specific algorithms. To support clinical adoption, we will enhance model transparency and explainability by integrating language modalities, for example by generating automated diagnostic reports from whole-slide images and incorporating concept learning to provide interpretable outputs.
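One common way to make such outputs interpretable is a concept-bottleneck design: the model first predicts a small set of human-readable concepts, and the prognosis is then a transparent function of those concepts. The sketch below is purely illustrative; the concept names, dimensions, and random weights are hypothetical placeholders, not part of the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 384-d slide embedding and four named concepts.
CONCEPTS = ["tumor_density", "nuclear_atypia", "stromal_reaction", "necrosis"]
W_concept = rng.normal(size=(384, len(CONCEPTS)))  # embedding -> concept logits
w_risk = np.array([0.8, 1.2, -0.3, 0.9])           # concepts -> risk (readable)

def predict(slide_embedding):
    """Map a slide embedding to a risk score via interpretable concepts."""
    logits = slide_embedding @ W_concept
    concepts = 1.0 / (1.0 + np.exp(-logits))       # each concept in (0, 1)
    risk = float(concepts @ w_risk)                # linear, inspectable head
    return risk, dict(zip(CONCEPTS, concepts))

risk, concept_scores = predict(rng.normal(size=384))
```

Because the final risk is a linear combination of named concept scores, a clinician can inspect which concepts drove a given prediction, which is the transparency property the project targets.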

Funding

  • European Research Council (ERC)

People

Clément Grisi

PhD Candidate

Judith Lefkes

PhD Candidate

Khrystyna Faryna

PhD Candidate

Geert Litjens

Professor

Publications

  • C. Grisi, G. Litjens and J. van der Laak, "Hierarchical Vision Transformers for Context-Aware Prostate Cancer Grading in Whole Slide Images", arXiv:2312.12619, 2023.
  • C. Grisi, G. Litjens and J. van der Laak, "Masked Attention as a Mechanism for Improving Interpretability of Vision Transformers", arXiv:2404.18152, 2024.
  • J. Lefkes, M. D'Amato, S. Sun, G. Litjens and F. Ciompi, "Large Language Models Automate Diagnostic Conclusions Generation from Microscopic Descriptions in Multiple Cancer Types", Laboratory Investigation, 2025;105:103608.
  • J. Lefkes, C. Grisi and G. Litjens, "A Balancing Act: Optimizing Classification and Retrieval in Cross-Modal Vision Models", Medical Imaging with Deep Learning, 2025.