
A Framework for Interpretability in Machine Learning for Medical Imaging

Overview
Journal: IEEE Access
Date: 2024 Oct 18
PMID: 39421804
Abstract

Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.

Citing Articles

Interpretable machine learning to evaluate relationships between DAO/DAOA (pLG72) protein data and features in clinical assessments, functional outcome, and cognitive function in schizophrenia patients.

Lin C, Lin E, Lane H. Schizophrenia (Heidelb). 2025; 11(1):27.

PMID: 39987274; PMC: 11846841. DOI: 10.1038/s41537-024-00548-z.
