Understanding the Errors Made by Artificial Intelligence Algorithms in Histopathology in Terms of Patient Impact

Overview
Journal: NPJ Digit Med
Date: 2024 Apr 10
PMID: 38600151
Abstract

An increasing number of artificial intelligence (AI) tools are moving towards the clinical realm in histopathology and across medicine. The introduction of such tools will bring several benefits to diagnostic specialities, namely increased diagnostic accuracy and efficiency; however, as no AI tool is infallible, their use will inevitably introduce novel errors. These errors made by AI tools are, most fundamentally, misclassifications made by a computational algorithm. Understanding of how these misclassifications translate into clinical impact on patients is often lacking, meaning that reporting of AI tool safety is incomplete. In this Perspective we consider AI diagnostic tools in histopathology, which are predominantly assessed in terms of technical performance metrics such as sensitivity, specificity and area under the receiver operating characteristic curve. Although these metrics are essential and allow tool comparison, they alone give an incomplete picture of how an AI tool's errors could impact a patient's diagnosis, management and prognosis. We instead suggest assessing and reporting AI tool errors from a pathological and clinical standpoint, demonstrating how this is done in studies of human pathologist errors, and giving examples where available from pathology and radiology. Although this seems a significant task, we discuss ways to move towards this approach in terms of study design, guidelines and regulation. This Perspective seeks to initiate broader consideration of the assessment of AI tool errors in histopathology and across diagnostic specialities, in an attempt to keep patient safety at the forefront of AI tool development and to facilitate safe clinical deployment.
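As a purely illustrative aside (not drawn from the article), the short Python sketch below computes two of the technical performance metrics the abstract refers to, sensitivity and specificity, from a hypothetical set of binary screening calls; the labels, counts and function name are invented for illustration only.

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels where 1 = disease."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical ground truth and AI predictions for ten biopsies (1 = malignant).
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
sens, spec = sensitivity_specificity(truth, preds)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.83

Even in this toy example the abstract's point is visible: the single false negative and the single false positive shift the metrics by similar amounts, yet a missed malignancy and an over-called benign biopsy carry very different consequences for the patient.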

Citing Articles

Evaluating the pathological and clinical implications of errors made by an artificial intelligence colon biopsy screening tool.

Evans H, Sivakumar N, Bhanderi S, Graham S, Snead D, Patel A. BMJ Open Gastroenterol. 2025; 12(1).

PMID: 39762071. PMC: 11749196. DOI: 10.1136/bmjgast-2024-001649.


Evaluating the Performance of ChatGPT in the Prescribing Safety Assessment: Implications for Artificial Intelligence-Assisted Prescribing.

Bull D, Okaygoun D. Cureus. 2024; 16(11):e73003.

PMID: 39634994. PMC: 11617010. DOI: 10.7759/cureus.73003.

References
1.
Habli I, Lawton T, Porter Z. Artificial intelligence in health care: accountability and safety. Bull World Health Organ. 2020; 98(4):251-256. PMC: 7133468. DOI: 10.2471/BLT.19.237487.

2.
Raab S, Grzybicki D, Janosky J, Zarbo R, Meier F, Jensen C. Clinical impact and frequency of anatomic pathology errors in cancer diagnoses. Cancer. 2005; 104(10):2205-13. DOI: 10.1002/cncr.21431.

3.
Rakha E, Toss M, Shiino S, Gamble P, Jaroensri R, Mermel C. Current and future applications of artificial intelligence in pathology: a clinical perspective. J Clin Pathol. 2020; 74(7):409-414. DOI: 10.1136/jclinpath-2020-206908.

4.
Perincheri S, Levi A, Celli R, Gershkovich P, Rimm D, Morrow J. An independent assessment of an artificial intelligence system for prostate cancer detection shows strong diagnostic accuracy. Mod Pathol. 2021; 34(8):1588-1595. PMC: 8295034. DOI: 10.1038/s41379-021-00794-x.

5.
McGenity C, Bossuyt P, Treanor D. Reporting of Artificial Intelligence Diagnostic Accuracy Studies in Pathology Abstracts: Compliance with STARD for Abstracts Guidelines. J Pathol Inform. 2022; 13:100091. PMC: 9576989. DOI: 10.1016/j.jpi.2022.100091.