
ChatGPT's Accuracy on Magnetic Resonance Imaging Basics: Characteristics and Limitations Depending on the Question Type

Overview
Specialty: Radiology
Date: 2024 Jan 22
PMID: 38248048
Abstract

Our study aimed to assess the accuracy and limitations of ChatGPT in the domain of magnetic resonance imaging (MRI), focusing on its performance in answering simple knowledge questions and specialized multiple-choice questions related to MRI. A two-step approach was used to evaluate ChatGPT. In the first step, 50 simple MRI-related questions were asked, and ChatGPT's answers were categorized as correct, partially correct, or incorrect by independent researchers. In the second step, 75 multiple-choice questions covering various MRI topics were posed, and the answers were categorized in the same way. Interobserver agreement was assessed with Cohen's kappa coefficient. ChatGPT demonstrated high accuracy in answering straightforward MRI questions, with over 85% of its answers classified as correct. However, its performance varied significantly across the multiple-choice questions, with accuracy rates ranging from 40% to 66.7% depending on the topic, indicating a notable gap in its ability to handle more complex, specialized questions that require deeper understanding and context. In conclusion, this study highlights both the potential and the limitations of ChatGPT in the healthcare sector, particularly in radiology: while proficient in responding to straightforward MRI-related questions, ChatGPT exhibits variability in its ability to accurately answer complex multiple-choice questions that require more profound, specialized knowledge of MRI. This discrepancy underscores the nuanced role AI can play in medical education and healthcare decision-making, necessitating a balanced approach to its application.
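The interobserver agreement mentioned above is quantified with Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between the two raters and p_e is the agreement expected by chance from each rater's label frequencies. The Python sketch below is a minimal illustration of that computation on a three-category rating task like the one in this study; the cohen_kappa function and the example labels are our own hypothetical illustration, not the study's code or data.

from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa, (p_o - p_e) / (1 - p_e), for two raters over the same items."""
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical categorizations of five ChatGPT answers by two independent researchers.
r1 = ["correct", "correct", "partially correct", "incorrect", "correct"]
r2 = ["correct", "partially correct", "partially correct", "incorrect", "correct"]
print(f"kappa = {cohen_kappa(r1, r2):.2f}")  # 0.69, conventionally "substantial" agreement

For real analyses, scikit-learn's sklearn.metrics.cohen_kappa_score computes the same statistic.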
