PMID: 38568227

ChatGPT Vs. Neurologists: a Cross-sectional Study Investigating Preference, Satisfaction Ratings and Perceived Empathy in Responses Among People Living with Multiple Sclerosis

Abstract

Background: ChatGPT is a publicly accessible natural language processing chatbot that replies to users' queries. We conducted a cross-sectional study to assess the preferences, satisfaction, and perceived empathy of people living with Multiple Sclerosis (PwMS) toward two alternate responses to four frequently asked questions, one authored by a group of neurologists and the other by ChatGPT.

Methods: An online form was distributed through digital communication platforms. PwMS were blinded to the author of each response and were asked to express their preference between the two alternate responses to each of the four questions. Overall satisfaction was assessed using a Likert scale (1-5); perceived empathy was assessed with the Consultation and Relational Empathy (CARE) scale.

Results: We included 1133 PwMS (age, 45.26 ± 11.50 years; females, 68.49%). ChatGPT's responses received significantly higher empathy scores than neurologists' responses (Coeff = 1.38; 95% CI = 0.65, 2.11; p < 0.01). No association was found between ChatGPT's responses and mean satisfaction (Coeff = 0.03; 95% CI = -0.01, 0.07; p = 0.157). College graduates, compared with responders with a high school education, were significantly less likely to prefer the ChatGPT response (IRR = 0.87; 95% CI = 0.79, 0.95; p < 0.01).

Conclusions: ChatGPT-authored responses were perceived as more empathetic than those written by neurologists. Although AI holds potential, physicians should be prepared to interact with increasingly digitized patients and to guide them in the responsible use of AI. Future development should consider tailoring AI responses to individual characteristics. Amid the progressive digitalization of the population, ChatGPT could emerge as a helpful support in healthcare management rather than an alternative.

Citing Articles

Artificial intelligence and science of patient input: a perspective from people with multiple sclerosis.

Helme A, Kalra D, Brichetto G, Peryer G, Vermersch P, Weiland H. Front Immunol. 2025; 16:1487709.

PMID: 40034708; PMC: 11872699; DOI: 10.3389/fimmu.2025.1487709.


Performance of ChatGPT in Pediatric Audiology as Rated by Students and Experts.

Ratuszniak A, Gos E, Lorens A, Skarzynski P, Skarzynski H, Jedrzejczak W. J Clin Med. 2025; 14(3).

PMID: 39941547; PMC: 11818674; DOI: 10.3390/jcm14030875.


"Having providers who are trained and have empathy is life-saving": Improving primary care communication through thematic analysis with ChatGPT and human expertise.

Stage M, Creamer M, Ruben M. PEC Innov. 2025; 6:100371.

PMID: 39866208; PMC: 11758403; DOI: 10.1016/j.pecinn.2024.100371.


ChatGPT, Google, or PINK? Who Provides the Most Reliable Information on Side Effects of Systemic Therapy for Early Breast Cancer?

Lukac S, Griewing S, Leinert E, Dayan D, Heitmeir B, Wallwiener M. Clin Pract. 2025; 15(1).

PMID: 39851791; PMC: 11764162; DOI: 10.3390/clinpract15010008.


Healthcare professionals and the public sentiment analysis of ChatGPT in clinical practice.

Lu L, Zhu Y, Yang J, Yang Y, Ye J, Ai S. Sci Rep. 2025; 15(1):1223.

PMID: 39774168; PMC: 11707298; DOI: 10.1038/s41598-024-84512-y.

