PMID: 38619840

AI-Generated Draft Replies Integrated Into Health Records and Physicians' Electronic Communication

Abstract

Importance: Timely tests are warranted to assess the association between generative artificial intelligence (GenAI) use and physicians' work efforts.

Objective: To investigate the association between GenAI-drafted replies for patient messages and the time physicians spend answering messages, as well as the length of their replies.

Design, Setting, And Participants: Randomized waiting list quality improvement (QI) study from June to August 2023 in an academic health system. Primary care physicians were randomized to an immediate activation group and a delayed activation group. Data were analyzed from August to November 2023.

Exposure: Access to GenAI-drafted replies for patient messages.

Main Outcomes And Measures: Outcomes were (1) time spent reading messages, (2) time spent replying to messages, (3) reply length, and (4) physician likelihood to recommend GenAI drafts. The a priori hypothesis was that GenAI drafts would be associated with less physician time spent reading and replying to messages. A mixed-effects model was used.
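The abstract names the estimator but not its exact specification. As a rough illustration only, the sketch below (Python with statsmodels; the simulated data, variable names, and effect sizes are hypothetical, not the authors') fits a mixed-effects regression of log-transformed read time with a random intercept per physician, one common way to produce percentage-scale estimates like those reported in the Results.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Simulate message-level read times: each physician has a baseline
    # speed (random intercept), and GenAI exposure shifts log read time.
    rows = []
    for doc in range(52):                      # 52 physicians, as in the study
        doc_effect = rng.normal(0.0, 0.3)      # physician-level variation
        for _ in range(40):                    # hypothetical 40 messages each
            genai = int(rng.integers(0, 2))    # 0/1 draft availability
            log_t = 3.3 + doc_effect + 0.2 * genai + rng.normal(0.0, 0.8)
            rows.append((doc, genai, float(np.exp(log_t))))

    df = pd.DataFrame(rows, columns=["physician_id", "genai_active", "read_seconds"])
    df["log_read"] = np.log(df["read_seconds"])

    # A random intercept per physician accounts for repeated messages from
    # the same clinician; the fixed effect estimates the GenAI association.
    fit = smf.mixedlm("log_read ~ genai_active", df,
                      groups=df["physician_id"]).fit()
    beta = fit.params["genai_active"]
    print(f"estimated change in read time: {(np.exp(beta) - 1) * 100:+.1f}%")

Modeling the log of time is what lets a coefficient be read as a percent change, since exp(beta) multiplies the expected time.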

Results: Fifty-two physicians participated in this QI study, with 25 randomized to the immediate activation group and 27 randomized to the delayed activation group. A contemporary control group included 70 physicians. There were 18 female participants (72.0%) in the immediate group and 17 female participants (63.0%) in the delayed group; the median age range was 35-44 years in the immediate group and 45-54 years in the delayed group. The median (IQR) time spent reading messages in the immediate group was 26 (11-69) seconds at baseline, 31 (15-70) seconds 3 weeks after entry to the intervention, and 31 (14-70) seconds 6 weeks after entry. The delayed group's median (IQR) read time was 25 (10-67) seconds at baseline, 29 (11-77) seconds during the 3-week waiting period, and 32 (15-72) seconds 3 weeks after entry to the intervention. The contemporary control group's median (IQR) read times were 21 (9-54), 22 (9-63), and 23 (9-60) seconds in corresponding periods. The estimated association of GenAI was a 21.8% increase in read time (95% CI, 5.2% to 41.0%; P = .008), a -5.9% change in reply time (95% CI, -16.6% to 6.2%; P = .33), and a 17.9% increase in reply length (95% CI, 10.1% to 26.2%; P < .001). Participants recognized GenAI's value and suggested areas for improvement.
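A note on reading these estimates: if the times were modeled on the log scale (an assumption consistent with percent-change reporting, though the abstract does not state the transformation), each reported percent change corresponds to a log-scale coefficient beta = ln(1 + pct/100), so the 21.8% read-time increase implies beta of about 0.197.

    import numpy as np

    # Back-convert the reported read-time effect and its CI endpoints
    # (21.8%; 95% CI, 5.2% to 41.0%) to log-scale coefficients.
    for pct in (21.8, 5.2, 41.0):
        print(f"{pct:5.1f}% change -> beta = {np.log(1 + pct / 100):.3f}")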

Conclusions And Relevance: In this QI study, GenAI-drafted replies were associated with significantly increased read time, no change in reply time, significantly increased reply length, and some perceived benefits. Rigorous empirical tests are necessary to further examine GenAI's performance. Future studies should examine patient experience and compare multiple GenAIs, including those with medical training.

Citing Articles

Program Cost and Return on Investment of a Remote Patient Monitoring Program for Hypertension Management.

Zhang D, Millet L, Bellows B, Lee S, Mann D. medRxiv. 2025.

PMID: 39974005. PMC: 11838636. DOI: 10.1101/2025.01.29.25321334.


Bridging the gap: a practical step-by-step approach to warrant safe implementation of large language models in healthcare.

Workum J, van de Sande D, Gommers D, van Genderen M. Front Artif Intell. 2025; 8:1504805.

PMID: 39931218. PMC: 11808533. DOI: 10.3389/frai.2025.1504805.


GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial.

Goh E, Gallo R, Strong E, Weng Y, Kerman H, Freed J. Nat Med. 2025.

PMID: 39910272. DOI: 10.1038/s41591-024-03456-y.


The TRIPOD-LLM reporting guideline for studies using large language models.

Gallifant J, Afshar M, Ameen S, Aphinyanaphongs Y, Chen S, Cacciamani G. Nat Med. 2025; 31(1):60-69.

PMID: 39779929. DOI: 10.1038/s41591-024-03425-5.


Can Large Language Models Aid Caregivers of Pediatric Cancer Patients in Information Seeking? A Cross-Sectional Investigation.

Sezgin E, Jackson D, Kocaballi A, Bibart M, Zupanec S, Landier W. Cancer Med. 2025; 14(1):e70554.

PMID: 39776222. PMC: 11705392. DOI: 10.1002/cam4.70554.

