
Using ChatGPT to Provide Patient-Specific Answers to Parental Questions in the PICU

Overview
Journal: Pediatrics
Specialty: Pediatrics
Date: 2024 Oct 7
PMID: 39370900
Abstract

Objectives: To determine if ChatGPT can incorporate patient-specific information to provide high-quality answers to parental questions in the PICU. We hypothesized that ChatGPT would generate high-quality, patient-specific responses.

Methods: In this cross-sectional study, we generated assessments and plans for 3 PICU patients with respiratory failure, septic shock, and status epilepticus and paired them with 8 typical parental questions. We prompted ChatGPT with instructions, an assessment and plan, and 1 question. Six PICU physicians evaluated the responses for accuracy (1-6), completeness (yes/no), empathy (1-6), and understandability (Patient Education Materials Assessment Tool [PEMAT], 0% to 100%; Flesch-Kincaid grade level). We compared answer quality among scenarios and question types using the Kruskal-Wallis and Fisher's exact tests. We used percent agreement, Cohen's kappa, and Gwet's agreement coefficient to estimate inter-rater reliability.
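For readers who want to reproduce this kind of analysis, the minimal sketch below illustrates the statistical comparisons named above with standard Python tooling. The rating data, group sizes, and contingency counts are hypothetical placeholders, not the study's dataset, and Gwet's agreement coefficient is omitted because it is not provided by SciPy or scikit-learn.

```python
# Illustrative sketch of the abstract's statistical comparisons on hypothetical data.
import numpy as np
from scipy.stats import kruskal, fisher_exact
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical 1-6 accuracy ratings for the three scenarios
# (8 questions x 6 reviewers = 48 scores per scenario).
acc_respiratory_failure = rng.integers(4, 7, size=48)
acc_septic_shock = rng.integers(4, 7, size=48)
acc_status_epilepticus = rng.integers(4, 7, size=48)

# Kruskal-Wallis test: do accuracy ratings differ among the three scenarios?
h_stat, p_kw = kruskal(acc_respiratory_failure, acc_septic_shock, acc_status_epilepticus)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_kw:.3f}")

# Fisher's exact test on a hypothetical 2x2 table of completeness (yes/no)
# for two question types; scipy's fisher_exact handles 2x2 tables.
completeness_table = np.array([[23, 1],   # question type A: complete, incomplete
                               [22, 2]])  # question type B: complete, incomplete
odds_ratio, p_fisher = fisher_exact(completeness_table)
print(f"Fisher's exact p={p_fisher:.3f}")

# Inter-rater reliability for two reviewers rating the same 24 responses:
# raw percent agreement and Cohen's kappa.
reviewer_1 = rng.integers(4, 7, size=24)
reviewer_2 = rng.integers(4, 7, size=24)
percent_agreement = np.mean(reviewer_1 == reviewer_2)
kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Percent agreement={percent_agreement:.2f}, Cohen's kappa={kappa:.2f}")
```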

Results: All answers incorporated patient details, utilizing them for reasoning in 59% of sentences. Responses had high accuracy (median 5.0 [interquartile range (IQR), 4.0-6.0]), empathy (median 5.0 [IQR, 5.0-6.0]), completeness (97% of all questions), and understandability (PEMAT median 100% [IQR, 87.5-100%]; Flesch-Kincaid grade level 8.7). Only 4/144 reviewer scores were <4/6 in accuracy, and no response was deemed likely to cause harm. There was no difference in accuracy, completeness, empathy, or understandability among scenarios or question types. We found fair, substantial, and almost perfect agreement among reviewers for accuracy, empathy, and understandability, respectively.

Conclusions: ChatGPT used patient-specific information to provide high-quality answers to parental questions in PICU clinical scenarios.