
ChatGPT in Action: Harnessing Artificial Intelligence Potential and Addressing Ethical Challenges in Medicine, Education, and Scientific Research

Overview
Specialty: General Medicine
Date: 2023 Sep 29
PMID: 37771867
Abstract

Artificial intelligence (AI) tools, such as OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence shows that ChatGPT performs at the level of a medical student on standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT faces limitations, including difficulty with critical-thinking tasks and a tendency to generate false references, necessitating stringent cross-verification. Related concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluation of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration among AI developers, researchers, educators, and policymakers is vital. The development of domain-specific tools, guidelines, and regulations, together with public dialogue, must underpin these efforts to responsibly harness AI's potential.

Citing Articles

Adoption of K-means clustering algorithm in smart city security analysis and mythical experience analysis of urban image.

Han H. PLoS One. 2025; 20(3):e0319620.

PMID: 40063658; PMC: 11892831; DOI: 10.1371/journal.pone.0319620.


Chat Generative Pre-Trained Transformer (ChatGPT) in Oral and Maxillofacial Surgery: A Narrative Review on Its Research Applications and Limitations.

On S, Cho S, Park S, Ha J, Yi S, Park I. J Clin Med. 2025; 14(4).

PMID: 40004892; PMC: 11856154; DOI: 10.3390/jcm14041363.


Evaluating AI performance in nephrology triage and subspecialty referrals.

Koirala P, Thongprayoon C, Miao J, Garcia Valencia O, Sheikh M, Suppadungsuk S. Sci Rep. 2025; 15(1):3455.

PMID: 39870788; PMC: 11772766; DOI: 10.1038/s41598-025-88074-5.


Knowledge, attitude, and perceptions of MENA researchers towards the use of ChatGPT in research: A cross-sectional study.

Jaber S, Hasan H, Alzoubi K, Khabour O. Heliyon. 2025; 11(1):e41331.

PMID: 39811375; PMC: 11731567; DOI: 10.1016/j.heliyon.2024.e41331.


Artificial Intelligence Advancements in Cardiomyopathies: Implications for Diagnosis and Management of Arrhythmogenic Cardiomyopathy.

Salavati A, van der Wilt C, Calore M, van Es R, Rampazzo A, van der Harst P. Curr Heart Fail Rep. 2024; 22(1):5.

PMID: 39661213; DOI: 10.1007/s11897-024-00688-4.

