Evaluating Consensus Among Physicians in Medical Knowledge Base Construction
Overview
This study evaluates inter-author variability in knowledge base construction. Seven board-certified internists independently profiled the disease "acute perinephric abscess," using a set of 109 peer-reviewed articles as reference material. Each participant created a list of findings associated with the disease, estimated the predictive value and sensitivity of each finding, and assessed the pertinence of each article to each judgment. Agreement in finding selection differed significantly from chance: seven, six, and five participants selected the same finding 78.6, 9.8, and 1.6 times more often, respectively, than predicted by chance. Findings with the highest sensitivity were the most likely to be included by all participants. Each physician's selection of supporting evidence from the medical literature was significantly related to his or her agreement with the majority. The study shows that, with appropriate guidance, physicians can reproducibly extract information from the medical literature, establishing a foundation for multi-author knowledge base construction.
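The abstract does not state which chance model underlies the "times more often than predicted by chance" figures. A minimal sketch of one plausible approach is shown below, assuming each rater selects a given finding independently with his or her own marginal selection rate (a Poisson-binomial model); the rates, counts, and function name here are hypothetical illustrations, not values from the study.

```python
from itertools import combinations
from math import prod

def chance_prob_exactly_k(ps, k):
    """Probability that exactly k raters select a given finding, assuming each
    rater i selects it independently with probability ps[i] (Poisson-binomial)."""
    n = len(ps)
    total = 0.0
    for chosen in combinations(range(n), k):
        chosen = set(chosen)
        total += prod(ps[i] if i in chosen else 1.0 - ps[i] for i in range(n))
    return total

# Hypothetical inputs: each rater's marginal selection rate across the
# candidate findings, the number of candidate findings considered, and the
# observed counts of findings on which exactly k raters agreed.
marginal_rates = [0.30, 0.35, 0.28, 0.40, 0.32, 0.37, 0.25]
n_findings = 120
observed = {7: 9, 6: 6, 5: 4}

for k, obs in observed.items():
    expected = n_findings * chance_prob_exactly_k(marginal_rates, k)
    print(f"{k}-rater agreement: observed/expected = {obs / expected:.1f}x chance")
```

Dividing the observed count of k-way agreements by the count expected under independent selection yields an "x times more often than chance" ratio of the kind reported in the abstract.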