
Disease Vocabulary Size As a Surrogate Marker for Physicians' Disease Knowledge Volume

Overview
Journal: PLoS One
Date: 2018 Dec 28
PMID: 30589866
Abstract

Objective: Recognizing what physicians know and do not know about a particular disease is one of the keys to designing clinical decision support systems, since these systems can fulfill a complementary role by recognizing this boundary. To our knowledge, however, no study has attempted to quantify how many diseases physicians actually know, and thus this boundary remains unclear. This study explores a method to address this problem by investigating whether the vocabulary assessment techniques developed in the field of linguistics can be applied to assess physicians' knowledge.

Methods: Designing the test required special attention to the particulars of disease knowledge assessment. First, to avoid imposing unnecessary burdens on the physicians, we chose a self-assessment questionnaire that was straightforward to fill out. Second, to prevent overestimation, we used a "pseudo-word" approach: fictitious diseases were included in the questionnaire, and positive responses to them were penalized. Third, we used paper-based tests, rather than computer-based ones, to further prevent participants from cheating with a search engine. Fourth, the questionnaire selectively used borderline diseases, i.e., diseases that physicians might or might not know, rather than well-known or little-known ones.
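The abstract does not give the authors' exact scoring formula, so the sketch below is only illustrative: it applies a standard correction-for-guessing used in yes/no vocabulary tests (hit rate on real items adjusted by the false-alarm rate on pseudo-items), scaled to a hypothetical disease sampling frame. The function name, the correction formula, and the frame_size parameter are assumptions, not details taken from the paper.

def estimate_known_diseases(real_checked, real_total,
                            pseudo_checked, pseudo_total,
                            frame_size):
    """Illustrative estimate of how many diseases a respondent knows.

    real_checked / real_total     -- "I know this" answers on genuine (borderline) diseases
    pseudo_checked / pseudo_total -- "I know this" answers on fictitious pseudo-diseases
    frame_size                    -- size of the disease list the items were sampled from
                                     (hypothetical; not stated in the abstract)
    """
    hit_rate = real_checked / real_total          # proportion of real items claimed as known
    false_alarm = pseudo_checked / pseudo_total   # proportion of fake items claimed as known
    if false_alarm >= 1.0:
        return 0.0                                # every fake item claimed: no usable signal
    # Standard correction for guessing: h = k + (1 - k) * f  =>  k = (h - f) / (1 - f)
    known_fraction = max(0.0, (hit_rate - false_alarm) / (1.0 - false_alarm))
    return known_fraction * frame_size

# Toy numbers, invented purely for illustration.
print(round(estimate_known_diseases(60, 100, 2, 20, 3000)))  # -> 1667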

Results: We collected 102 valid responses from the 109 physicians who attended the seminars we conducted. On the basis of these responses, we estimated that the average physician knew of 2008 diseases (95% confidence interval: (1939, 2071)). This preliminary estimate is consistent with the guideline for the national license examination in Japan, suggesting that this vocabulary assessment was able to evaluate physicians' knowledge. The survey included physicians with various backgrounds, but there were no significant differences between subgroups. Other implications for research on clinical decision support, as well as limitations of the sampling method adopted in this study, are also discussed, with a view toward more rigorous estimation in future surveys.
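The abstract does not state how the 95% confidence interval was computed; a percentile bootstrap over per-physician estimates is one common choice and is sketched below purely as an illustration. The helper name bootstrap_mean_ci, the toy data, and all parameter values are hypothetical.

import random
import statistics

def bootstrap_mean_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of `values` (one common approach;
    not necessarily the method used in the paper)."""
    rng = random.Random(seed)
    n = len(values)
    boot_means = sorted(
        statistics.fmean(rng.choices(values, k=n)) for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(values), (lo, hi)

# Toy data standing in for 102 per-physician estimates (fabricated for illustration).
toy_rng = random.Random(1)
estimates = [toy_rng.gauss(2000, 330) for _ in range(102)]
mean, (lo, hi) = bootstrap_mean_ci(estimates)
print(f"mean ~= {mean:.0f}, 95% CI ~= ({lo:.0f}, {hi:.0f})")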
