How Good Are Clinical MEDLINE Searches? A Comparative Study of Clinical End-user and Librarian Searches

Overview
Date: 1990 Dec 1
PMID: 2276266
Citations: 42
Abstract

The objective of this study was to determine the quality of MEDLINE searches done by physicians, physician trainees, and expert searchers (clinicians and librarians). The design was an analytic survey with independent replication, set in self-service online searching from medical wards, an intensive care unit, a coronary care unit, an emergency room, and an ambulatory clinic in a 300-bed teaching hospital. Participants were all M.D. clinical clerks, house staff, and attending staff responsible for patients in these settings. Each participant received a 2-h small-group class and a 1-h practice session on MEDLINE searching (GRATEFUL MED) before being given free access to MEDLINE. The search questions from 104 randomly selected novice searches were each given to 1 of 13 clinicians with prior search experience and to 1 of 3 librarians, who ran independent searches on the same question ("triplicated searches"). The unique citations retrieved by each set of triplicated searches were sent to expert clinicians, who rated their relevance on a 7-point scale. Recall (the number of relevant citations retrieved by an individual search divided by the total number of relevant citations retrieved by all searches on the same topic) and precision (the proportion of citations retrieved in each search that were relevant) were then calculated. Librarians performed significantly better than novices on both measures, and had recall equivalent to, and precision better than, experienced end-users. Unexpectedly, only 20% of relevant citations were retrieved by more than one search in a set of three. The study concludes that novice MEDLINE searchers using GRATEFUL MED after brief training achieve relatively low recall and precision; recall improves with experience, but precision remains suboptimal. Further research is needed to characterize the "learning curve," evaluate training interventions, and explore the non-overlapping retrieval of relevant citations by different searchers.
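
The recall and precision measures defined above reduce to simple set arithmetic over pooled relevance judgments. The Python sketch below is not from the paper; the citation IDs, the three searcher sets, and the relevance cutoff are all hypothetical placeholders. It shows how pooled recall and per-search precision would be computed for one triplicated search set.

# Minimal sketch (not from the paper): pooled recall and per-search precision
# as the abstract defines them. All citation IDs and judgments are hypothetical.

def recall_and_precision(retrieved, relevant_pool):
    """Recall: relevant citations this search retrieved, divided by all relevant
    citations retrieved by any search on the same topic (the pool).
    Precision: fraction of this search's retrieved citations that are relevant."""
    retrieved = set(retrieved)
    relevant_retrieved = retrieved & relevant_pool
    recall = len(relevant_retrieved) / len(relevant_pool) if relevant_pool else 0.0
    precision = len(relevant_retrieved) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical triplicated searches on one topic: a novice, an experienced
# clinician, and a librarian each retrieve a set of citation IDs.
searches = {
    "novice": {"c1", "c2", "c5", "c9"},
    "clinician": {"c1", "c3", "c4", "c7"},
    "librarian": {"c1", "c3", "c6"},
}

# Citations the expert raters judged relevant (e.g., high on the 7-point scale).
judged_relevant = {"c1", "c3", "c4", "c6", "c9"}

# The pool is every relevant citation that *any* of the three searches found,
# so recall is always measured relative to the combined yield.
pool = judged_relevant & set().union(*searches.values())

for name, hits in searches.items():
    r, p = recall_and_precision(hits, pool)
    print(f"{name}: recall={r:.2f} precision={p:.2f}")

Because recall is measured against the pooled yield of all three searches, the finding that only 20% of relevant citations were retrieved by more than one searcher means each search contributed a largely unique slice of the pool.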

Citing Articles

The McMaster Health Information Research Unit: Over a Quarter-Century of Health Informatics Supporting Evidence-Based Medicine.

Lokker C, McKibbon K, Afzal M, Navarro T, Linkins L, Haynes R. J Med Internet Res. 2024;26:e58764.

PMID: 39083765; PMC: 11325105; DOI: 10.2196/58764.


Optimal literature search for systematic reviews in surgery.

Goossen K, Tenckhoff S, Probst P, Grummich K, Mihaljevic A, Buchler M. Langenbecks Arch Surg. 2017;403(1):119-129.

PMID: 29209758; DOI: 10.1007/s00423-017-1646-x.


Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study.

Bramer W, Rethlefsen M, Kleijnen J, Franco O. Syst Rev. 2017;6(1):245.

PMID: 29208034; PMC: 5718002; DOI: 10.1186/s13643-017-0644-y.


Development and empirical user-centered evaluation of semantically-based query recommendation for an electronic health record search engine.

Hanauer D, Wu D, Yang L, Mei Q, Murkowski-Steffy K, Vydiswaran V. J Biomed Inform. 2017;67:1-10.

PMID: 28131722; PMC: 5378386; DOI: 10.1016/j.jbi.2017.01.013.


Analysis of PubMed User Sessions Using a Full-Day PubMed Query Log: A Comparison of Experienced and Nonexperienced PubMed Users.

Yoo I, Mosa A. JMIR Med Inform. 2015;3(3):e25.

PMID: 26139516; PMC: 4526974; DOI: 10.2196/medinform.3740.