
Difficulty in Detecting Discrepancies in a Clinical Trial Report: 260-reader Evaluation

Overview
Journal Int J Epidemiol
Specialty Public Health
Date 2015 Jul 16
PMID 26174517
Citations 2
Abstract

Background: Scientific literature can contain errors. Discrepancies, defined as two or more statements or results that cannot both be true, may signal problems with a trial report. In this study, we report how many discrepancies were detected by a large panel of readers examining a trial report that contained a large number of discrepancies.

Methods: We approached a convenience sample of 343 journal readers in seven countries and invited them in person to participate in a study. Of these, 260 agreed, ranging from medical students to professors. Each was asked to examine the tables and figures of one published article for discrepancies. The discrepancies they identified were tabulated and counted; 39 distinct discrepancies were identified across all participants. We evaluated the probability of discrepancy identification, and whether more time spent on the task or greater participant experience as an academic author improved the ability to detect discrepancies.

Results: Overall, 95.3% of discrepancies were missed. Most participants (62%) were unable to find any discrepancies. Only 11.5% noticed more than 10% of the discrepancies. More discrepancies were noted by participants who spent more time on the task (Spearman's ρ = 0.22, P < 0.01), and those with more experience of publishing papers (Spearman's ρ = 0.13 with number of publications, P = 0.04).

Conclusions: Noticing discrepancies is difficult. Most readers miss most discrepancies even when asked specifically to look for them. The probability of a discrepancy evading an individual sensitized reader is 95%, making it important that, when problems are identified after publication, readers are able to communicate with each other. When made aware of discrepancies, the majority of readers support editorial action to correct the scientific record.
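The 95% figure compounds across readers: if each reader independently misses a given discrepancy with probability 0.95, the chance it evades every one of n readers shrinks geometrically, which is why pooling observations from many readers matters. A minimal sketch of that arithmetic, assuming independence between readers (an illustrative assumption, not a claim made by the paper):

```python
def p_evades_all(n_readers, p_miss=0.95):
    """Probability that a discrepancy goes unnoticed by every one of
    n readers, assuming each misses it independently with p_miss."""
    return p_miss ** n_readers

# With the paper's per-reader miss rate of 0.95:
for n in (1, 10, 50, 100):
    print(f"{n:3d} readers: P(evades all) = {p_evades_all(n):.4f}")
```

Under this independence assumption, a discrepancy that evades any single reader 95% of the time evades all of 100 readers less than 1% of the time, which motivates the paper's call for readers to be able to communicate with each other once problems are spotted.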

Citing Articles

Challenges in the design, conduct, analysis, and reporting in randomized clinical trial studies: A systematic review.

Varse F, Janani L, Moradi Y, Solaymani-Dodaran M, Baradaran H, Rimaz S. Med J Islam Repub Iran. 2019; 33:37.

PMID: 31456961; PMC: 6708114; DOI: 10.34171/mjiri.33.37.


The Possibility of Systematic Research Fraud Targeting Under-Studied Human Genes: Causes, Consequences, and Potential Solutions.

Byrne J, Grima N, Capes-Davis A, Labbe C. Biomark Insights. 2019; 14:1177271919829162.

PMID: 30783377; PMC: 6366001; DOI: 10.1177/1177271919829162.
