Clinical Performance Comparators in Audit and Feedback: A Review of Theory and Evidence
Overview
Background: Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator.
Methods: We described current choices of performance comparators by conducting a secondary review of randomised trials of A&F interventions. We then identified the mechanisms associated with each comparator that might have implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis.
Results: We found across 146 trials that feedback recipients' performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients' own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. Among trials featuring benchmarks, 42% compared recipients' performance against the mean. Eight (5.5%) trials provided a rationale for using a specific comparator. We distilled mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies.
Conclusion: Clinical performance comparators in the published literature were poorly informed by theory and did not explicitly account for mechanisms reported in qualitative studies. Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing more comparative information on request to balance the feedback's credibility and actionability, (3) providing performance trends, but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information.