
Automation Bias: Empirical Results Assessing Influencing Factors

Overview
Date 2014 Mar 4
PMID 24581700
Citations 30
Abstract

Objective: To investigate the rate of automation bias (the propensity of people to over-rely on automated advice) and the factors associated with it. The factors tested were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision support advice within a prescribing context was used.

Design: The study employed a within-participant before-after design, in which 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with pre-validated correct and incorrect answers; the advice was incorrect in 6 scenarios. Participants were asked to prescribe for each case and were then shown simulated advice. They were then asked whether they wished to change their prescription, and the post-advice prescription was recorded.

Measurements: The overall rate of decision switching was captured. Automation bias was measured by negative consultations, that is, switching from a correct to an incorrect prescription.

Results: Participants changed prescriptions in 22.5% of scenarios. The clinicians' pre-advice accuracy rate was 50.38%, which improved to 58.27% post-advice. The CDSS improved decision accuracy in 13.1% of prescribing cases. The rate of automation bias, measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases, giving a net improvement of approximately 8%. More immediate factors, such as trust in the specific CDSS, decision confidence, and task difficulty, influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, decision support system experience, and general trust in CDSS were not significantly associated with decision switching.
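The net-benefit figure quoted in the results can be reproduced from the reported switching rates; the sketch below uses the abstract's numbers (variable names are ours, added only for illustration):

```python
# Net effect of the simulated advice, using the rates reported in the abstract.
beneficial_switches = 0.131  # incorrect -> correct after advice (13.1% of cases)
harmful_switches = 0.052     # correct -> incorrect after advice (automation bias, 5.2%)

net_improvement = beneficial_switches - harmful_switches
print(f"Net improvement: {net_improvement:.1%}")  # 7.9%, reported as ~8%

# Consistency check against the pre-/post-advice accuracy rates:
pre_accuracy = 0.5038   # 50.38% correct before advice
post_accuracy = 0.5827  # 58.27% correct after advice
print(f"Accuracy gain: {post_accuracy - pre_accuracy:.1%}")  # also ~7.9%
```

Both routes (switch rates and accuracy rates) give the same roughly 8% net gain, which is why the abstract can report it either way.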

Conclusions: This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors.

Citing Articles

Adaptive questionnaires for facilitating patient data entry in clinical decision support systems: methods and application to STOPP/START v2.

Jean-Baptiste L, Abdelmalek M, Romain L, Romain L, Stefan D, Karima S BMC Med Inform Decis Mak. 2024; 24(1):326.

PMID: 39501252 PMC: 11539734. DOI: 10.1186/s12911-024-02742-6.


Assessing clinical medicine students' acceptance of large language model: based on technology acceptance model.

Liu F, Chang X, Zhu Q, Huang Y, Li Y, Wang H BMC Med Educ. 2024; 24(1):1251.

PMID: 39490999 PMC: 11533422. DOI: 10.1186/s12909-024-06232-1.


Artificial intelligence and radiographer preliminary image evaluation: What might the future hold for radiographers providing x-ray interpretation in the acute setting?

Rainey C J Med Radiat Sci. 2024; 71(4):495-498.

PMID: 39304330 PMC: 11638352. DOI: 10.1002/jmrs.821.


Effects of machine learning errors on human decision-making: manipulations of model accuracy, error types, and error importance.

Matzen L, Gastelum Z, Howell B, Divis K, Stites M Cogn Res Princ Implic. 2024; 9(1):56.

PMID: 39183209 PMC: 11345344. DOI: 10.1186/s41235-024-00586-2.


Judgments of Difficulty (JODs) While Observing an Automated System Support the Media Equation and Unique Agent Hypotheses.

Driggs J, Vangsness L Hum Factors. 2024; 67(4):347-366.

PMID: 39155398 PMC: 11874496. DOI: 10.1177/00187208241273379.