
Complacency and Bias in Human Use of Automation: An Attentional Integration

Overview
Journal: Hum Factors
Specialty: Psychology
Date: 2010 Nov 17
PMID: 21077562
Citations: 83
Authors: Parasuraman R, Manzey DH
Abstract

Objective: Our aim was to review empirical studies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation.

Background: Automation-related complacency and automation bias have typically been considered separately and independently.

Methods: Studies on complacency and automation bias were analyzed with respect to the cognitive processes involved.

Results: Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator's attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams. Although automation bias has been conceived of as a special case of decision bias, our analysis suggests that it also depends on attentional processes similar to those involved in automation-related complacency.

Conclusion: Complacency and automation bias represent different manifestations of overlapping automation-induced phenomena, with attention playing a central role. An integrated model of complacency and automation bias shows that they result from the dynamic interaction of personal, situational, and automation-related characteristics.

Application: The integrated model and attentional synthesis provide a heuristic framework for further research on complacency and automation bias, as well as design options for mitigating such effects in automated and decision support systems.

Citing Articles

Transparent systems, opaque results: a study on automation compliance and task performance.

Pharmer R, Wickens C, Clegg B. Cogn Res Princ Implic. 2025; 10(1):8.

PMID: 39982562. PMC: 11845646. DOI: 10.1186/s41235-025-00619-4.


[Ethical aspects of the development, authorization and implementation of applications in ophthalmology based on artificial intelligence: Statement of the German Ophthalmological Society (DOG) and the Professional Association of German...].

Ophthalmologie. 2025.

PMID: 39964395. DOI: 10.1007/s00347-025-02189-8.


Assessing Risk in Implementing New Artificial Intelligence Triage Tools-How Much Risk is Reasonable in an Already Risky World?

Nord-Bronzyk A, Savulescu J, Ballantyne A, Braunack-Mayer A, Krishnaswamy P, Lysaght T. Asian Bioeth Rev. 2025; 17(1):187-205.

PMID: 39896084. PMC: 11785855. DOI: 10.1007/s41649-024-00348-8.


Eye-Tracking Characteristics: Unveiling Trust Calibration States in Automated Supervisory Control Tasks.

Wang K, Hou W, Ma H, Hong L. Sensors (Basel). 2025; 24(24):7946.

PMID: 39771683. PMC: 11679395. DOI: 10.3390/s24247946.


Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.

Laux J. AI Soc. 2024; 39(6):2853-2866.

PMID: 39640298. PMC: 11614927. DOI: 10.1007/s00146-023-01777-z.