
Meaningful Communication but Not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM)

Overview
Journal Hum Factors
Specialty Psychology
Date 2023 Dec 2
PMID 38041565
Abstract

Objective: The objective was to demonstrate that anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation.

Background: Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information that facilitates prediction of automation failures.

Method: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system that was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice (monotone vs. meaningless inflections vs. meaningful inflections), with the meaningful inflections communicating contextually useful information about the certainty and uncertainty of the automated advice.
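Because SAM is only 50% reliable, accepting its advice indiscriminately is no better than chance; calibrated trust requires predicting when the advice is likely correct. The following minimal Python sketch is purely illustrative, not the authors' analysis: it shows how selective acceptance driven by certainty cues (the HATEM prediction for meaningful inflections) yields better-calibrated decisions than cue-free acceptance. All function names and probability values are hypothetical assumptions.

import random

def calibration_score(n_trials=80, p_accept_if_cued_certain=0.9, p_accept_otherwise=0.4):
    # Simulate one participant advised by a 50%-reliable aid (like SAM).
    # Assumption: on correct-advice trials the voice cues certainty, so the
    # participant accepts more often; a monotone voice gives no such cue.
    calibrated = 0
    for _ in range(n_trials):
        advice_correct = random.random() < 0.5  # the aid is 50% reliable
        p_accept = p_accept_if_cued_certain if advice_correct else p_accept_otherwise
        accepted = random.random() < p_accept
        calibrated += (accepted == advice_correct)  # accept good advice, reject bad
    return calibrated / n_trials  # ~0.5 is chance; 1.0 is perfect calibration

random.seed(1)
print(f"with certainty cues: {calibration_score():.2f}")
print(f"without cues: {calibration_score(p_accept_if_cued_certain=0.65, p_accept_otherwise=0.65):.2f}")

In expectation the cued participant is calibrated on 0.5 × 0.9 + 0.5 × 0.6 = 75% of trials, whereas without a usable cue the rate collapses to 50%, matching the intuition that appearance alone cannot help unless it carries predictive information.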

Results: The avatar SAM appearance was rated as more anthropomorphic than the camera eye, and meaningless and meaningful inflections were both rated more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections.

Conclusion: Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance.

Application: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.

Citing Articles

Transparency improves the accuracy of automation use, but automation confidence information does not.

Tatasciore M, Strickland L, Loft S. Cogn Res Princ Implic. 2024; 9(1):67.

PMID: 39379606. PMC: 11461414. DOI: 10.1186/s41235-024-00599-x.


An Exploratory Study of the Potential of Online Counseling for University Students by a Human-Operated Avatar Counselor.

Kiuchi K, Umehara H, Irizawa K, Kang X, Nakataki M, Yoshida M. Healthcare (Basel). 2024; 12(13).

PMID: 38998822. PMC: 11241672. DOI: 10.3390/healthcare12131287.


How do humans learn about the reliability of automation?

Strickland L, Farrell S, Wilson M, Hutchinson J, Loft S. Cogn Res Princ Implic. 2024; 9(1):8.

PMID: 38361149. PMC: 10869332. DOI: 10.1186/s41235-024-00533-1.
