
A Tale of Two Explanations: Enhancing Human Trust by Explaining Robot Behavior

Overview
Journal Sci Robot
Date 2020 Nov 2
PMID 33137717
Citations 9
Abstract

The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot's internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines.
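The abstract describes a system that scores multistep action sequences by jointly leveraging a stochastic grammar over task structure and a haptic model of sensory feedback. The toy sketch below is only an illustration of that general idea, not the paper's implementation: the action names, probabilities, and the simple prior-times-likelihood scoring are all invented for this example (the actual system uses an induced grammar and an improved Earley parser).

```python
# Illustrative sketch (not the paper's method): score candidate action
# sequences for opening a bottle by a grammar prior times per-action
# haptic likelihoods, then pick the best-scoring sequence.
# All names and numbers below are invented toy values.

# Toy stochastic "grammar": prior probability of each legal sequence.
GRAMMAR_PRIOR = {
    ("approach", "grasp", "twist", "pull"): 0.5,
    ("approach", "grasp", "push", "twist", "pull"): 0.4,
    ("approach", "grasp", "pull"): 0.1,
}

# Toy haptic model: how well sensed forces match each primitive action.
HAPTIC_LIKELIHOOD = {
    "approach": 0.9, "grasp": 0.8, "push": 0.7, "twist": 0.6, "pull": 0.5,
}

def sequence_score(seq):
    """Grammar prior times the product of per-action haptic likelihoods."""
    score = GRAMMAR_PRIOR.get(seq, 0.0)
    for action in seq:
        score *= HAPTIC_LIKELIHOOD[action]
    return score

def best_sequence():
    """Return the candidate sequence with the highest joint score."""
    return max(GRAMMAR_PRIOR, key=sequence_score)

if __name__ == "__main__":
    print(best_sequence())  # the four-step twist-pull sequence wins here
```

The point of the joint scoring is that a sequence favored by the grammar alone (high prior) can still lose to one whose actions better match the haptic evidence, which is why the paper parses grammar and haptic signals together rather than in isolation.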

Citing Articles

Glove-Net: Enhancing Grasp Classification with Multisensory Data and Deep Learning Approach.

Pratap S, Narayan J, Hatta Y, Ito K, Hazarika S. Sensors (Basel). 2024; 24(13).

PMID: 39001157; PMC: 11244365; DOI: 10.3390/s24134378.


Socially adaptive cognitive architecture for human-robot collaboration in industrial settings.

Freire I, Guerrero-Rosado O, Amil A, Verschure P. Front Robot AI. 2024; 11:1248646.

PMID: 38915371; PMC: 11194424; DOI: 10.3389/frobt.2024.1248646.


Opinion attribution improves motivation to exchange subjective opinions with humanoid robots.

Uchida T, Minato T, Ishiguro H. Front Robot AI. 2024; 11:1175879.

PMID: 38440774; PMC: 10909954; DOI: 10.3389/frobt.2024.1175879.


Leveraging explainability for understanding object descriptions in ambiguous 3D environments.

Dogan F, Melsion G, Leite I. Front Robot AI. 2023; 9:937772.

PMID: 36704241; PMC: 9872646; DOI: 10.3389/frobt.2022.937772.


Transparent Interaction Based Learning for Human-Robot Collaboration.

Bagheri E, de Winter J, Vanderborght B. Front Robot AI. 2022; 9:754955.

PMID: 35308459; PMC: 8930829; DOI: 10.3389/frobt.2022.754955.