
Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning

Overview
Specialty: Psychology
Date: 2021 Jun 24
PMID: 34164631
Citations: 5
Abstract

Understanding how goals control behavior is a question ripe for interrogation by new methods from machine learning. These methods require large, labeled datasets to train models. To annotate a large-scale image dataset with observed search fixations, we collected 16,184 fixations from people searching for either microwaves or clocks in a dataset of 4,366 images (MS-COCO). We then used this behaviorally-annotated dataset and the machine learning method of inverse-reinforcement learning (IRL) to learn target-specific reward functions and policies for these two target goals. Finally, we used these learned policies to predict the fixations of 60 new behavioral searchers (clock = 30, microwave = 30) in a disjoint test dataset of kitchen scenes depicting both a microwave and a clock (thus controlling for differences in low-level image contrast). We found that the IRL model predicted behavioral search efficiency and fixation-density maps, as assessed by multiple metrics. Moreover, reward maps from the IRL model revealed target-specific patterns suggesting that attention is guided not just by target features but also by scene context (e.g., fixations along walls when searching for clocks). Using machine learning and the psychologically meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.
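
To make the evaluation step concrete, the sketch below shows one common way to compare model-predicted fixations against human fixations: build a Gaussian-smoothed fixation-density map from the model's fixations and score it with saliency metrics such as NSS and AUC (the metric family analyzed by Bylinskii et al., reference 1 below). This is a minimal illustration in Python using numpy and scipy, not the authors' code; the abstract names "multiple metrics" without specifying them, and all coordinates and parameters here are hypothetical.

```python
# Minimal sketch (not the authors' code): scoring model-predicted fixations
# against human fixations via a fixation-density map and two common saliency
# metrics, NSS and AUC (the metric family surveyed in reference 1 below).
# All coordinates and parameters are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(fixations, shape, sigma=20.0):
    """Gaussian-smoothed fixation-density map from (row, col) fixations."""
    m = np.zeros(shape)
    for r, c in fixations:
        m[int(r), int(c)] += 1.0
    m = gaussian_filter(m, sigma)
    return m / (m.max() + 1e-9)

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-9)
    return float(np.mean([z[int(r), int(c)] for r, c in fixations]))

def auc(saliency, fixations):
    """Simple AUC variant: for each fixated pixel, the fraction of all pixels
    with strictly lower saliency, averaged over fixations."""
    pos = np.array([saliency[int(r), int(c)] for r, c in fixations])
    all_vals = np.sort(saliency.ravel())
    ranks = np.searchsorted(all_vals, pos, side="left")
    return float(ranks.mean() / all_vals.size)

if __name__ == "__main__":
    shape = (240, 320)                                # image height, width
    model_fix = [(60, 250), (65, 240), (118, 30)]     # hypothetical model fixations
    human_fix = [(62, 248), (120, 28), (70, 235)]     # hypothetical human fixations
    sal = density_map(model_fix, shape)
    print(f"NSS = {nss(sal, human_fix):.2f}, AUC = {auc(sal, human_fix):.2f}")
```

In the study's terms, the learned IRL policy would supply the model fixations on held-out kitchen scenes, and the resulting density maps would be compared against the fixations of the 60 behavioral searchers.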

Citing Articles

Searching for meaning: Local scene semantics guide attention during natural visual search in scenes.

Peacock C, Singh P, Hayes T, Rehrig G, Henderson J. Q J Exp Psychol (Hove). 2022; 76(3):632-648.

PMID: 35510885 PMC: 11132926. DOI: 10.1177/17470218221101334.


Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty.

Chakraborty S, Samaras D, Zelinsky G. J Vis. 2022; 22(4):13.

PMID: 35323870 PMC: 8963662. DOI: 10.1167/jov.22.4.13.


Meaning and expected surfaces combine to guide attention during visual search in scenes.

Peacock C, Cronin D, Hayes T, Henderson J. J Vis. 2021; 21(11):1.

PMID: 34609475 PMC: 8496418. DOI: 10.1167/jov.21.11.1.


Domain Adaptation for Imitation Learning Using Generative Adversarial Network.

Nguyen Duc T, Tran C, Tan P, Kamioka E. Sensors (Basel). 2021; 21(14).

PMID: 34300456 PMC: 8309483. DOI: 10.3390/s21144718.


Attention in Psychology, Neuroscience, and Machine Learning.

Lindsay G. Front Comput Neurosci. 2020; 14:29.

PMID: 32372937 PMC: 7177153. DOI: 10.3389/fncom.2020.00029.

References
1. Bylinskii Z, Judd T, Oliva A, Torralba A, Durand F. What Do Different Evaluation Metrics Tell Us About Saliency Models? IEEE Trans Pattern Anal Mach Intell. 2018; 41(3):740-757. DOI: 10.1109/TPAMI.2018.2815601.

2. Schmidt J, Zelinsky G. Search guidance is proportional to the categorical specificity of a target cue. Q J Exp Psychol (Hove). 2009; 62(10):1904-14. DOI: 10.1080/17470210902853530.

3. Yang Z, Huang L, Chen Y, Wei Z, Ahn S, Zelinsky G. Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2021; 2020:190-199. PMC: 8218821. DOI: 10.1109/cvpr42600.2020.00027.

4. Anderson B. A value-driven mechanism of attentional selection. J Vis. 2013; 13(3). PMC: 3630531. DOI: 10.1167/13.3.7.

5. Ehinger K, Hidalgo-Sotelo B, Torralba A, Oliva A. Modeling Search for People in 900 Scenes: A combined source model of eye guidance. Vis cogn. 2009; 17(6-7):945-978. PMC: 2790194. DOI: 10.1080/13506280902834720.