Oculomotor Strategies for the Direction of Gaze Tested with a Real-world Activity
Overview
Laboratory-based models of oculomotor strategy that differ in the amount and type of top-down information were evaluated against a baseline case of random scanning for predicting the gaze patterns of subjects performing a real-world activity: walking to a target. Images of four subjects' eyes and fields of view were recorded simultaneously as they performed the mobility task. Offline analyses generated eye-on-scene movies, and a categorization scheme was used to classify the locations of the fixations. Frames from each subject's eye-on-scene movie served as input to the models, and the location of each model's predicted fixations was classified with the same categorization scheme. The results showed that models with no top-down information (a visual salience model) or with only coarse feature information performed no better than a random scanner; the models' ordered fixation locations (the gaze pattern) matched less than a quarter of the subjects' gaze patterns. A model that used only geographic information outperformed the random scanner and matched approximately a third of the gaze patterns. The best performance came from an oculomotor strategy that used both coarse feature and geographic information, matching nearly half of the gaze patterns (48%). Thus, a model that uses top-down information about a target's coarse features and general vicinity predicts fixation behavior fairly well, but it does not fully specify the gaze pattern of a subject walking to a target. Additional information is required, perhaps in the form of finer feature information or knowledge of a task's procedure.
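The abstract does not specify how a model's ordered fixation locations were scored against a subject's gaze pattern. As a minimal sketch only, assuming a simple position-by-position comparison of fixation-location categories (the category labels and function names below are hypothetical, not taken from the paper):

# Illustrative sketch: score agreement between a model's predicted fixation
# categories and a subject's observed gaze pattern, in sequence order.
from typing import List

def gaze_pattern_agreement(predicted: List[str], observed: List[str]) -> float:
    """Return the fraction of the subject's fixations whose location category
    is matched by the model's prediction at the same position in the sequence."""
    if not observed:
        return 0.0
    n = min(len(predicted), len(observed))
    matches = sum(1 for p, o in zip(predicted[:n], observed[:n]) if p == o)
    return matches / len(observed)

# Hypothetical fixation-location categories for a walking-to-target task.
model_pattern   = ["path", "target", "obstacle", "path", "target"]
subject_pattern = ["path", "target", "path", "path", "target"]
print(f"Agreement: {gaze_pattern_agreement(model_pattern, subject_pattern):.0%}")

Other sequence-comparison measures (e.g., edit distance or alignment-based scores) could equally serve here; the choice above is purely for illustration.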