Guided Search 2.0: A Revised Model of Visual Search
Overview
An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.
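The guidance mechanism summarized above can be made concrete with a small computational sketch: each item in the display receives an activation that sums bottom-up feature contrast with top-down signals weighted toward the target's known features, plus noise, and the limited-capacity stage visits items in order of decreasing activation until the target is found or the remaining activations fall below a quitting threshold. The Python sketch below illustrates only that flow; the data layout, noise level, threshold value, and feature weights are illustrative assumptions, not the parameters of the published GS2 simulation.

```python
import random

# Illustrative sketch of the GS2 guidance idea. Names, data layout, and
# parameter values are assumptions for this example, not those of the
# published simulation.

ACTIVATION_THRESHOLD = 0.2   # assumed quitting threshold
NOISE_SD = 0.1               # assumed Gaussian noise on the activation map

def activation(item, target_features, weights):
    """Activation = bottom-up feature contrast + weighted top-down match
    to the sought features + Gaussian noise."""
    bottom_up = sum(item["contrast"].values())
    top_down = sum(weights[f] * item["features"].get(f, 0.0)
                   for f in target_features)
    return bottom_up + top_down + random.gauss(0.0, NOISE_SD)

def guided_search(items, target_features, weights):
    """Deploy the limited-capacity stage to items in order of decreasing
    activation; stop when the target is found or activation drops below
    the quitting threshold."""
    scored = sorted(((activation(it, target_features, weights), it)
                     for it in items),
                    key=lambda pair: pair[0], reverse=True)
    for deployments, (act, item) in enumerate(scored, start=1):
        if act < ACTIVATION_THRESHOLD:
            return "quit", deployments      # remaining items too weakly activated
        if item["is_target"]:
            return "found", deployments     # limited-capacity check succeeds
    return "quit", len(scored)

# Example: conjunction search for a red vertical bar among red-horizontal
# and green-vertical distractors.
def make_item(red, vertical, is_target=False):
    return {"features": {"red": red, "vertical": vertical},
            "contrast": {"red": 0.1, "vertical": 0.1},  # little local contrast
            "is_target": is_target}

display = ([make_item(1.0, 0.0) for _ in range(10)] +    # red horizontal
           [make_item(0.0, 1.0) for _ in range(10)] +    # green vertical
           [make_item(1.0, 1.0, is_target=True)])        # red vertical target
weights = {"red": 0.5, "vertical": 0.5}
print(guided_search(display, ["red", "vertical"], weights))
```

Because the target receives top-down credit for both sought features while each distractor matches only one, the target tends to be ranked early in the deployment order; this is the sense in which conjunction search is guided rather than purely serial in GS2.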
Cited by

Evaluating the contribution of parallel processing of color and shape in a conjunction search task.
Cui A, Buetti S, Xu Z, Lleras A Sci Rep. 2025; 15(1):7760.
PMID: 40044949 PMC: 11882880. DOI: 10.1038/s41598-025-92453-3.
Perceptual, Not Attentional, Guidance Drives Happy Superiority in Complex Visual Search.
Stuit S, Pardo Sanchez M, Terburg D Behav Sci (Basel). 2025; 15(2).
PMID: 40001755 PMC: 11851973. DOI: 10.3390/bs15020124.
Pre-AttentiveGaze: gaze-based authentication dataset with momentary visual interactions.
Jeon J, Noh Y, Kim J, Hong J Sci Data. 2025; 12(1):263.
PMID: 39948380 PMC: 11825865. DOI: 10.1038/s41597-025-04538-3.
Prediction of intrinsic and extraneous cognitive load with oculometric and biometric indicators.
Ekin M, Krejtz K, Duarte C, Duchowski A, Krejtz I Sci Rep. 2025; 15(1):5213.
PMID: 39939345 PMC: 11822071. DOI: 10.1038/s41598-025-89336-y.
Mengers V, Roth N, Brock O, Obermayer K, Rolfs M J Vis. 2025; 25(2):6.
PMID: 39928323 PMC: 11812614. DOI: 10.1167/jov.25.2.6.