
Perceiving Depth Beyond Sight: Evaluating Intrinsic and Learned Cues Via a Proof of Concept Sensory Substitution Method in the Visually Impaired and Sighted

Overview
Journal PLoS One
Date 2024 Sep 25
PMID 39321152
Abstract

This study explores spatial perception of depth using a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds, such as language and cross-modal correspondences, by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties of each axis. While verticality was represented via a previously tested correspondence with pitch, depth was represented by an ecologically inspired manipulation based on the loss of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates how intrinsic the ecologically inspired mapping of auditory cues for depth is by comparing it with an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm after a very brief period of training, with the blind and visually impaired participants showing levels of success similar to those of their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants achieved similar success rates after training in both conditions. The findings indicate that both intrinsic and learned cues play a role in depth perception. Moreover, they suggest that, through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which holds that, with training, their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
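The two axis mappings described in the abstract, elevation represented by pitch and depth represented by ecologically motivated gain loss and high-frequency attenuation, can be illustrated with a minimal sketch. The function name, value ranges, and constants below are assumptions chosen for the example, not details from the study's actual implementation.

```python
import math

def depth_to_audio_params(depth_m, elevation_deg):
    """Illustrative sonification mapping (not the authors' code).

    elevation -> pitch (cross-modal correspondence)
    depth     -> gain loss + low-pass cutoff (ecological cues)
    """
    # Elevation mapped to pitch on a log scale (perceptually uniform),
    # assumed range -30..+30 degrees mapped to 200..1600 Hz.
    t = (elevation_deg + 30.0) / 60.0
    t = min(max(t, 0.0), 1.0)
    pitch_hz = 200.0 * (1600.0 / 200.0) ** t

    # Depth mapped ecologically: gain falls off with distance (inverse law),
    # and the low-pass cutoff drops, mimicking air absorption of high
    # frequencies over distance.
    gain = 1.0 / max(depth_m, 1.0)
    cutoff_hz = 8000.0 * math.exp(-0.2 * depth_m)

    return pitch_hz, gain, cutoff_hz
```

In the interchanged condition described in the study, the two mappings would be swapped, so that pitch encodes depth and gain/filtering encode elevation.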
