
Multisensory Contributions to the 3-D Representation of Visuotactile Peripersonal Space in Humans: Evidence from the Crossmodal Congruency Task

Overview
Journal: J Physiol Paris
Specialty: Physiology
Date: 2004 Oct 13
PMID: 15477031
Citations: 64
Abstract

In order to determine precisely the location of a tactile stimulus presented to the hand, it is necessary to know not only which part of the body has been stimulated, but also where that part of the body lies in space. This involves the multisensory integration of visual, tactile, proprioceptive, and even auditory cues regarding limb position. In recent years, researchers have become increasingly interested in the question of how these various sensory cues are weighted and integrated in order to enable people to localize tactile stimuli, as well as to give rise to the 'felt' position of the limbs, and ultimately the multisensory representation of 3-D peripersonal space. We highlight recent research on this topic using the crossmodal congruency task, in which participants make speeded elevation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. Crossmodal congruency effects (calculated as the difference in performance between incongruent and congruent trials) are greatest when visual and vibrotactile stimuli are presented from the same azimuthal location, thus providing an index of common position across different sensory modalities. The crossmodal congruency task has been used to investigate a number of questions related to the representation of space in both normal participants and brain-damaged patients. In this review, we detail the major findings from this research, and highlight areas of convergence with other cognitive neuroscience disciplines.
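To make the congruency calculation concrete, here is a minimal Python sketch of how a crossmodal congruency effect (CCE) could be computed from trial-level data. The Trial structure, field names, and example values are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: computing a crossmodal congruency effect (CCE) as the
# difference in mean response latency between incongruent and congruent
# trials. All names and values here are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    congruent: bool   # visual distractor at the same elevation as the vibrotactile target?
    rt_ms: float      # response time for the elevation discrimination, in milliseconds
    correct: bool     # whether the elevation response was correct

def crossmodal_congruency_effect(trials: list[Trial]) -> float:
    """Return mean incongruent RT minus mean congruent RT, over correct trials only."""
    congruent_rts = [t.rt_ms for t in trials if t.congruent and t.correct]
    incongruent_rts = [t.rt_ms for t in trials if not t.congruent and t.correct]
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical example: larger positive values indicate stronger
# interference from the visual distractor on tactile localization.
trials = [
    Trial(congruent=True, rt_ms=520.0, correct=True),
    Trial(congruent=True, rt_ms=540.0, correct=True),
    Trial(congruent=False, rt_ms=600.0, correct=True),
    Trial(congruent=False, rt_ms=610.0, correct=True),
]
print(f"CCE: {crossmodal_congruency_effect(trials):.1f} ms")  # CCE: 75.0 ms
```

Restricting the calculation to correct trials reflects the common practice of computing latency-based congruency effects on correct responses; in the literature, congruency effects are often reported for error rates as well.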

Citing Articles

Optimality of multisensory integration while compensating for uncertain visual target information with artificial vibrotactile cues during reach planning.

Amann L, Casasnovas V, Hainke J, Gail A J Neuroeng Rehabil. 2024; 21(1):155.

PMID: 39252006 PMC: 11382450. DOI: 10.1186/s12984-024-01448-0.


Sense of Agency and Skills Learning in Virtual-Mediated Environment: A Systematic Review.

Cesari V, D'Aversa S, Piarulli A, Melfi F, Gemignani A, Menicucci D Brain Sci. 2024; 14(4).

PMID: 38672002 PMC: 11048251. DOI: 10.3390/brainsci14040350.


Action does not drive visual biases in peri-tool space.

McManus R, Thomas L Atten Percept Psychophys. 2023; 86(2):525-535.

PMID: 38127254 DOI: 10.3758/s13414-023-02826-x.


The left-right reversed visual feedback of the hand affects multisensory interaction within peripersonal space.

Mine D, Narumi T Atten Percept Psychophys. 2023; 86(1):285-294.

PMID: 37759149 PMC: 10769940. DOI: 10.3758/s13414-023-02788-0.


Beyond peripersonal boundaries: insights from crossmodal interactions.

Finotti G, Menicagli D, Migliorati D, Costantini M, Ferri F Cogn Process. 2023; 25(1):121-132.

PMID: 37656270 PMC: 10827818. DOI: 10.1007/s10339-023-01154-0.