Enhanced Visual Speech Perception in Individuals with Early-onset Hearing Impairment

Overview
Date: 2007 Oct 2
PMID: 17905902
Citations: 49
Abstract

Purpose: L. E. Bernstein, M. E. Demorest, and P. E. Tucker (2000) demonstrated enhanced speechreading accuracy in participants with early-onset hearing loss compared with hearing participants. Here, the authors tested the generalizability of Bernstein et al.'s (2000) result in 2 new large samples of participants. The authors also investigated correlates of speechreading ability within the early-onset hearing loss group and gender differences in speechreading ability within both participant groups.

Method: One hundred twelve individuals with early-onset hearing loss and 220 individuals with normal hearing identified 30 prerecorded sentences presented 1 at a time from visible speech information alone.

Results: The speechreading accuracy of the participants with early-onset hearing loss (M=43.55% words correct; SD=17.48) significantly exceeded that of the participants with normal hearing (M=18.57% words correct; SD=13.18), t(330)=14.576, p<.01. Within the early-onset hearing loss group, speechreading ability was correlated with several subjective measures of spoken communication. Effects of gender were not reliably observed.
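
The reported test statistic can be reproduced from the summary values above. The reported degrees of freedom (330 = 112 + 220 - 2) are consistent with a pooled-variance independent-samples t test, so the Python sketch below (an illustrative reconstruction, not the authors' analysis code; SciPy is assumed to be available and is used only for the p-value) recomputes t from the group means, standard deviations, and sample sizes.

    import math
    from scipy import stats  # assumption: SciPy available; used only for the p-value

    # Summary statistics reported in the abstract
    n1, m1, sd1 = 112, 43.55, 17.48   # early-onset hearing loss group
    n2, m2, sd2 = 220, 18.57, 13.18   # normal-hearing group

    # Pooled-variance independent-samples t test from summary statistics
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (m1 - m2) / se
    p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value

    print(f"t({df}) = {t:.3f}, p = {p:.2g}")  # ~t(330) = 14.57, far below p = .01

The result closely matches the reported t(330) = 14.576 (small differences reflect rounding of the reported means and standard deviations) and confirms that the group difference of roughly 25 percentage points is highly significant.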

Conclusion: The present results are consistent with the results of Bernstein et al. (2000). The need to rely on visual speech throughout life, and particularly for the acquisition of spoken language by individuals with early-onset hearing loss, can lead to enhanced speechreading ability.

Citing Articles

Visual and Acoustic Aspects of Face Masks Affect Speech Intelligibility in Listeners with Different Hearing Statuses.

Rohner P, Sonnichsen R, Hochmuth S, Radeloff A. Audiol Res. 2025; 15(1).

PMID: 39997151. PMC: 11851603. DOI: 10.3390/audiolres15010007.


The impact of visual information in speech perception for individuals with hearing loss: a mini review.

Choi A, Kim H, Jo M, Kim S, Joung H, Choi I. Front Psychol. 2024; 15:1399084.

PMID: 39380752. PMC: 11458425. DOI: 10.3389/fpsyg.2024.1399084.


Synthetic faces generated with the facial action coding system or deep neural networks improve speech-in-noise perception, but not as much as real faces.

Yu Y, Lado A, Zhang Y, Magnotti J, Beauchamp M. Front Neurosci. 2024; 18:1379988.

PMID: 38784097. PMC: 11111898. DOI: 10.3389/fnins.2024.1379988.


Children's use of spatial and visual cues for release from perceptual masking.

Lalonde K, Peng Z, Halverson D, Dwyer G. J Acoust Soc Am. 2024; 155(2):1559-1569.

PMID: 38393738. PMC: 10890829. DOI: 10.1121/10.0024766.


Neurotopographical Transformations: Dissecting Cortical Reconfigurations in Auditory Deprivation.

Kumar U, Dhanik K, Pandey H, Mishra M, Keshri A. J Neurosci. 2024; 44(13).

PMID: 38383498. PMC: 10977024. DOI: 10.1523/JNEUROSCI.1649-23.2024.