High-Frequency Sensorineural Hearing Loss Alters Cue-Weighting Strategies for Discriminating Stop Consonants in Noise
Overview
There is increasing evidence that hearing-impaired (HI) individuals do not use the same listening strategies as normal-hearing (NH) individuals, even when wearing optimally fitted hearing aids. From this perspective, better characterization of individual perceptual strategies is an important step toward designing more effective speech-processing algorithms. Here, we describe two complementary approaches for (a) revealing the acoustic cues used by a participant in a /d/-/g/ categorization task in noise and (b) measuring the relative contributions of these cues to the decision. Both approaches involve natural speech recordings altered by the addition of a “bump noise.” The bumps were narrowband bursts of noise centered on the spectrotemporal locations of the acoustic cues, allowing the experimenter to manipulate the consonant percept. Cue-weighting strategies were estimated for three groups of participants: 17 NH listeners, 18 HI listeners with high-frequency loss, and 15 HI listeners with flat loss. HI participants were provided with individual frequency-dependent amplification to compensate for their hearing loss. Although all listeners relied more heavily on the high-frequency cue than on the low-frequency cue, considerable variability was observed in the individual weights, mostly explained by differences in internal noise. Individuals with high-frequency loss relied slightly less heavily on the high-frequency cue, relative to the low-frequency cue, than NH individuals did, suggesting a possible influence of supra-threshold deficits on cue-weighting strategies. Altogether, these results point to a need for individually tailored speech-in-noise processing in hearing aids if speech discriminability in noise is to be improved.
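To illustrate what "measuring the relative contributions of these cues to the decision" can amount to in practice, the sketch below fits a logistic (GLM-type) psychometric model to hypothetical trial-by-trial bump levels and binary /d/-/g/ responses, and derives a relative weight for the high-frequency cue. This is a minimal sketch under stated assumptions: the function name `estimate_cue_weights`, the normalization of the weights, and the simulated data are illustrative and are not the authors' actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_cue_weights(low_bump, high_bump, responses):
    """Fit P(/g/) = sigmoid(b0 + w_low*low + w_high*high) to trial data.

    low_bump, high_bump : per-trial bump levels at the low- and
        high-frequency cue regions (arbitrary units, e.g., dB).
    responses : binary array (1 = '/g/' response, 0 = '/d/' response).
    """
    X = np.column_stack([np.ones_like(low_bump), low_bump, high_bump])

    def neg_log_likelihood(beta):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    fit = minimize(neg_log_likelihood, x0=np.zeros(3), method="BFGS")
    b0, w_low, w_high = fit.x
    # Relative reliance on the high-frequency cue (illustrative normalization).
    rel_high = abs(w_high) / (abs(w_low) + abs(w_high))
    return {"bias": b0, "w_low": w_low, "w_high": w_high, "rel_high": rel_high}

# Illustrative usage with simulated trials (not real data):
rng = np.random.default_rng(0)
n = 500
low = rng.normal(0, 1, n)    # low-frequency bump level per trial
high = rng.normal(0, 1, n)   # high-frequency bump level per trial
true_p = 1 / (1 + np.exp(-(0.2 + 0.5 * low + 1.5 * high)))
resp = rng.binomial(1, true_p)
print(estimate_cue_weights(low, high, resp))
```

In such a model, higher internal noise would show up as a shallower psychometric function (smaller fitted weights for the same underlying cue reliance), which is consistent with the abstract's observation that differences in internal noise explain much of the inter-individual variability in the estimated weights.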