
Musicians Show Improved Speech Segregation in Competitive, Multi-Talker Cocktail Party Scenarios

Overview
Journal: Front Psychol
Date: 2020 Sep 25
PMID: 32973610
Citations: 19
Abstract

Studies suggest that long-term music experience enhances the brain's ability to segregate speech from noise. However, musicians' "speech-in-noise (SIN) benefit" is based largely on evidence from simple figure-ground tasks rather than competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment conducted in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0, 1, 2, 3, 4, 6, or 8 simultaneous talkers). Musicians obtained faster and better speech recognition amidst up to around eight simultaneous talkers and showed less noise-related decline in performance with increasing interferers than their non-musician peers. Correlations revealed associations between listeners' years of musical training and both CRM recognition and working memory. However, better working memory also correlated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming but also suggest that cognitive factors at least partially drive musicians' SIN advantage.
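The CRM scoring described above amounts to counting a trial correct only when both the color and number keywords of the target callsign sentence are recalled, then aggregating accuracy by group and masker count. The following is a minimal sketch of that bookkeeping, not the authors' analysis code; the data layout, column names, group labels, and example values are illustrative assumptions, and Python/pandas is simply one convenient choice.

```python
# Minimal illustrative sketch (not the authors' code): score CRM trials and
# summarize recognition accuracy by group and number of maskers.
# All column names, labels, and values below are hypothetical.
import pandas as pd

def score_trial(resp_color, resp_number, target_color, target_number):
    """A CRM trial counts as correct only if both the color and the
    number keyword of the target callsign sentence are recalled."""
    return int(resp_color == target_color and resp_number == target_number)

# Hypothetical long-format trial data: one row per trial.
trials = pd.DataFrame({
    "group":         ["musician", "musician", "nonmusician", "nonmusician"],
    "n_maskers":     [0, 4, 0, 4],          # study used 0, 1, 2, 3, 4, 6, or 8
    "resp_color":    ["red", "blue", "red", "green"],
    "resp_number":   [2, 5, 2, 7],
    "target_color":  ["red", "blue", "red", "blue"],
    "target_number": [2, 5, 2, 5],
})

trials["correct"] = [
    score_trial(r.resp_color, r.resp_number, r.target_color, r.target_number)
    for r in trials.itertuples()
]

# Mean recognition accuracy per group at each masker count.
accuracy = (trials
            .groupby(["group", "n_maskers"])["correct"]
            .mean()
            .unstack("n_maskers"))
print(accuracy)
```

With real data, the resulting group-by-masker table is what would be inspected for the reported pattern: a shallower decline in accuracy with increasing interferers for musicians than for non-musicians.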

Citing Articles

Hearing in categories and speech perception at the "cocktail party".

Bidelman G, Bernard F, Skubic K. PLoS One. 2025; 20(1):e0318600.

PMID: 39883695 PMC: 11781644. DOI: 10.1371/journal.pone.0318600.


Covert variations of a musician's loudness during collective improvisation capture other musicians' attention and impact their interactions.

Schwarz A, Faraco A, Vincent C, Susini P, Ponsot E, Canonne C. Proc Biol Sci. 2025; 292(2039):20242623.

PMID: 39837509 PMC: 11750383. DOI: 10.1098/rspb.2024.2623.


Musicianship Modulates Cortical Effects of Attention on Processing Musical Triads.

MacLean J, Drobny E, Rizzi R, Bidelman G. Brain Sci. 2024; 14(11).

PMID: 39595842 PMC: 11592084. DOI: 10.3390/brainsci14111079.


Functional benefits of continuous vs. categorical listening strategies on the neural encoding and perception of noise-degraded speech.

Rizzi R, Bidelman G. Brain Res. 2024; 1844:149166.

PMID: 39151718 PMC: 11399885. DOI: 10.1016/j.brainres.2024.149166.


Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response.

Bidelman G, Sisson A, Rizzi R, MacLean J, Baer K. Front Neurosci. 2024; 18:1422903.

PMID: 39040631 PMC: 11260751. DOI: 10.3389/fnins.2024.1422903.


References
1. Bidelman G, Schneider A, Heitzmann V, Bhagat S. Musicianship enhances ipsilateral and contralateral efferent gain control to the cochlea. Hear Res. 2016; 344:275-283. DOI: 10.1016/j.heares.2016.12.001.

2. Ruggles D, Freyman R, Oxenham A. Influence of musical training on understanding voiced and whispered speech in noise. PLoS One. 2014; 9(1):e86980. PMC: 3904968. DOI: 10.1371/journal.pone.0086980.

3. Sares A, Foster N, Allen K, Hyde K. Pitch and Time Processing in Speech and Tones: The Effects of Musical Training and Attention. J Speech Lang Hear Res. 2018; 61(3):496-509. DOI: 10.1044/2017_JSLHR-S-17-0207.

4. Mankel K, Bidelman G. Inherent auditory skills rather than formal music training shape the neural encoding of speech. Proc Natl Acad Sci U S A. 2018; 115(51):13129-13134. PMC: 6304957. DOI: 10.1073/pnas.1811793115.

5. Shamma S, Elhilali M, Micheyl C. Temporal coherence and attention in auditory scene analysis. Trends Neurosci. 2011; 34(3):114-23. PMC: 3073558. DOI: 10.1016/j.tins.2010.11.002.