
Do We Parse the Background into Separate Streams in the Cocktail Party?

Overview
Specialty Neurology
Date 2022 Nov 17
PMID 36393982
Abstract

In the cocktail party situation, people with normal hearing usually follow a single speaker among multiple concurrent ones. However, there is no agreement in the literature as to whether the background is segregated into multiple streams/speakers. The current study varied the number of concurrent speech streams and investigated target detection and memory for the contents of a target stream, as well as the processing of distractors. A male-voiced target stream was presented either alone (single-speech), together with one male-voiced distractor (one-distractor), or together with a male- and a female-voiced distractor (two-distractor). Behavioral measures of target detection and content-tracking performance, as well as event-related brain potentials (ERPs) related to target and distractor detection, were assessed. We found that the N2 amplitude decreased, whereas the P3 amplitude increased, from the single-speech to the concurrent-speech conditions. Importantly, the behavioral effect of distractors differed between the one- and two-distractor conditions, and the non-zero voltages in the N2 time window for distractor numerals and in the P3 time window for syntactic violations appearing in the non-target speech stream differed significantly between the one- and two-distractor conditions for the same (male) speaker. These results support the notion that the two background speech streams are segregated, as they show that distractors and syntactic violations appearing in the non-target streams are processed even when two non-target speech streams are delivered together with the target stream.
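For readers unfamiliar with the amplitude measures reported above: comparing N2 or P3 amplitudes across conditions amounts to averaging the EEG signal within a fixed post-stimulus time window for each trial and contrasting those per-trial means between conditions. The sketch below is not the authors' analysis code; it uses synthetic data, a single hypothetical electrode, and illustrative window boundaries purely to show the form of such a comparison.

```python
# Minimal sketch (not the study's pipeline): mean ERP amplitude in a
# time window, compared between two hypothetical listening conditions.
import numpy as np
from scipy import stats

def mean_amplitude(epochs, times, window):
    """epochs: (n_trials, n_times) single-channel data in microvolts;
    times: (n_times,) in seconds; window: (start, end) in seconds."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean(axis=1)  # one mean amplitude per trial

# Synthetic data standing in for condition-wise epochs:
# 60 trials, 600 samples spanning -0.2 to 1.0 s around stimulus onset.
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 1.0, 600)
single_speech = rng.normal(0.0, 5.0, size=(60, times.size))
two_distractor = rng.normal(0.0, 5.0, size=(60, times.size))

# Illustrative N2 window only; the study's exact windows are given in the paper.
n2_single = mean_amplitude(single_speech, times, (0.20, 0.35))
n2_two = mean_amplitude(two_distractor, times, (0.20, 0.35))
t, p = stats.ttest_ind(n2_single, n2_two)
print(f"N2 window comparison: t = {t:.2f}, p = {p:.3f}")
```

In practice, the window means would be computed from the recorded condition-wise epochs rather than random data, and the P3 comparison would follow the same form with a later window.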

Citing Articles

The effects of spatial leakage correction on the reliability of EEG-based functional connectivity networks.

Nagy P, Toth B, Winkler I, Boncz A. Hum Brain Mapp. 2024; 45(8):e26747.

PMID: 38825981. PMC: 11144954. DOI: 10.1002/hbm.26747.
