
Target of Selective Auditory Attention Can Be Robustly Followed with MEG

Overview
Journal: Sci Rep
Specialty: Science
Date: 2023 Jul 6
PMID: 37414861
Abstract

Selective auditory attention enables filtering of relevant acoustic information from irrelevant input. Specific auditory responses, measurable with magneto- and electroencephalography (MEG/EEG), are known to be modulated by attention to the evoking stimuli. However, such attention effects have typically been studied in unnatural conditions (e.g. dichotic listening to pure tones) and have been demonstrated mostly in averaged auditory evoked responses. To test how reliably we can detect the attention target from unaveraged brain responses, we recorded MEG data from 15 healthy subjects who were presented with two human speakers continuously uttering the words "Yes" and "No" in an interleaved manner. The subjects were asked to attend to one speaker. To investigate which temporal and spatial aspects of the responses carry the most information about the target of auditory attention, we performed spatially and temporally resolved classification of the unaveraged MEG responses using a support vector machine. Sensor-level decoding of the responses to attended vs. unattended words resulted in a mean accuracy of [Formula: see text] (N = 14) for both stimulus words. The discriminating information was mostly available 200-400 ms after stimulus onset. Spatially resolved source-level decoding indicated that the most informative sources were in the auditory cortices of both the left and right hemispheres. Our results corroborate the attentional modulation of auditory evoked responses and show that such modulations are detectable in unaveraged MEG responses with high accuracy, which could be exploited, e.g., in an intuitive brain-computer interface.
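
As a rough illustration of the kind of analysis described above, the sketch below performs time-resolved decoding of attended vs. unattended word epochs with a linear support vector machine, using MNE-Python and scikit-learn. The file name, event labels, and cross-validation settings are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch: time-resolved decoding of attended vs. unattended word
# responses from single-trial MEG epochs with MNE-Python and scikit-learn.
# File name, event labels, and CV settings are illustrative assumptions.
import mne
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load epoched MEG data; "subject01-epo.fif" is a placeholder file name.
epochs = mne.read_epochs("subject01-epo.fif")

# Hypothetical event labels marking words spoken by the attended vs.
# unattended speaker.
epochs = epochs[["attended", "unattended"]]
X = epochs.get_data()   # shape: (n_epochs, n_channels, n_times)
y = (epochs.events[:, 2] == epochs.event_id["attended"]).astype(int)

# Linear SVM fitted independently at every time point; the resulting score
# time course shows when the attention effect is decodable (in the paper,
# mostly 200-400 ms after word onset).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)

# 5-fold cross-validated decoding accuracy for each time point.
scores = cross_val_multiscore(time_decoder, X, y, cv=5, n_jobs=1)
print("Mean accuracy per time point:", scores.mean(axis=0))

A spatially resolved variant of the same idea would run such a classifier separately on source-localized signals per cortical region rather than on the full sensor array.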
