Effects of Spectral Manipulations of Music Mixes on Musical Scene Analysis Abilities of Hearing-impaired Listeners

Overview
Journal: PLoS One
Date: 2025 Jan 10
PMID: 39792818
Abstract

Music pre-processing is becoming a recognized area of research with the goal of making music more accessible to listeners with a hearing impairment. Our previous study showed that hearing-impaired listeners preferred spectrally manipulated multi-track mixes. Nevertheless, the acoustical basis of mixing for hearing-impaired listeners remains poorly understood. Here, we assess listeners' ability to detect a musical target within mixes subjected to varying degrees of spectral manipulation using the so-called EQ-transform. This transform exaggerates or downplays the spectral distinctiveness of a track with respect to an ensemble average spectrum taken over a number of instruments. In an experiment, 30 young normal-hearing (yNH) and 24 older hearing-impaired (oHI) participants with predominantly moderate to severe hearing loss were tested. The target to be detected in the mixes was drawn from the instrument categories Lead vocals, Bass guitar, Drums, Guitar, and Piano. Our results show that both hearing loss and target category affected performance, but there were no main effects of the EQ-transform. yNH performed consistently better than oHI in all target categories, irrespective of the spectral manipulations. Both groups demonstrated the best performance in detecting Lead vocals, with yNH performing flawlessly at 100% median accuracy and oHI at 92.5% (IQR = 86.3-96.3%). By contrast, performance in detecting Bass was the worst among both yNH (Mdn = 67.5%, IQR = 60-75%) and oHI (Mdn = 60%, IQR = 50-66.3%), with the latter performing close to the chance level of 50% accuracy. Predictions from a generalized linear mixed-effects model indicated that for every decibel increase in hearing loss, the odds of correctly detecting the target decreased by 3%. Accordingly, baseline performance progressively declined to chance level at moderately severe degrees of hearing loss, independent of target category.
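The core idea of the EQ-transform, as the abstract describes it, can be sketched as scaling a track's spectral deviation from an ensemble-average spectrum. The following is a minimal illustrative sketch, not the paper's exact implementation; the function name, frame-based processing, and the choice of a single scaling factor `alpha` are assumptions for illustration.

```python
import numpy as np

def eq_transform_frame(frame, ensemble_avg_db, alpha):
    """Illustrative sketch of the EQ-transform idea (not the paper's
    exact formulation): scale a track frame's spectral deviation from
    an ensemble-average spectrum by a factor alpha.
    alpha > 1 exaggerates the track's spectral distinctiveness,
    0 < alpha < 1 downplays it, and alpha = 1 leaves it unchanged."""
    n = len(frame)
    spec = np.fft.rfft(frame)
    mag_db = 20 * np.log10(np.abs(spec) + 1e-12)
    # dB distance of this track from the ensemble average, per bin
    deviation = mag_db - ensemble_avg_db
    # rebuild the magnitude spectrum around the average with the
    # deviation rescaled, then convert back to a per-bin linear gain
    new_mag_db = ensemble_avg_db + alpha * deviation
    gain = 10 ** ((new_mag_db - mag_db) / 20)
    return np.fft.irfft(spec * gain, n=n)
```

With `alpha = 1` the gain is unity in every bin and the frame is returned unchanged, which makes the "no manipulation" condition a natural special case of the same operation.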
The frequency-domain sparsity of the mixes and larger differences between target and mix roll-off points were positively correlated with performance, especially for oHI participants (r = .3, p < .01). yNH performance, on the other hand, remained robust to changes in mix sparsity. Our findings underscore the multifaceted nature of selective listening in musical scenes and the instrument-specific consequences of spectral adjustments of the audio.
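The reported 3%-per-dB odds decrease implies a simple logistic prediction curve. The sketch below assumes a hypothetical baseline of 9:1 odds (90% accuracy) at 0 dB HL; that baseline is an illustrative assumption, not a fitted value from the study, though the 0.97 odds ratio per dB is taken from the abstract.

```python
def detection_probability(hl_db, baseline_odds=9.0, odds_ratio_per_db=0.97):
    """Logistic-model prediction of target-detection accuracy: the odds
    of a correct response shrink by 3% per dB of hearing loss (odds
    ratio 0.97/dB, from the abstract). baseline_odds (9:1, i.e. 90%
    accuracy at 0 dB HL) is an illustrative assumption."""
    odds = baseline_odds * odds_ratio_per_db ** hl_db
    return odds / (1.0 + odds)
```

Under this assumed baseline, the predicted accuracy crosses the 50% chance level (odds = 1) at roughly 72 dB HL; the exact crossing point shifts with the assumed baseline, but the shape matches the abstract's observation that performance declines to chance at moderately severe hearing loss.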
