
Developing a Benchmark for Emotional Analysis of Music

Overview
Journal: PLoS One
Date: 2017 Mar 11
PMID: 28282400
Citations: 16
Abstract

The field of music emotion recognition (MER) has expanded rapidly in the last decade, and many new methods and audio features have been developed to improve the performance of MER algorithms. However, comparing the performance of new methods is difficult because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations at 2 Hz time resolution for 1,802 songs and song excerpts licensed under Creative Commons). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. In total, the benchmark attracted 21 active teams to the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network-based approaches combined with large feature sets work best for dynamic MER.

Citing Articles

What emotions does music express? Structure of affect terms in music using iterative crowdsourcing paradigm.

Eerola T, Saari P. PLoS One. 2025; 20(1):e0313502.

PMID: 39841646. PMC: 11753638. DOI: 10.1371/journal.pone.0313502.


Music-induced emotion flow modeling by ENMI Network.

Shang Y, Peng Q, Wu Z, Liu Y. PLoS One. 2024; 19(10):e0297712.

PMID: 39432493. PMC: 11493256. DOI: 10.1371/journal.pone.0297712.


People make mistakes: Obtaining accurate ground truth from continuous annotations of subjective constructs.

Booth B, Narayanan S. Behav Res Methods. 2024; 56(8):8784-8800.

PMID: 39349847. PMC: 11525321. DOI: 10.3758/s13428-024-02503-3.


A review of artificial intelligence methods enabled music-evoked EEG emotion recognition and their applications.

Su Y, Liu Y, Xiao Y, Ma J, Li D. Front Neurosci. 2024; 18:1400444.

PMID: 39296709. PMC: 11408483. DOI: 10.3389/fnins.2024.1400444.


Accelerated construction of stress relief music datasets using CNN and the Mel-scaled spectrogram.

Choi S, Park J, Hong C, Park S, Park S. PLoS One. 2024; 19(5):e0300607.

PMID: 38787824. PMC: 11125514. DOI: 10.1371/journal.pone.0300607.


References
1. Schubert E. Update of the Hevner adjective checklist. Percept Mot Skills. 2003; 96(3 Pt 2):1117-22. DOI: 10.2466/pms.2003.96.3c.1117.

2. Korhonen M, Clausi D, Jernigan M. Modeling emotional content of music using system identification. IEEE Trans Syst Man Cybern B Cybern. 2006; 36(3):588-99. DOI: 10.1109/tsmcb.2005.862491.

3. McKeown G, Sneddon I. Modeling continuous self-report measures of perceived emotion using generalized additive mixed models. Psychol Methods. 2013; 19(1):155-74. DOI: 10.1037/a0034282.

4. Lin L. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989; 45(1):255-68.