
MindLink-Eumpy: An Open-Source Python Toolbox for Multimodal Emotion Recognition

Overview
Specialty: Neurology
Date: 2021 Mar 8
PMID: 33679348
Citations: 7
Abstract

Emotion recognition plays an important role in intelligent human-computer interaction, but related research still faces the problems of low accuracy and subject dependence. In this paper, an open-source software toolbox called MindLink-Eumpy is developed to recognize emotions by integrating electroencephalogram (EEG) and facial expression information. MindLink-Eumpy first applies a series of tools to automatically acquire physiological data from subjects, then analyzes the facial expression data and the EEG data separately, and finally fuses the two signals at the decision level. For facial expression detection, MindLink-Eumpy uses a multitask convolutional neural network (CNN) based on transfer learning. For EEG detection, MindLink-Eumpy provides two algorithms: a subject-dependent model based on a support vector machine (SVM) and a subject-independent model based on a long short-term memory (LSTM) network. In the decision-level fusion, a weight enumeration method and the AdaBoost technique are applied to combine the predictions of the SVM and the CNN. We conducted two offline experiments, on the Database for Emotion Analysis Using Physiological Signals (DEAP) and the Multimodal Database for Affect Recognition and Implicit Tagging (MAHNOB-HCI), and an online experiment on 15 healthy subjects. The results show that the multimodal methods outperform the single-modal methods in both the offline and online experiments. In the subject-dependent condition, the multimodal method achieved accuracies of 71.00% in the valence dimension and 72.14% in the arousal dimension. In the subject-independent condition, the LSTM-based method achieved accuracies of 78.56% in the valence dimension and 77.22% in the arousal dimension. The feasibility and efficiency of MindLink-Eumpy for emotion recognition are thus demonstrated.
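
To make the decision-level fusion step concrete, the sketch below shows one common way to combine two models' outputs by enumerating a fusion weight, assuming each model produces a probability for the positive class (e.g., high valence) on held-out data. This is a minimal illustration of the weight-enumeration idea described in the abstract, not MindLink-Eumpy's actual API; the function fuse_by_weight_enumeration and all variable names are hypothetical.

import numpy as np

def fuse_by_weight_enumeration(p_eeg, p_face, labels, step=0.01):
    """Enumerate fusion weights w in [0, 1] and return the weight that
    maximizes accuracy of w * p_eeg + (1 - w) * p_face on held-out data."""
    best_w, best_acc = 0.0, 0.0
    for w in np.arange(0.0, 1.0 + step, step):
        fused = w * p_eeg + (1 - w) * p_face   # weighted average of probabilities
        preds = (fused >= 0.5).astype(int)     # threshold into binary labels
        acc = (preds == labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy example with illustrative probabilities from an EEG model (SVM)
# and a facial-expression model (CNN):
p_eeg = np.array([0.62, 0.40, 0.75, 0.30])
p_face = np.array([0.55, 0.35, 0.80, 0.45])
labels = np.array([1, 0, 1, 0])               # ground-truth valence labels
w, acc = fuse_by_weight_enumeration(p_eeg, p_face, labels)
print(f"best weight={w:.2f}, accuracy={acc:.2%}")

The weight selected on validation data is then reused at test time; the paper additionally applies AdaBoost at this stage, which is omitted here for brevity.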

Citing Articles

Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG.

Pan J, Fang W, Zhang Z, Chen B, Zhang Z, Wang S. IEEE Open J Eng Med Biol. 2024; 5:396-403.

PMID: 38899017. PMC: 11186647. DOI: 10.1109/OJEMB.2023.3240280.


Emotion recognition based on microstate analysis from temporal and spatial patterns of electroencephalogram.

Wei Z, Li H, Ma L, Li H. Front Neurosci. 2024; 18:1355512.

PMID: 38550568. PMC: 10972890. DOI: 10.3389/fnins.2024.1355512.


An Emotion Recognition Embedded System using a Lightweight Deep Learning Model.

Bazargani M, Tahmasebi A, Yazdchi M, Baharlouei Z. J Med Signals Sens. 2023; 13(4):272-279.

PMID: 37809016. PMC: 10559299. DOI: 10.4103/jmss.jmss_59_22.


A Bimodal Emotion Recognition Approach through the Fusion of Electroencephalography and Facial Sequences.

Muhammad F, Hussain M, Aboalsamh H. Diagnostics (Basel). 2023; 13(5).

PMID: 36900121. PMC: 10000366. DOI: 10.3390/diagnostics13050977.


A novel EEG decoding method for a facial-expression-based BCI system using the combined convolutional neural network and genetic algorithm.

Li R, Liu D, Li Z, Liu J, Zhou J, Liu W. Front Neurosci. 2022; 16:988535.

PMID: 36177358. PMC: 9513431. DOI: 10.3389/fnins.2022.988535.

