Abstract
Background: Emotion is a strong feeling such as love, anger, or fear. Emotions can be recognized in two ways: from external expressions or from biomedical data. Considerable research is currently being conducted on emotion classification using biomedical data.
Aim: EEG-based emotion identification is an active area of research in the medical sector, gaming applications, the education sector, and many other domains. Existing work on emotion recognition from biomedical EEG data has used models such as KNN, RF ensemble, SVM, CNN, and LSTM. Only a few studies, however, have applied ensemble or concatenated models to EEG-based emotion recognition, and these have achieved better results than individual machine learning approaches. Several papers have observed that CNNs extract features from such datasets more effectively than other approaches, while LSTMs perform better on sequential data.
Methods: This research addresses emotion recognition from EEG data using a mixed deep learning model and compares it with a mixed machine learning model. We introduce a mixed CNN-LSTM model that classifies emotions along the valence and arousal dimensions on the DEAP dataset, using 14 channels from 32 participants.
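The following is a minimal sketch of a mixed CNN-LSTM classifier of the kind described above. The segment length, layer sizes, and binary high/low labels are assumptions for illustration, not the authors' exact architecture or preprocessing.

```python
# Hypothetical CNN-LSTM sketch for EEG emotion classification (valence or arousal).
# Assumed input: windows of 128 time steps over 14 channels; assumed layer sizes.
import numpy as np
from tensorflow.keras import layers, models

n_channels, n_timesteps = 14, 128  # assumed DEAP-style segment shape

model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    # 1D convolutions extract local features from each EEG window
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM models the temporal sequence of the convolutional feature maps
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # high vs. low valence (or arousal)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Usage with random placeholder data standing in for preprocessed EEG segments
X = np.random.randn(32, n_timesteps, n_channels).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```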
Results and Discussion: The raw data were first preprocessed, and emotion classification was then evaluated with SVM, KNN, RF ensemble, CNN, and LSTM individually. These results were compared with the mixed CNN-LSTM model and with a concatenated SVM-KNN-RF ensemble model. The proposed model achieves higher accuracy, 80.70% for valence, than the individual CNN, LSTM, SVM, KNN, and RF ensemble models and the concatenated SVM-KNN-RF ensemble model.
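A minimal sketch of the machine learning comparison is shown below: the individual SVM, KNN, and Random Forest classifiers alongside a combined (soft-voting) ensemble of the three. The feature matrix, hyperparameters, and voting scheme are assumptions for illustration; the paper's exact concatenation and feature extraction may differ.

```python
# Hypothetical comparison of SVM, KNN, RF, and their combined ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder features: one row of hand-crafted EEG features per segment
X = np.random.randn(200, 70)
y = np.random.randint(0, 2, size=200)  # high vs. low valence labels

svm = SVC(kernel="rbf", probability=True)
knn = KNeighborsClassifier(n_neighbors=5)
rf = RandomForestClassifier(n_estimators=100)

# Soft-voting ensemble combining the three base classifiers
ensemble = VotingClassifier(
    estimators=[("svm", svm), ("knn", knn), ("rf", rf)], voting="soft"
)

for name, clf in [("SVM", svm), ("KNN", knn), ("RF", rf), ("Ensemble", ensemble)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```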
Conclusion: Overall, this paper concludes that the combination of CNNs and LSTMs is a powerful technique for processing a range of EEG data. The proposed mixed-model approach outperforms previous research, reaching 80.70% accuracy for valence and 78.24% for arousal.
Graphical Abstract