
Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

Maximizing Emotion Recognition Accuracy with Ensemble Techniques on EEG Signals

Author(s): Sonu Kumar Jha*, Somaraju Suvvari and Mukesh Kumar

Volume 17, Issue 5, 2024

Published on: 17 January, 2024

Article ID: e170124225749 Pages: 13

DOI: 10.2174/0126662558279390240105064917

Abstract

Background: Emotion is a strong feeling such as love, anger, or fear. Emotions can be recognized in two ways: from external expressions and from biomedical data. A growing body of research currently addresses emotion classification based on biomedical data.

Aim: EEG-based emotion recognition is an active research topic in the medical sector, gaming applications, education, and many other domains. Existing studies on emotion recognition have applied models such as KNN, RF ensembles, SVM, CNN, and LSTM to biomedical EEG data. Only a few works, however, have used ensemble or concatenated models for emotion recognition on EEG data, and these generally achieved better results than individual models or other machine learning approaches. Several papers have observed that CNNs extract features from the data better than other approaches, while LSTMs work better on sequential data.

Methods: Our research addresses emotion recognition from EEG data using a mixed-model deep learning methodology and compares it with a mixed-model machine learning methodology. We introduce a mixed CNN-LSTM model that classifies emotions along the valence and arousal dimensions on the DEAP dataset, using 14 channels from 32 participants.
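The mixed CNN-LSTM idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the layer sizes, kernel width, pooling factor, and window length are assumptions; only the 14-channel input and the two-class (high/low valence or arousal) output come from the abstract.

```python
# Hypothetical CNN-LSTM sketch: a 1-D CNN extracts features from the raw
# EEG channels, and an LSTM models the resulting feature sequence.
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, n_channels=14, n_classes=2):
        super().__init__()
        # CNN over raw EEG: input shape (batch, channels, time)
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),  # downsample the time axis
        )
        # LSTM over the CNN feature sequence: (batch, time, features)
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)  # high/low valence or arousal

    def forward(self, x):
        f = self.cnn(x)           # (batch, 32, time/4)
        f = f.transpose(1, 2)     # (batch, time/4, 32)
        _, (h, _) = self.lstm(f)  # last hidden state summarizes the sequence
        return self.head(h[-1])   # (batch, n_classes)

x = torch.randn(8, 14, 128)  # 8 trials, 14 channels, 128 time samples
logits = CnnLstm()(x)
print(logits.shape)  # torch.Size([8, 2])
```

The design mirrors the abstract's rationale: the convolutional stage does the feature extraction that CNNs are observed to be good at, and the LSTM consumes the resulting sequence.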

Result and Discussion: We compared the proposed model with SVM, KNN, and an RF ensemble, as well as with a concatenation of these models. The raw data were first preprocessed; emotion classification was then evaluated with SVM, KNN, RF ensemble, CNN, and LSTM individually, and finally the mixed CNN-LSTM model was compared against the concatenated SVM-KNN-RF ensemble. The proposed model achieves better accuracy, 80.70% on valence, than the individual CNN, LSTM, SVM, KNN, and RF ensemble models and than the concatenated SVM-KNN-RF model.
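The SVM-KNN-RF baseline combination above can be sketched as a voting ensemble. This is an illustrative assumption about how the concatenation works (the abstract does not specify the combination rule), and the synthetic features below merely stand in for preprocessed EEG features.

```python
# Hypothetical sketch of the SVM-KNN-RF ensemble baseline using soft voting:
# predicted class probabilities from the three classifiers are averaged.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic two-class data standing in for EEG-derived feature vectors
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",  # average class probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(f"ensemble accuracy: {acc:.2f}")
```

Soft voting typically matches or exceeds the weakest member because disagreements are resolved by the more confident classifiers, which is consistent with the abstract's finding that combined models outperform individual ones.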

Conclusion: Overall, this paper concludes that combining CNNs and LSTMs is a powerful technique for processing a range of EEG data. The ensemble approach shows better performance than previous research, with 80.70% accuracy for valence and 78.24% for arousal.


© 2024 Bentham Science Publishers