Abstract
This chapter presents a multimodal fusion framework for emotion
recognition based on multiresolution analysis. The proposed framework consists of three
main steps: (1) feature extraction and selection, (2) feature-level fusion, and (3)
mapping of emotions into three-dimensional valence-arousal-dominance (VAD) space.
The framework relies on subject-independent features and can therefore accommodate
a wide range of emotions. It can handle features from many channels, including
synchronous EEG channels, through feature-level fusion. Because emotions are
represented in 3D space, each emotion can be mapped to a specific set of coordinates
along the three VAD axes. In addition to the fusion framework, we describe
multiresolution approaches, such as the wavelet and curvelet transforms, for
classifying and predicting emotions.
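
As a rough illustration of the three-step pipeline, the sketch below extracts wavelet-based multiresolution features from synthetic EEG channels, fuses them with a second modality's feature vector at the feature level by concatenation, and regresses the fused vector onto 3D VAD coordinates. The channel count, the sub-band energy/entropy features, and the random-forest regressor are illustrative assumptions for this sketch, not the specific design described in the chapter.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

def wavelet_features(signal, wavelet="db4", level=4):
    """Multiresolution features: energy and entropy of each sub-band
    from a multilevel discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        energy = np.sum(c ** 2)
        p = (c ** 2) / (energy + 1e-12)           # normalized sub-band power
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats.extend([energy, entropy])
    return np.array(feats)

# Hypothetical data: 32 EEG channels x 512 samples per trial, plus a
# 10-dimensional feature vector from a second modality (e.g., audio).
rng = np.random.default_rng(0)
n_trials, n_channels = 100, 32
eeg = rng.standard_normal((n_trials, n_channels, 512))
other_modality = rng.standard_normal((n_trials, 10))
vad_labels = rng.uniform(1, 9, (n_trials, 3))     # valence, arousal, dominance

# Steps 1-2: per-channel wavelet features, then feature-level fusion by
# concatenating all channels and modalities into a single vector per trial.
fused = np.array([
    np.concatenate([wavelet_features(eeg[i, ch]) for ch in range(n_channels)]
                   + [other_modality[i]])
    for i in range(n_trials)
])

# Step 3: regress the fused features onto 3D VAD coordinates.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(fused, vad_labels)
print("Predicted VAD for first trial:", model.predict(fused[:1]))
```

Because fusion here is plain concatenation, the regressor sees EEG sub-band statistics and the other modality's features in one vector, so every emotion prediction is a point with three coordinates in VAD space.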
About this chapter
Cite this chapter as:
Gyanendra K. Verma, Multimodal Fusion Framework and Multiresolution Analysis. In: Multimodal Affective Computing: Affective Information Representation, Modelling, and Analysis (2023) 1: 59. https://doi.org/10.2174/9789815124453123010009
DOI https://doi.org/10.2174/9789815124453123010009
Publisher Name Bentham Science Publishers