Multimodal Affective Computing: Affective Information Representation, Modelling, and Analysis

Multimodal Fusion Framework and Multiresolution Analysis

Author(s): Gyanendra K. Verma

Pp: 59-74 (16)

DOI: 10.2174/9789815124453123010009


Abstract

This chapter presents a multimodal fusion framework for emotion recognition based on multiresolution analysis. The proposed framework consists of three main steps: (1) feature extraction and selection, (2) feature-level fusion, and (3) mapping of emotions into the three-dimensional valence-arousal-dominance (VAD) space. Because the framework relies on subject-independent features, it can accommodate a wide range of emotions, and its feature-level fusion can handle a large number of channels, in particular synchronous EEG channels. Representing emotions in VAD space assigns each emotion specific coordinates along the three dimensions, so the mapping extends naturally to further emotions. In addition to the fusion framework, we describe multiresolution approaches, such as the wavelet and curvelet transforms, for classifying and predicting emotions.
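To make the three-step pipeline concrete, the following is a minimal Python sketch, not the chapter's implementation: wavelet-based multiresolution feature extraction per channel, feature-level fusion by concatenation, and a multi-output regression into VAD space. The wavelet ('db4'), decomposition level, sub-band statistics, and ridge regressor are illustrative assumptions the abstract does not fix; PyWavelets (pywt) and scikit-learn supply the transform and the regressor.

```python
# Minimal sketch of the pipeline described in the abstract. The wavelet,
# decomposition level, sub-band statistics, and regressor are assumptions
# for illustration, not the chapter's stated choices.
import numpy as np
import pywt
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

def wavelet_features(signal, wavelet="db4", level=4):
    """Step 1: multiresolution feature extraction from one channel."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Summarise each wavelet sub-band with simple statistics and energy.
    return np.array([f(c) for c in coeffs
                     for f in (np.mean, np.std, lambda x: np.sum(x ** 2))])

def fuse_features(channels):
    """Step 2: feature-level fusion by concatenating per-channel feature
    vectors (e.g. synchronous EEG channels or other modality streams)."""
    return np.concatenate([wavelet_features(ch) for ch in channels])

# Step 3: map the fused feature vector into 3-D VAD space with a
# multi-output regressor (a hypothetical choice; dummy data throughout).
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.standard_normal((32, 512)))  # 32 channels
              for _ in range(20)])                           # 20 trials
y = rng.uniform(1, 9, size=(20, 3))                          # dummy VAD labels
model = MultiOutputRegressor(Ridge()).fit(X, y)
vad = model.predict(X[:1])  # predicted (valence, arousal, dominance)
```

In this sketch, each emotion ends up as a point with three coordinates in VAD space, which is what allows the mapping to extend to further emotions without retraining a fixed set of class labels.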
