Abstract
This chapter presents a multi-modal fusion framework for emotion
recognition using multiresolution analysis. The proposed framework consists of three
main steps: (1) feature extraction and selection, (2) feature-level fusion, and (3)
mapping of emotions into the three-dimensional valence-arousal-dominance (VAD)
space. The framework uses subject-independent features, allowing it to generalize
across a wide range of emotions. Feature-level fusion lets it handle features from
many channels, including synchronous EEG channels. Representing emotions in VAD
space assigns each emotion a specific set of three coordinates, so the framework can
be extended to new emotions. In addition to the fusion framework, we have
explained how multiresolution approaches, such as the wavelet and curvelet
transforms, can be used to classify and predict emotions.
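The three-step pipeline summarized above can be illustrated with a minimal sketch. This is not the chapter's implementation: the single-level Haar transform, the subband-energy features, the concatenation fusion, and the linear VAD weights are all simplifying assumptions chosen only to make the data flow concrete.

```python
# Illustrative sketch of the three-step pipeline:
# (1) multiresolution feature extraction (single-level Haar DWT),
# (2) feature-level fusion by concatenating per-channel feature vectors,
# (3) mapping fused features to 3-D VAD coordinates.
# All signal values, weights, and helper names are assumptions, not the
# chapter's actual method.
import math

def haar_dwt(signal):
    """Single-level Haar wavelet transform: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def extract_features(channel):
    """Step 1: subband energies as simple channel features (an assumption)."""
    approx, detail = haar_dwt(channel)
    return [sum(c * c for c in approx), sum(c * c for c in detail)]

def fuse(channels):
    """Step 2: feature-level fusion -- concatenate per-channel feature vectors."""
    fused = []
    for ch in channels:
        fused.extend(extract_features(ch))
    return fused

def map_to_vad(features, weights):
    """Step 3: linear map from fused features to (valence, arousal, dominance)."""
    return tuple(sum(w * f for w, f in zip(row, features)) for row in weights)

# Toy example: two synchronous EEG channels, four samples each.
channels = [[1.0, 2.0, 3.0, 4.0], [0.5, 1.5, 2.5, 3.5]]
fused = fuse(channels)  # 2 channels x 2 subband energies = 4 features
weights = [[0.1, 0.0, 0.1, 0.0],      # valence row (illustrative values)
           [0.0, 0.1, 0.0, 0.1],      # arousal row
           [0.05, 0.05, 0.05, 0.05]]  # dominance row
vad = map_to_vad(fused, weights)
print(len(fused), [round(v, 3) for v in vad])  # 4 [4.9, 0.2, 2.55]
```

Each emotion thus receives three specific coordinates in VAD space, as the abstract describes; in practice the linear map would be replaced by a trained classifier or regressor, and the Haar transform by deeper wavelet or curvelet decompositions.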