Preface
Page: ii-iii (2)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010002
PDF Price: $15
Acknowledgements
Page: iv-iv (1)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010003
PDF Price: $15
Affective Computing
Page: 1-12 (12)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010004
PDF Price: $15
Abstract
With the advent of high-power computing systems, machines are expected to show intelligence on par with human beings. A machine must be able to analyze and interpret emotions to demonstrate intelligent behavior. Affective computing not only helps computers perform intelligently but also aids decision-making. This chapter introduces affective computing and related issues that influence emotions. It also provides an overview of human-computer interaction (HCI) and the possible use of different modalities for HCI. Finally, challenges in affective computing are discussed, along with its applications in various areas.
Affective Information Representation
Page: 13-29 (17)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010005
PDF Price: $15
Abstract
This chapter presents a brief overview of affective computing and formal
definitions of emotion given by various researchers. Human-computer interaction aims
to enhance communication between man and machine so that machines can acquire,
analyze, interpret, and act on par with human beings. Affective human-computer
interaction, in turn, focuses on enhancing this communication using affective
information. The chapter also deals with human emotional expression and perception
through various modalities such as speech, facial expressions, and physiological
signals, and gives a detailed overview of Action Units and techniques for classifying
facial expressions as reported in the literature.
Models and Theory of Emotion
Page: 30-39 (10)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010006
PDF Price: $15
Abstract
This chapter presents a state-of-the-art review of existing emotion theories,
modeling approaches, and affective information extraction and processing methods.
The basic theory of emotions covers Darwin's evolutionary theory, the Schachter–Singer
theory of emotion, and the James–Lange theory. These theories are fundamental building
blocks of affective computing research. Emotion modeling approaches can be
categorized into categorical, appraisal, and dimensional models. Notable
contributions to affect recognition systems in terms of modality, database, and
dimensionality are also discussed in this chapter.
Affective Information Extraction, Processing and Evaluation
Page: 40-48 (9)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010007
PDF Price: $15
Abstract
This chapter presents a state-of-the-art review of existing affective
information extraction and processing approaches. Evaluation criteria are also
reported, including metrics such as ROC, the F1 measure, Mean Square Error, and
Mean Absolute Error, along with threshold and performance criteria.
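The metrics listed in this abstract can be sketched in plain NumPy. The implementations below are generic textbook definitions for illustration, not code from the book:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 measure for binary labels (1 = positive class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def mse(y_true, y_pred):
    """Mean Square Error for continuous predictions."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(d ** 2))

def mae(y_true, y_pred):
    """Mean Absolute Error for continuous predictions."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.abs(d)))

print(f1_score([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8
print(mse([1.0, 2.0], [1.5, 2.5]))           # 0.25
print(mae([1.0, 2.0], [1.5, 2.5]))           # 0.5
```

F1 is typically used for discrete emotion classes, while MSE/MAE suit continuous valence-arousal predictions.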
Multimodal Affective Information Fusion
Page: 49-58 (10)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010008
PDF Price: $15
Abstract
Multimodal information can be assimilated at three levels: (1) early
fusion, (2) intermediate fusion, and (3) late fusion. Early fusion is performed at the
sensor or signal level, intermediate fusion at the feature level, and late fusion at
the decision level. Further fusion techniques include rank-based and adaptive
schemes. This chapter provides an extensive review of fusion-based studies and
reports notable work. Finally, we discuss the challenges associated with
multimodal fusion.
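The feature-level and decision-level fusion schemes described above can be illustrated with a minimal sketch; the feature matrices and probabilities here are toy values, not data from the book:

```python
import numpy as np

# Two modalities, 3 samples each (toy values).
audio_feat = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])   # 2 audio features
video_feat = np.array([[1.0], [2.0], [3.0]])                  # 1 video feature

# Feature-level (intermediate) fusion: concatenate per-sample feature vectors.
fused = np.hstack([audio_feat, video_feat])
print(fused.shape)  # (3, 3)

# Decision-level (late) fusion: combine per-modality class probabilities,
# here by simple averaging (weighted or rank-based rules are alternatives).
p_audio = np.array([[0.7, 0.3], [0.4, 0.6], [0.2, 0.8]])
p_video = np.array([[0.6, 0.4], [0.5, 0.5], [0.1, 0.9]])
p_late = (p_audio + p_video) / 2
labels = p_late.argmax(axis=1)
print(labels)  # [0 1 1]
```

Early (signal-level) fusion would instead combine the raw sensor streams before any feature extraction, which requires synchronized sampling across modalities.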
Multimodal Fusion Framework and Multiresolution Analysis
Page: 59-74 (16)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010009
PDF Price: $15
Abstract
This chapter presents a multimodal fusion framework for emotion
recognition using multiresolution analysis. The proposed framework consists of three
significant steps: (1) feature extraction and selection, (2) feature-level fusion, and (3)
mapping of emotions in three-dimensional VAD (valence-arousal-dominance) space.
The framework considers subject-independent features and can therefore incorporate
many more emotions. Because it can handle many channel features, especially
synchronous EEG channels, feature-level fusion works well. Representing emotions in
3D space allows each emotion to be mapped to specific coordinates along the three
axes. In addition to the fusion framework, we explain multiresolution approaches,
such as the wavelet and curvelet transforms, for classifying and predicting emotions.
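The multiresolution (wavelet) feature extraction step can be sketched with a hand-rolled single-level Haar transform applied recursively; this is a generic illustration of sub-band energy features, not the book's exact pipeline (which may use other wavelet families or the curvelet transform):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar wavelet transform: returns (approximation,
    detail) coefficients for an even-length 1-D signal."""
    x = np.asarray(signal, float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Energy of the detail sub-band at each decomposition level,
    a common compact feature vector for emotion classification."""
    feats = []
    approx = np.asarray(signal, float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(float(np.sum(detail ** 2)))
    return feats

sig = np.sin(np.linspace(0, 4 * np.pi, 64))   # toy 1-D signal
print(wavelet_energy_features(sig))
```

The Haar transform is orthonormal, so the total energy of approximation plus detail coefficients equals the energy of the input at every level; in practice a library such as PyWavelets would replace the hand-rolled transform.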
Emotion Recognition From Facial Expression In A Noisy Environment
Page: 75-96 (22)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010010
PDF Price: $15
Abstract
This study presents emotion recognition from facial expressions in a noisy
environment. The challenges addressed are noise in the images and illumination
changes. Wavelets have been used extensively for noise reduction; therefore, we apply
wavelet and curvelet analysis to noisy images. The experiments are performed with
different Gaussian noise parameters (mean: 0.01, 0.03; variance: 0.01, 0.03).
Similarly, for experimentation with illumination changes, we consider different
dynamic ranges (0.1, 0.9). Three benchmark databases, Cohn-Kanade, JAFFE, and an
in-house database, are used for all experimentation, and five machine learning
algorithms are used for classification. Experimental results show that SVM and MLP
classifiers with wavelet- and curvelet-based coefficients yield better results for
emotion recognition. We conclude that wavelet-coefficient-based features perform
well for facial expression recognition, especially in the presence of Gaussian noise
and illumination changes.
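The two corruptions in this experimental setup, additive Gaussian noise with a given mean/variance and a compressed dynamic range, can be sketched as follows; the helper names and the toy 4x4 image are illustrative assumptions:

```python
import numpy as np

def add_gaussian_noise(image, mean=0.01, var=0.01, seed=0):
    """Corrupt a [0, 1]-scaled grayscale image with additive Gaussian
    noise of the given mean and variance, then clip back to [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(mean, np.sqrt(var), size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def adjust_dynamic_range(image, low=0.1, high=0.9):
    """Simulate an illumination change by linearly rescaling
    intensities into the range [low, high]."""
    lo, hi = image.min(), image.max()
    scaled = (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)
    return low + scaled * (high - low)

img = np.linspace(0, 1, 16).reshape(4, 4)     # toy 4x4 "image"
noisy = add_gaussian_noise(img, mean=0.01, var=0.03)
dim = adjust_dynamic_range(img, 0.1, 0.9)
print(dim.min(), dim.max())
```

Features would then be extracted from `noisy` and `dim` exactly as from the clean images, so the classifier's robustness to each corruption can be measured in isolation.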
Spontaneous Emotion Recognition From Audio-Visual Signals
Page: 97-114 (18)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010011
PDF Price: $15
Abstract
This chapter introduces an emotion recognition system based on audio and
video cues. For audio-based emotion recognition, we explore various aspects of
feature extraction and classification strategy and find that wavelet analysis performs
well. We show comparative results for the discriminating capabilities of various
feature combinations using Fisher Discriminant Analysis (FDA). Finally, we combine
the audio and video features using a feature-level fusion approach. All experiments
are performed with the eNTERFACE and RML databases. Although multiple
classifiers were applied, SVM shows significantly improved performance for both
single modalities and fusion. The fusion results outperform those based on a single
modality of audio or video. We conclude that fusion approaches perform best because
they exploit complementary information from multiple modalities.
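The Fisher-based comparison of feature discriminability mentioned above can be sketched with the per-feature Fisher discriminant ratio; the two toy features below are invented to show the idea, not values from the eNTERFACE or RML experiments:

```python
import numpy as np

def fisher_ratio(feature, labels):
    """Fisher discriminant ratio of one feature for a two-class problem:
    between-class scatter over within-class scatter. A larger ratio means
    the feature separates the two classes better."""
    x, y = np.asarray(feature, float), np.asarray(labels)
    x0, x1 = x[y == 0], x[y == 1]
    between = (x0.mean() - x1.mean()) ** 2
    within = x0.var() + x1.var()
    return between / within if within > 0 else np.inf

# Toy example: feature A separates the classes, feature B does not.
labels = np.array([0, 0, 0, 1, 1, 1])
feat_a = np.array([1.0, 1.1, 0.9, 5.0, 5.1, 4.9])
feat_b = np.array([1.0, 5.0, 3.0, 1.1, 4.9, 3.1])
print(fisher_ratio(feat_a, labels) > fisher_ratio(feat_b, labels))  # True
```

Ranking candidate audio and video features by this ratio gives a simple, classifier-independent way to compare feature combinations before fusion.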
Multimodal Fusion Framework: Emotion Recognition From Physiological Signals
Page: 115-127 (13)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010012
PDF Price: $15
Abstract
This study presents a multimodal fusion framework for emotion recognition
from physiological signals. In contrast to emotion recognition through facial
expression, a large number of emotions can be recognized accurately through
physiological signals. The DEAP database, a benchmark multimodal database with a
large collection of EEG and peripheral signals, is employed for experimentation. The
proposed method considers features that are subject-independent and can incorporate
many more emotions. Because it is possible to handle many channel features,
especially synchronous EEG channels, feature-level fusion is applied in this study.
The features extracted from EEG and peripheral signals include the relative,
logarithmic, and absolute power energy of the Alpha, Beta, Gamma, Delta, and Theta
bands. Experimental results demonstrate that the Theta and Beta bands are the most
significant contributors to performance, and SVM performs best among the classifiers.
Emotions Modelling in 3D Space
Page: 128-147 (20)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010013
PDF Price: $15
Abstract
In this study, we discuss emotion representation in two- and three-dimensional space. The three-dimensional space is based on the three emotion primitives: valence, arousal, and dominance. The multimodal cues used in this study are EEG, physiological signals, and video (with limitations). Due to the limited emotional content in videos from the DEAP database, we consider only three classes of emotions: happy, sad, and terrible. The wavelet transform, a classical transform, was employed for multiresolution analysis of signals to extract features. We evaluate the proposed emotion model on the standard multimodal DEAP dataset. The experimental results show that SVM and MLP can predict emotions from both single and multimodal cues.
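Mapping a predicted point in 3D valence-arousal-dominance space to a discrete emotion class can be sketched as a nearest-prototype lookup. The prototype coordinates below are invented for illustration on a 1-9 self-assessment scale (as in DEAP) and are not taken from the book:

```python
import numpy as np

# Hypothetical VAD (valence, arousal, dominance) prototypes on a 1-9
# rating scale; the coordinates are illustrative assumptions.
PROTOTYPES = {
    "happy":    np.array([7.5, 6.5, 6.0]),
    "sad":      np.array([3.0, 3.5, 3.5]),
    "terrible": np.array([2.0, 7.0, 3.0]),
}

def nearest_emotion(vad):
    """Map a predicted VAD point to the closest prototype emotion
    by Euclidean distance in 3D space."""
    vad = np.asarray(vad, float)
    return min(PROTOTYPES, key=lambda e: np.linalg.norm(vad - PROTOTYPES[e]))

print(nearest_emotion([7.0, 6.0, 5.5]))  # happy
print(nearest_emotion([2.5, 6.8, 3.2]))  # terrible
```

Because each emotion occupies a specific coordinate region, the same lookup extends naturally to more classes by adding prototypes, which is the advantage of the dimensional representation over a fixed categorical model.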
Subject Index
Page: 148-153 (6)
Author: Gyanendra K. Verma*
DOI: 10.2174/9789815124453123010014
PDF Price: $15
Introduction
Affective computing is an emerging field situated at the intersection of artificial intelligence and behavioral science. It refers to the study and development of systems that recognize, interpret, process, and simulate human emotions, and it has recently seen significant advances from exploratory studies to real-world applications. Multimodal Affective Computing offers readers a concise overview of the state-of-the-art and emerging themes in affective computing, including a comprehensive review of existing approaches in applied affective computing systems and social signal processing. It covers affective facial expression and recognition, affective body expression and recognition, affective speech processing, affective text and dialogue processing, recognizing affect using physiological measures, computational models of emotion and theoretical foundations, and affective sound and music processing. The book identifies future directions for the field and summarizes a set of guidelines for developing next-generation affective computing systems that are effective, safe, and human-centered. It is an informative resource for academicians, professionals, researchers, and students at engineering and medical institutions working in the areas of applied affective computing, sentiment analysis, and emotion recognition.