About the Editors
Page: i-i (1)
Author: Juan Manuel Górriz, Elmar W. Lang and Javier Ramírez
DOI: 10.2174/97816080521891110101000i
Preface
Page: iii-iii (1)
Author: Juan Manuel Górriz, Elmar W. Lang and Javier Ramírez
DOI: 10.2174/978160805218911101010iii
Contributors
Page: iv-xiii (10)
Author: Juan Manuel Górriz, Elmar W. Lang and Javier Ramírez
DOI: 10.2174/9781608052189111010100iv
Decomposition Techniques In Neuroscience
Page: 1-25 (25)
Author: M. De Vos, Lieven De Lathauwer and S. Van Huffel
DOI: 10.2174/978160805218911101010001
Abstract
Cognitive science tries to understand the working of the brain. As the most important recording techniques (electroencephalography and functional magnetic resonance imaging) only measure a combination of different active brain sources, Blind Source Separation (BSS) techniques have a wide range of applications in this field. BSS aims at extracting the individual brain processes. The current state of the art of BSS in neuroscience will be discussed. This chapter aims at clarifying why different problems are solved in different ways that reflect different assumptions. We will explain some algorithms in detail, focusing on Canonical Correlation Analysis (CCA), Independent Component Analysis (ICA) and Canonical/Parallel Factor Analysis (CPA). It will be shown how these algorithms can be applied to enhance brain research, and we will highlight the potential and limitations of current BSS techniques.
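The chapter's own algorithms are not reproduced here, but the core BSS idea — recovering unobserved sources from linear sensor mixtures — can be sketched with the FastICA implementation in scikit-learn. All signals and the mixing matrix below are hypothetical toy data, not taken from the chapter:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# two hypothetical "brain sources": a slow sinusoid and a fast square wave
s1 = np.sin(2 * np.pi * 1.0 * t)
s2 = np.sign(np.sin(2 * np.pi * 5.0 * t))   # non-Gaussian, as ICA requires
S = np.c_[s1, s2]

# sensors record an unknown linear mixture of the sources
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# blind separation: recover the sources up to order and scale
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# each estimated component should correlate strongly with one true source
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(corr.max(axis=1))
```

ICA recovers sources only up to permutation and sign/scale, which is why the check uses absolute correlations rather than direct comparison.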
Exploratory Matrix Factorization Techniques for Large Scale Biomedical Data Sets
Page: 26-47 (22)
Author: E. W. Lang, R. Schachtner, D. Lutter, D. Herold, A. Kodewitz, F. Blochl, F. J. Theis, I. R. Keck, J. M. Gorriz Saez, P. Gomez, P. Gomez Vilda and A. M. Tome
DOI: 10.2174/978160805218911101010026
Abstract
Exploratory matrix factorization (EMF) techniques applied to two-way or multi-way biomedical data arrays provide new and efficient analysis tools. They are currently being explored to analyze large-scale data sets such as gene expression profiles (GEPs) measured on microarrays, lipidomic or metabolomic profiles acquired by mass spectrometry (MS) and/or high performance liquid chromatography (HPLC), as well as biomedical images acquired with functional imaging techniques such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). Exploratory feature extraction techniques such as Principal Component Analysis (PCA), Independent Component Analysis (ICA) or sparse Nonnegative Matrix Factorization (NMF) yield uncorrelated, statistically independent, or sparsely encoded and strictly non-negative features, which in the case of GEPs are called eigenarrays (PCA), expression modes (ICA) or metagenes (NMF). These features characterize the data sets under study, are generally considered indicative of underlying regulatory processes or functional networks, and also serve as discriminative features for classification purposes. In the latter case, EMF techniques, when combined with diagnostic a priori knowledge, can be applied directly to the classification of biomedical data sets, either by grouping samples into different categories for diagnostic purposes or by grouping genes, lipids, metabolic species or activity patches into functional categories for further investigation of related metabolic pathways and regulatory or functional networks. Although these techniques can be applied to large-scale data sets in general, the following discussion will primarily focus on applications to microarray data sets and PET images.
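The metagene idea can be illustrated with scikit-learn's NMF applied to a hypothetical nonnegative "expression matrix" built from three known patterns. All data below are synthetic and the rank is chosen to match the construction; this is a sketch of the factorization step only, not of the chapter's full methodology:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# hypothetical expression matrix: 50 genes x 20 samples, generated from
# 3 nonnegative "metagene" patterns plus a little noise
W_true = rng.gamma(2.0, 1.0, size=(50, 3))
H_true = rng.gamma(2.0, 1.0, size=(3, 20))
V = W_true @ H_true + 0.05 * rng.random((50, 20))

# factorize V ~ W H under nonnegativity constraints
model = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(V)   # 50x3: gene loadings on each metagene
H = model.components_        # 3x20: metagene activity per sample

# the factors are strictly nonnegative and reconstruct V closely
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

The columns of W play the role of metagenes; grouping samples by their dominant row of H is the simplest form of the NMF-based categorization mentioned above.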
Subspace Techniques and Biomedical Time Series Analysis
Page: 48-59 (12)
Author: A. M. Tome, A. R. Teixeira and E. W. Lang
DOI: 10.2174/978160805218911101010048
Abstract
The application of subspace techniques to univariate (single-sensor) biomedical time series is presented. Both linear and non-linear methods are described using algebraic models, with the dot product as the most important operation for data manipulation. The covariance/correlation matrices, computed in the space of time-delayed coordinates or in a feature space created by a non-linear mapping, are employed to deduce orthogonal models. Linear methods encompass singular spectrum analysis (SSA), singular value decomposition (SVD) and principal component analysis (PCA). Local SSA is a variant of SSA which can approximate non-linear trajectories of the embedded signal by introducing a clustering step. Non-linear methods encompass kernel principal component analysis (KPCA) and greedy KPCA; the latter is a variant where the subspace model is based on a selected subset of the data only.
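A minimal SSA sketch, assuming the standard embed-decompose-reconstruct pipeline (trajectory matrix of time-delayed coordinates, SVD, grouping, diagonal averaging). The window length, grouping and toy signal below are arbitrary illustrative choices:

```python
import numpy as np

def ssa_components(x, window, groups):
    """Basic singular spectrum analysis of a 1-D series."""
    n = len(x)
    k = n - window + 1
    # trajectory (Hankel) matrix of time-delayed coordinates
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    series = []
    for idx in groups:
        # rank-reduced trajectory matrix for this group of components
        Xg = (U[:, idx] * s[idx]) @ Vt[idx, :]
        # diagonal averaging maps the matrix back to a time series
        rec = np.array([np.mean(Xg[::-1, :].diagonal(j - window + 1))
                        for j in range(n)])
        series.append(rec)
    return series

t = np.arange(500)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(500)

# a sinusoid is captured by a pair of leading singular components
osc, = ssa_components(x, window=100, groups=[[0, 1]])
```

Grouping the leading pair of singular components recovers the oscillation; the remaining components carry mostly noise, which is the basis of SSA denoising.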
Empirical Mode Decomposition Techniques for Biomedical Time Series Analysis
Page: 60-81 (22)
Author: A. Zeiler, R. Faltermeier, M. Bohm, I. R. Keck, A. M. Tome, C. G. Puntonet, A. Brawanski and E. W. Lang
DOI: 10.2174/978160805218911101010060
Abstract
Biomedical signals often represent non-stationary and non-linearly coupled time series resulting from a non-linear superposition of underlying modes which are indicative of the current state of the biomedical system being monitored. Their non-linear coupling and non-stationary nature complicate their interpretation and presume profound expert knowledge and experience. Recently, an empirical nonlinear analysis method for complex, non-stationary time series was pioneered by N. E. Huang. It is commonly referred to as Empirical Mode Decomposition (EMD) and adaptively and locally decomposes such time series into a sum of oscillatory modes, called Intrinsic Mode Functions (IMFs). The associated Hilbert-Huang transform provides exact time-frequency spectra and the related instantaneous amplitudes and energies. Thereby, new and important insights can be gained, as each relevant mode can be extracted in isolation. This provides new insights into the interdependencies of the modes and makes it possible to identify typical signatures when the latter start to behave abnormally. Classical time series analysis methods fail to provide such insights, as they are not prepared to deal with non-stationary and non-linearly coupled signals. This contribution reviews the technique of EMD and related algorithms and briefly discusses recent applications to biomedical problems.
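The sifting procedure at the heart of EMD can be sketched as follows. This is a deliberately simplified version (fixed iteration caps, a crude energy-based stopping rule, no boundary treatment) rather than a faithful reimplementation of Huang's algorithm:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, max_iter=50, tol=1e-6):
    """Extract one intrinsic mode function by iterative sifting."""
    h = x.copy()
    for _ in range(max_iter):
        d = np.diff(h)
        # interior local maxima/minima of the current candidate
        maxima = np.where((np.hstack([d, 0]) < 0) & (np.hstack([0, d]) > 0))[0]
        minima = np.where((np.hstack([d, 0]) > 0) & (np.hstack([0, d]) < 0))[0]
        if len(maxima) < 3 or len(minima) < 3:
            return h, False            # too few extrema: h is a residual trend
        # cubic-spline envelopes through the extrema
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        mean = 0.5 * (upper + lower)
        if np.mean(mean ** 2) < tol * np.mean(h ** 2):
            break                      # envelope mean is negligible: h is an IMF
        h = h - mean                   # subtract the local mean and re-sift
    return h, True

def emd(x, t, max_imfs=6):
    """Decompose x into intrinsic mode functions plus a residual trend."""
    imfs, residual = [], x.copy()
    for _ in range(max_imfs):
        imf, ok = sift(residual, t)
        if not ok:
            break
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

t = np.linspace(0, 4, 1000)
x = np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)
imfs, residual = emd(x, t)
```

By construction the IMFs and the residual sum back exactly to the input, which is the completeness property EMD-based analyses rely on.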
A Comparison between Univariate and Multivariate Supervised Learning for Classification of SPECT Images
Page: 82-94 (13)
Author: F. Segovia, J. M. Gorriz, J. Ramirez, D. Salas-Gonzalez, I. A. Illan, M. Lopez, R. Chaves and P. Padilla
DOI: 10.2174/978160805218911101010082
Abstract
Several approaches to computer aided diagnosis (CAD) systems for the analysis and classification of SPECT images can be found in the literature. Two families of algorithms are used for this purpose. On the one hand, univariate methodologies based on statistical parametric mapping (SPM) are widely used due to the good results obtained with random field theory. SPM consists of a voxelwise statistical test, in most cases a t-test, which allows the image under study to be compared with a set of model images. On the other hand, multivariate approaches such as MANCOVA consider all the voxels in a single image as one observation in order to make inferences about distributed activation effects. These methods increase sensitivity, but they suffer from the so-called small sample size problem. Recent multivariate CAD systems therefore include sophisticated methodologies to reduce the input space and thus mitigate the small sample size problem. This chapter gives a brief overview of the SPM operation and explains some ways of using it for classification tasks. Moreover, a GMM-based method is described to exemplify multivariate approaches. Both methodologies are employed to build a CAD system for Alzheimer’s disease (AD). In spite of the large differences between the two systems, both achieve similar effectiveness rates.
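The voxelwise t-test underlying SPM-style analysis can be sketched on synthetic data. Bonferroni correction is used below as a crude stand-in for the random field theory thresholds the abstract refers to, and all "images", group sizes and effect locations are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical data: 40 flattened SPECT images of 1000 voxels each,
# 20 controls and 20 patients with reduced uptake in voxels 100-199
controls = rng.normal(1.0, 0.1, size=(20, 1000))
patients = rng.normal(1.0, 0.1, size=(20, 1000))
patients[:, 100:200] -= 0.2

# SPM-style massively univariate analysis: one two-sample t-test per voxel
t_map, p_map = stats.ttest_ind(controls, patients, axis=0)

# Bonferroni correction across the 1000 voxels
significant = p_map < 0.05 / 1000
print(significant.sum(), "voxels flagged")
```

The flagged voxels form the statistical map; a classifier built on such maps is one of the routes from SPM to classification that the chapter discusses.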
Functional Brain Image Preprocessing For Computer Aided Diagnosis Systems
Page: 95-106 (12)
Author: R. Chaves, D. Salas-Gonzalez, J. Ramirez, J. M. Gorriz, M. Lopez, I. Alvarez and F. Segovia
DOI: 10.2174/978160805218911101010095
Abstract
In this chapter, classical filtered backprojection and statistical maximum likelihood expectation maximization image reconstruction algorithms are evaluated in terms of image quality and processing delay. Image files were taken from a concurrent study investigating the use of SPECT as a diagnostic tool for the early onset of Alzheimer-type dementia. Filtered backprojection (FBP) image reconstruction needs careful control of the noise, since it tends to amplify high-frequency noise. Pre- and post-filtering improve the quality of FBP reconstruction by removing the strong high-frequency noise present in Single Photon Emission Computed Tomography (SPECT) data and the residual noise after reconstruction. Maximum likelihood expectation maximization (MLEM) yields better image quality than FBP, since a precise statistical model of the emission is used. However, the processing delay is considerable due to its slow convergence. The ordered subsets expectation maximization (OS-EM) method is also explained. OS-EM is found to be a good trade-off between image quality and processing delay, since it converges in a single iteration by partitioning the set of detection elements into about 15-20 subsets. Furthermore, the performance of five different nonlinear least-squares optimization algorithms is compared in the context of the affine registration of SPECT images. The Levenberg-Marquardt algorithm is shown to be very robust, but its convergence rate is considerably lower than that of Gauss-Newton algorithms. Two existing Gauss-Newton procedures are compared to two GN algorithms which include an additional parameter. This parameter allows the descent direction to be changed adaptively and improves performance over the brain registration algorithms most used in the literature.
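The MLEM update itself is compact: each iteration forward-projects the current estimate, compares it with the measured counts, and back-projects the ratio. A toy sketch on a hypothetical random system matrix follows (OS-EM would apply the same multiplicative update to subsets of the rows of A in turn):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy emission problem: y = A x, with A a hypothetical system matrix
n_detectors, n_voxels = 60, 30
A = rng.random((n_detectors, n_voxels))
x_true = rng.random(n_voxels) + 0.5
y = A @ x_true

def kl(y, y_hat):
    """Poisson-likelihood mismatch between measured and projected counts."""
    return np.sum(y_hat - y + y * np.log(y / y_hat))

# MLEM: a multiplicative update that keeps the estimate nonnegative
sens = A.T @ np.ones(n_detectors)      # back-projected sensitivity per voxel
x = np.ones(n_voxels)                  # uniform initial image
for _ in range(100):
    x *= (A.T @ (y / (A @ x))) / sens

print(kl(y, A @ x))
```

The mismatch term decreases monotonically over the iterations, which is the theoretical guarantee that motivates MLEM despite its slow convergence.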
Functional Image Classification Techniques For Early Alzheimer’s Disease Detection
Page: 107-122 (16)
Author: I. Alvarez-Illan, Miriam M. Lopez, J. M. Gorriz, J. Ramirez, F. Segovia, D. Salas-Gonzalez, R. Chaves and C. G. Puntonet
DOI: 10.2174/978160805218911101010107
Abstract
Conventional evaluation of functional image scans often relies on manual reorientation, visual reading and semiquantitative analysis of certain regions of the brain. These steps are time consuming, subjective and prone to error. In this chapter, several feature extraction techniques and classification methods are presented as an automatic alternative for exploring the images with the aim of detecting Alzheimer’s Disease (AD) in its early stage. The huge number of voxels in a typical brain scan makes it necessary to use data reduction and compression techniques, as well as other feature extraction methods, that retain the discriminant information in lower-dimensional feature vectors, thereby alleviating the well-known small sample size problem. The extracted features can subsequently be combined with different classification techniques to define a complete Computer Aided Diagnosis (CAD) system capable of successfully distinguishing between normal controls and AD-affected subjects.
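A minimal sketch of such a pipeline — dimensionality reduction followed by a classifier, evaluated by cross-validation — on hypothetical data in the small-sample regime the abstract describes. The sizes, effect strength and choice of PCA plus a linear SVM are illustrative, not the chapter's specific methods:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# hypothetical stand-in for flattened brain scans: 60 subjects x 2000 voxels,
# far more features than samples (the small sample size problem)
X = rng.normal(size=(60, 2000))
y = np.repeat([0, 1], 30)
X[y == 1, :300] += 1.0       # synthetic group effect in a subset of voxels

# reduce the voxel space before classifying
cad = make_pipeline(PCA(n_components=20), SVC(kernel='linear'))
scores = cross_val_score(cad, X, y, cv=5)
print(scores.mean())
```

Fitting the classifier directly on 2000 voxels with 60 subjects invites overfitting; the projection onto a few components is what makes the classifier viable, which is the point the abstract makes about feature extraction.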
Time-Frequency Analysis of MEG activity in Alzheimer’s Disease
Page: 123-140 (18)
Author: J. Poza and R. Hornero
DOI: 10.2174/978160805218911101010123
Abstract
Alzheimer's Disease (AD) is the most common form of dementia in western countries. Clinical detection is based on a differential diagnosis, whereas a definite confirmation can only be made by examination of brain tissue. Given that AD is a degenerative dementia affecting the cerebral cortex, it is reasonable to think that the analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings could reflect functional and structural deficits. Although EEG background activity in AD has been extensively analysed, its clinical diagnostic value is limited. On the other hand, only a few studies have focused on MEG disease patterns. For these reasons, the spontaneous MEG activity from 20 patients with a diagnosis of probable AD and 21 controls was analysed. Several parameters were calculated from the power spectrum of the Fourier transform, short-time Fourier transform and wavelet transform, in order to obtain a comprehensive description of the time-varying spectral properties. The relative power calculated in conventional EEG frequency bands showed a significant global slowing of the oscillatory MEG activity in AD. Likewise, an overall loss of irregularity in the MEG brain rhythms of AD patients was found with the Shannon, Tsallis and Renyi entropies. Furthermore, the classification accuracy obtained with the classical spectral methods increased when the time-frequency representations were applied. The results, in terms of statistical differences and ability to discriminate between groups, suggest the potential utility of these parameters to describe the cognitive and functional abnormalities of dementia. They can yield complementary information useful in clinical diagnosis and provide further insights into the neurophysiological processes associated with AD.
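Relative band power, one of the spectral parameters mentioned above, can be sketched on a synthetic alpha-dominated trace. The band limits are the conventional EEG ones; the sampling rate, signal and noise level are hypothetical, and a simple Shannon spectral entropy is included as a companion measure:

```python
import numpy as np
from scipy.signal import welch

fs = 256                            # hypothetical sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# synthetic MEG-like trace dominated by a 10 Hz (alpha) rhythm
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

# Welch power spectral density estimate
f, psd = welch(x, fs=fs, nperseg=1024)

bands = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}
total = psd[(f >= 1) & (f < 30)].sum()
rel = {name: psd[(f >= lo) & (f < hi)].sum() / total
       for name, (lo, hi) in bands.items()}
print(rel)

# Shannon entropy of the normalized spectrum: lower for more regular spectra
p = psd / psd.sum()
shannon = -np.sum(p * np.log(p + 1e-12))
```

A "slowed" spectrum of the kind reported in AD would shift relative power from the alpha/beta bands toward delta/theta, which these ratios quantify directly.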
Machine Learning Approach for Myotonic Dystrophy Diagnostic Support from MRI
Page: 141-148 (8)
Author: Alexandre Savio, Maite Garcia-Sebastian, Andone Sistiaga, Darya Chyzhyk, Esther Fernandez, Fermin Moreno, Elsa Fernandez, Manuel Grana, Jorge Villanua and Adolfo Lopez de Munain
DOI: 10.2174/978160805218911101010141
Abstract
In this paper we report the application of a Machine Learning approach to research support in Myotonic Dystrophy (MD) from structural Magnetic Resonance Imaging (sMRI). The approach consists of a feature extraction process based on the results of Voxel Based Morphometry (VBM) analysis of sMRI obtained from a set of patient and control subjects, followed by a classification step performed by Support Vector Machine (SVM) classifiers trained on the features extracted from the data set.
Finding Gold in the Dirt - Biomedical Artifacts in the Light of ICA
Page: 149-156 (8)
Author: I. R. Keck, V. Fischer, C. G. Puntonet, A. M. Tome and E. W. Lang
DOI: 10.2174/978160805218911101010149
Abstract
Artifacts are a common problem in signal processing. They usually consist of strong signals that have to be removed before the data analysis. In this chapter we show how, by applying Independent Component Analysis, artifact signals can be extracted easily from biomedical signals. Focusing on movement artifacts in functional MRI data, we demonstrate that artifact signals may themselves contain important information about the experiment from which the data result. This is the case for the eye movement signal, which can be extracted directly from the functional MRI data. We describe how to extract this signal with state-of-the-art ICA algorithms and show how it can be used to quantify eye movement in an fMRI experiment instead of using a dedicated eye tracker. Finally, we present the FMREyetrack SPM plugin, which allows the user to automatically extract the eye movement information from fMRI data sets.
Are We to Integrate Previous Information into Microarray Analyses? Interpretation of a Lmx1b-Knockout Experiment
Page: 157-170 (14)
Author: Florian Blochl, Anne Rascle, Jurgen Kastner, Ralph Witzgall, Elmar W. Lang and Fabian J. Theis
DOI: 10.2174/978160805218911101010157
Abstract
A general question in the analysis of biological experiments is how to maximize the statistical information present in the data while keeping bias at a minimal level. This can be reformulated as the question of whether to perform differential analysis or only explorative screens. In this contribution we discuss this old paradigm in the context of a differential microarray experiment. The transcription factor Lmx1b is knocked out in a mouse model in order to gain further insight into the gene regulation taking place in Nail-patella syndrome, a disease caused by mutations of this gene. We review several statistical methods and contrast them with supervised learning on the two differential modes and with unsupervised, explorative analysis. Moreover, we propose a novel method for analyzing single clusters by projecting them back onto specific experiments. Our reference is the identification of three well-known targets. We find that by integrating all results we are able to confirm these target genes. Furthermore, hypotheses on further potential target genes are formulated.
Mixed Effects Models for Single-Trial ERP Detection in Noninvasive Brain Computer Interface Design
Page: 171-180 (10)
Author: Yonghong Huang, Deniz Erdogmus, Kenneth Hild II, Misha Pavel and Santosh Mathan
DOI: 10.2174/978160805218911101010171
Abstract
Single-trial evoked response potential detection is a fundamental problem that needs to be solved with high accuracy before noninvasive brain computer interfaces (BCI) can become a widely used practical tool enabling seamless communication with, and control of, a computer and any peripheral devices connected to it. While current BCI prototypes use multi-trial inference with some success to convey the user’s intent to the computer for various applications, the speed of such communication is inherently limited by the number of stimulus repetitions the subject has to go through before one command selection can be transmitted. Consequently, the number of stimulus repetitions (i.e., the number of trials) is inversely proportional to the speed of communication and control that the subject can achieve. In this chapter, we provide a review of our recent work on using mixed effects models, a parametric modeling approach, to statistically model trial responses in electroencephalography in a generative fashion. Emerging from this generative model, we also develop a Fisher kernel that is in turn utilized in the support vector machine framework to obtain a discriminative model for single-trial evoked response potential detection. Our results demonstrate that, across multiple subjects and multiple sessions, the Fisher kernel detector outperforms its likelihood ratio test counterpart based on the generative model, as well as other benchmark classifiers, specifically support vector machines with linear and Gaussian kernels.
Learning Sparse Similarity Functions for Heart Wall Motion Abnormality Detection
Page: 181-190 (10)
Author: Glenn Fung
DOI: 10.2174/978160805218911101010181
Abstract
Coronary heart disease (CHD) is a global epidemic and the leading cause of death worldwide. CHD can be detected by measuring and scoring the regional and global motion of the left ventricle (LV) of the heart. This work describes a novel automatic technique which can detect regional wall motion abnormalities of the LV from echocardiograms. Given a sequence of endocardial contours extracted from LV ultrasound images, the sequence of contours moving through time can be interpreted as a three-dimensional (3D) surface. From the 3D surfaces, we compute several geometry-based features (shape-index values, curvedness, surface normals, etc.) to obtain histogram-based similarity functions that are optimally combined, using a mathematical programming approach, to learn a kernel function designed to classify normal vs. abnormal heart wall motion. In contrast with other state-of-the-art methods, our formulation also generates sparse kernels. Kernel sparsity is directly related to the computational cost of the kernel evaluation, which is an important factor when designing classifiers that are part of a real-time system. Experimental results on a set of echocardiograms collected in routine clinical practice at one hospital demonstrate the potential of the proposed approach.
Using Spatial Diversity in the Estimation of Atrial Fibrillatory Activity from the Electrocardiogram
Page: 191-215 (25)
Author: R. Phlypo, P. Bonizzi, O. Meste and V. Zarzoso
DOI: 10.2174/978160805218911101010191
Abstract
Atrial fibrillation (AF) is the most prevalent sustained cardiac arrhythmia encountered by clinicians. This disorder is characterized by persisting uncoordinated atrial electrical activation, which results in an inefficient atrial mechanical function and long-term risks of stroke. Despite its incidence and its risk of serious complications, the electrophysiological mechanisms causing AF are not yet well understood. In clinical practice, AF is mainly diagnosed on the surface electrocardiogram (ECG), where the atrial and ventricular activities appear as a linear superposition of potential fields. This renders the evaluation of the atrial activity a complex task and calls for the design of suitable signal processing techniques for atrial activity estimation in the ECG. This chapter presents some recent advances in the estimation of atrial activity exploiting the spatial diversity of the standard ECG. We first stress the relation between the electrophysiological activity of the heart and the potential field measured by the cutaneous electrodes, including the standard linear approximation of the bioelectrical field produced by a physiological source. This connection naturally leads to the notions of source topographies and spatial filtering. Given the surface electrode recordings, spatial filters aim to isolate the potential fields of the distinct sources by means of suitable linear combinations of the lead outputs. We focus on how the estimation of spatial filters is typically handled in a blind or semi-blind context. This forms the basis for the presentation of recently proposed algorithms for the estimation of atrial activity. While benefiting from prior information about the atrial activity, the proposed algorithms retain their generality so that they can be used in a wide range of electrocardiogram recordings.
Ultrasound Image Analysis: Methods and Applications
Page: 216-230 (15)
Author: J. Marti, A. Gubern-Merida, J. Massich, A. Oliver, J. C. Vilanova, J. Comet, E. Perez, M. Arzoz and R. Marti
DOI: 10.2174/978160805218911101010216
Abstract
Research on medical image registration and segmentation is currently one of the most active areas in the field of medical image analysis in particular, and in computer vision and pattern recognition in general. This chapter describes the main difficulties, objectives and methodology of three different challenges related to medical ultrasound image analysis: freehand image reconstruction, breast mass detection and multi-modality image registration. We propose novel approaches to solve these particular problems and present an evaluation in terms of quantitative and qualitative analysis using our own image dataset.
Reconstruction and Analysis of Intravascular Ultrasound Sequences
Page: 231-250 (20)
Author: Francesco Ciompi, Carlo Gatta, Oriol Pujol, Oriol Rodriguez-Leor, Josepa Mauri Ferre and Petia Radeva
DOI: 10.2174/978160805218911101010231
Abstract
Atherosclerotic plaque has been identified as one of the most important causes of sudden cardiac failure in patients with no history of heart disease. IntraVascular UltraSound (IVUS) represents a unique technique to study, determine and quantify plaque composition, and thus makes it possible to develop automatic diagnostic and prediction techniques for coronary diagnosis and therapy. However, one of the main problems of image-based studies is their dependence on image brightness and on data mis-registration due to the dynamic system composed of the catheter and the vessel. Hence, the strong dependence of automatic analysis on the gain setting and transmit power of the IVUS console, as well as on vessel motion, makes direct analysis, comparison and follow-up of IVUS studies impossible. To this purpose, a complete framework for data analysis should be considered, focusing on: a) modeling the image acquisition and formation process, b) developing techniques for removing data acquisition artifacts due to the nature of ultrasound reflectance and the motion of coronary vessels, c) developing sophisticated tools for extracting features from radio-frequency data and images, and d) designing robust methods to discover and classify different categories of tissue structures. In this chapter, we overview different methodologies to approach the aforementioned problems and outline possible computer-assisted applications in clinical practice.
Human Body Position Monitoring
Page: 251-268 (18)
Author: Alberto Olivares, J. M. Gorriz, J. Ramirez and Gonzalo Olivares
DOI: 10.2174/978160805218911101010251
Abstract
Human body motion capture is of great utility in the field of medicine since it can help to prevent, diagnose and treat several diseases. Such capture can be performed using systems composed of inertial sensors such as accelerometers and gyroscopes. These sensors are integrated together with processors, batteries and transmitters to form Inertial Measurement Units (IMUs). IMUs are placed on the parts of the patient’s body that are to be monitored. In order to obtain accurate measurements, a calibration process and different filtering algorithms need to be applied in a strategy known as sensor fusion. Through proper signal processing, a good level of performance can be achieved even with low-cost systems. This fact facilitates the widespread use of IMUs in home and hospital scenarios. Telerehabilitation, analysis of Activities of Daily Living (ADL), gait and posture analysis, detection of acute and nocturnal epileptic seizures, motor crisis detection, and the diagnosis and monitoring of sleep disorders are some of the possible applications of human body position monitoring systems.
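Sensor fusion for orientation tracking is often introduced with a complementary filter, which blends the drift-prone integrated gyroscope signal with the noisy but unbiased accelerometer-derived angle. The sketch below uses simulated signals with arbitrary noise, bias and blending values; it illustrates the fusion idea only, not any specific algorithm from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                         # hypothetical IMU sampling rate (Hz)
dt = 1.0 / fs
t = np.arange(0, 10, dt)

# simulated true tilt angle of a body segment (degrees)
angle_true = 20 * np.sin(2 * np.pi * 0.2 * t)

# gyroscope: angular rate plus bias and noise (drifts when integrated)
gyro = np.gradient(angle_true, dt) + 0.5 + rng.normal(0, 1.0, t.size)
# accelerometer-derived angle: unbiased but noisy
acc_angle = angle_true + rng.normal(0, 3.0, t.size)

# complementary filter: trust the gyro short-term, the accelerometer long-term
alpha = 0.98
est = np.zeros_like(t)
est[0] = acc_angle[0]
for k in range(1, t.size):
    est[k] = alpha * (est[k - 1] + gyro[k] * dt) + (1 - alpha) * acc_angle[k]

rmse = np.sqrt(np.mean((est - angle_true) ** 2))
print(rmse)
```

The fused estimate tracks the true angle more closely than either sensor alone: integration alone accumulates the gyro bias, while the raw accelerometer angle carries the full measurement noise.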
Index
Page: 269-271 (3)
Author: Juan M. Gorriz, Elmar W. Lang and Javier Ramirez
DOI: 10.2174/978160805218911101010269
Introduction
Biomedical signal processing is a rapidly expanding field with a wide range of applications, from the construction of artificial limbs and aids for disabilities to the development of sophisticated medical imaging systems. The acquisition and processing of biomedical signals has become increasingly important to the physician. The main reasons for this development are the growing complexity of biomedical examinations, the increasing necessity of comprehensive documentation and the need for automation in order to reduce costs. Analysis of signals by humans has many limitations; computer analysis of these signals could therefore lend objective strength to diagnoses. However, the development of an algorithm for biomedical signal analysis is a significant challenge. This ebook covers biomedical signal processing as used in both therapeutic and diagnostic instrumentation. A number of current research projects are also outlined, with emphasis on intelligent medical image diagnosis. This book should be a valuable reference for researchers, professionals and technical experts working in the field of Biomedical Signal Processing (BSP), including its applications, filtering and registration techniques, and reconstruction and normalization algorithms; for technical experts requiring an understanding of processing and analyzing biomedical signals; and for postgraduate students working on image processing and novel signal processing paradigms applied to BSP.