Contributors
Page: iv-iv (1)
Author: Massimo Buscema and Enzo Grossi
DOI: 10.2174/9781608050420109010100iv
The General Philosophy of Artificial Adaptive Systems
Page: 1-4 (4)
Author: Massimo Buscema
DOI: 10.2174/978160805042010901010001
Abstract
This chapter describes the structure of Artificial Adaptive Systems (AAS) and places them within a taxonomy. These systems form part of the vast world of Artificial Intelligence (AI), nowadays more properly called the Artificial Sciences (AS): sciences in which an understanding of natural and/or cultural processes is achieved by recreating those processes through automatic models. In particular, Natural Computation tries to construct automatic models of complex processes from the local interaction of elementary micro-processes, simulating the functioning of the original process. Such models organize themselves in space and time and connect in a non-linear way to the global process they are part of, reproducing its complexity through the dynamic creation of specific, independent local rules that transform themselves in relation to the dynamics of the process. Natural Computation is the alternative to Classical Computation (CC), which has great difficulty handling natural/cultural processes, especially when it tries to impose external rules in order to understand and reproduce them within a formalized artificial model. Within Natural Computation, Artificial Adaptive Systems are theories whose generative algebras are able to create artificial models that simulate natural phenomena. The learning and growth process of such a model is isomorphic to the evolution of the natural process; that is, it is itself an artificial model comparable with the origin of the natural process. These are theories that adopt the "time of development" of the model as a formal model of the "time of the process" itself. Artificial Adaptive Systems comprise Evolutionary Systems and Learning Systems. Artificial Neural Networks are the most widespread and best-known Learning System models in Natural Computation.
A Brief Introduction to Artificial Neural Networks
Page: 5-11 (7)
Author: Massimo Buscema
DOI: 10.2174/978160805042010901010005
Abstract
Artificial Neural Networks (ANNs) are often presented as powerful tools for data processing. Nevertheless, ANNs need a theory and, consequently, an epistemological foundation. Scholars in this field often seem to believe that brain theory is an implicit theory for ANNs; but we still do not have a complete brain theory, and the brain has served only as an "inspiration" model for ANNs. Further, a theory has to explain a set of models, or a set of phenomena, from within. In this sense ANNs still lack a theory. In this chapter the author presents a theory of ANNs. Three levels of increasing complexity are defined, in order to simulate a generative path from the elementary units up to the more complex ones. At the first level the basic concepts are the node and the connection; at the second level the main concept is the network; at the last level the key concept is the artificial organism. Explicit rules govern the conversion from one level to another, and at each level semantic and syntactic components are considered. A theory of ANNs is needed not only for didactic reasons but also for basic research: scholars need to see and consider the similarities and differences among different ANNs from a mathematical, biological and philosophical point of view. A good theory should increase the possibilities for planning, implementing and assessing the necessary fundamental research.
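As a purely illustrative aid to the three-level taxonomy sketched above, the following Python fragment mirrors the node/connection, network and artificial-organism levels in code. It is a minimal sketch under assumed names and structures (node_output, Network, Organism), not Semeion's formalism.

    import numpy as np

    # Hypothetical sketch of the three levels described above: level 1 = nodes
    # and connections, level 2 = a network of nodes, level 3 = an "artificial
    # organism" coupling a network with an environment (here, just a dataset).
    # All class and function names are illustrative assumptions.

    def node_output(inputs, weights, bias):
        # Level 1: one node = weighted sum of its inputs through a squashing function.
        return np.tanh(np.dot(inputs, weights) + bias)

    class Network:
        # Level 2: nodes wired together by weight matrices (one hidden layer).
        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
            self.b1 = np.zeros(n_hidden)
            self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
            self.b2 = np.zeros(n_out)

        def forward(self, x):
            hidden = np.tanh(x @ self.W1 + self.b1)
            return np.tanh(hidden @ self.W2 + self.b2)

    class Organism:
        # Level 3: a network embedded in an "environment" it interacts with.
        def __init__(self, network, dataset):
            self.network, self.dataset = network, dataset

        def act(self):
            return [self.network.forward(x) for x in self.dataset]

    if __name__ == "__main__":
        print(node_output(np.array([0.2, 0.8]), np.array([0.5, -0.3]), 0.1))
        organism = Organism(Network(2, 4, 1), [np.array([0.2, 0.8])])
        print(organism.act())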
A Brief Introduction to Evolutionary Algorithms and the Genetic Doping Algorithm
Page: 12-24 (13)
Author: Massimo Buscema and Massimiliano Capriotti
DOI: 10.2174/978160805042010901010012
Abstract
This chapter describes Evolutionary Algorithms, a family of computer-based problem-solving systems inspired by evolutionary processes. They include Genetic Algorithms, Evolutionary Programming, Evolution Strategies, Classifier Systems, Genetic Programming, and the Genetic Doping Algorithm (GenD), an evolutionary algorithm conceived by Buscema in 1998 at the Semeion Research Centre in Rome, where it is still successfully used and has been further developed. Unlike classic genetic algorithms, the GenD system maintains an inner instability during evolution, showing a continuous evolution and a natural increase in biodiversity as the algorithm progresses. The theory that leads to the definition of the GenD system is outlined. Specific characteristics of GenD, such as the definition of a species-health-aware evolutionary law, the use of genetic operators and the adoption of a structured organisation of individuals (tribes), are described. In order to measure GenD's capabilities, we also investigated benchmark problems such as the travelling salesperson problem, which belongs to the class of NP-complete problems.
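To make the generic evolutionary loop concrete, the following Python sketch evolves tours for a tiny travelling-salesperson instance. It is not GenD (it has no tribes and no species-health-aware evolutionary law); the population size, mutation operator and city coordinates are assumptions made for illustration only.

    import math
    import random

    # A minimal, generic evolutionary loop on a tiny travelling-salesperson
    # instance. This is NOT the GenD algorithm; it only illustrates the
    # selection / variation cycle that the chapter situates GenD within.

    random.seed(42)
    CITIES = [(random.random(), random.random()) for _ in range(12)]

    def tour_length(tour):
        return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def mutate(tour):
        # Reverse a random segment of the tour (a 2-opt style move).
        a, b = sorted(random.sample(range(len(tour)), 2))
        child = tour[:]
        child[a:b] = reversed(child[a:b])
        return child

    def evolve(pop_size=60, generations=300):
        population = [random.sample(range(len(CITIES)), len(CITIES))
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=tour_length)          # rank by fitness
            survivors = population[:pop_size // 2]    # truncation selection
            offspring = [mutate(random.choice(survivors)) for _ in survivors]
            population = survivors + offspring
        return min(population, key=tour_length)

    if __name__ == "__main__":
        best = evolve()
        print("best tour length found:", round(tour_length(best), 3))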
Auto-Contractive Maps, H Function and Maximally Regular Graph - Theory
Page: 25-41 (17)
Author: Massimo Buscema
DOI: 10.2174/978160805042010901010025
Abstract
This chapter presents a new paradigm of Artificial Neural Networks (ANNs): the Auto-Contractive Maps (Auto-CMs). Auto-CMs differ from traditional ANNs in many respects: they start their learning task without a random initialization of their weights, they meet their convergence criterion when all their output nodes become null, their weights matrix develops a data-driven warping of the original Euclidean space, and they show suitable topological properties. Two further algorithms, theoretically linked to Auto-CM, are presented. The first is useful for evaluating the complexity and the topological information of any kind of connected graph: the H Function, an index that measures the global hubness of the graph generated by the Auto-CM weights matrix. The second is the Maximally Regular Graph (MRG), a development of the traditional Minimum Spanning Tree (MST).
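As a rough companion to the graph-theoretic part of the chapter, the Python sketch below builds a Minimum Spanning Tree (the structure the MRG generalizes) from an arbitrary symmetric matrix of association strengths using Prim's algorithm, and reports node degrees as a naive hub indicator. It does not reproduce the Auto-CM learning equations or the H Function; the toy matrix is invented.

    import numpy as np

    # Generic MST extraction from a symmetric "association strength" matrix,
    # the kind of structure the chapter derives from the Auto-CM weights.
    # Stronger association = shorter distance, so distance = 1 / strength.

    def mst_edges(strength):
        n = strength.shape[0]
        dist = 1.0 / (strength + 1e-12)
        np.fill_diagonal(dist, np.inf)
        in_tree = {0}
        edges = []
        while len(in_tree) < n:
            best = min(((i, j) for i in in_tree
                        for j in range(n) if j not in in_tree),
                       key=lambda e: dist[e])
            edges.append(best)
            in_tree.add(best[1])
        return edges

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        m = rng.random((6, 6))
        strength = (m + m.T) / 2          # toy symmetric association matrix
        tree = mst_edges(strength)
        degree = np.bincount(np.array(tree).ravel(), minlength=6)
        print("MST edges:", tree)
        print("node degrees (naive hub indicator, not the H Function):", degree)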
Auto-Contractive Maps, H Function and Maximally Regular Graph - Application
Page: 42-47 (6)
Author: Cathy Helgason, Massimo Buscema and Enzo Grossi
DOI: 10.2174/978160805042010901010042
Abstract
In this chapter we describe a new mapping method that uncovers connectivity traces among variables by means of an artificial adaptive system, the Auto-Contractive Map (Auto-CM), which defines the strength of the association of each variable with all the others in a dataset. After the training phase, the weights matrix of the Auto-CM represents the map of the main connections between the variables. We apply this new approach to explore the possible associations of multiple variables within two different clinical studies: the African American Antiplatelet Stroke Study (AAASPS), a large clinical trial comparing the preventive effect of two different antiplatelet agents against recurrent stroke, myocardial infarction and death, and a smaller study, the Aspirin Response Study (ARS), in which the genetic predisposition to aspirin response, measured ex vivo as inhibition of platelet aggregation, was determined in patients taking aspirin for the prevention of thrombotic vascular occlusion.
J-Net System: A New Paradigm for Artificial Neural Networks Applied to Diagnostic Imaging - Theory
Page: 48-59 (12)
Author: Massimo Buscema
DOI: 10.2174/978160805042010901010048
Abstract
In this chapter we present a new unsupervised artificial adaptive system able to extract features of interest in digital imaging and to reduce image noise while maintaining the spatial resolution of high-contrast structures and revealing hidden morphological features. The new system, named J-Net, belongs to the family of ACM systems developed by the Semeion Research Center. J-Net is able to isolate, in an almost geological way, different brightness layers in the same image. These layers appear to be invisible to the human eye and to other mathematical imaging systems. This ability of the J-Net can have important medical applications.
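Purely to illustrate the notion of brightness layers, the Python sketch below splits a synthetic grayscale image into intensity bands. This naive banding is an assumption made for illustration; it is not the J-Net algorithm, whose equations are given in the chapter.

    import numpy as np

    # Naive illustration of separating an image into distinct brightness
    # layers by simple intensity banding. Not the J-Net (ACM) algorithm.

    def brightness_layers(image, n_layers=4):
        # Split a grayscale image into n_layers masks, one per intensity band.
        edges = np.linspace(image.min(), image.max(), n_layers + 1)
        layers = []
        for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
            upper = image <= hi if i == n_layers - 1 else image < hi
            mask = (image >= lo) & upper
            layers.append(np.where(mask, image, 0))
        return layers

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        toy_image = rng.integers(0, 256, size=(8, 8))   # synthetic 8-bit image
        for k, layer in enumerate(brightness_layers(toy_image)):
            print(f"layer {k}: {np.count_nonzero(layer)} non-zero pixels")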
J-Net System: A New Paradigm for Artificial Neural Networks Applied to Diagnostic Imaging - Application
Page: 60-68 (9)
Author: Enzo Grossi and Massimo Buscema
DOI: 10.2174/978160805042010901010060
Abstract
In this chapter we show how the ability of the J-Net system to extract, on the basis of brightness, the component pictures hidden within an image can have important medical applications. Two examples are shown: the discovery of hidden arterial stenosis in Digital Subtraction Angiography (DSA) and the characterization of lung nodules as benign or malignant.
First example: the popliteal artery is a relatively short vascular segment of the leg, but it is affected by a unique set of pathologic conditions. The most common of these is narrowing, or stenosis, of the artery due to atherosclerosis. The clinical manifestations, imaging appearances, and treatment options associated with these pathologic conditions differ significantly. Consequently, the radiologist should be familiar with them in order to direct imaging for accurate diagnosis and treatment and to prevent loss of limb. A standard DSA in a patient with popliteal artery stenosis showed just one stenosis, while after processing with J-Net a second stenosis, invisible to the human eye, emerged bottom-up. Surgical intervention confirmed that the patient actually had a second stenosis that was not visible on DSA.
Second example: the solitary lung nodule is a common radiological abnormality that is often detected incidentally. Although most solitary pulmonary nodules have benign causes, many represent early malignant lung cancers. Initial evaluation with Computed Tomography often yields non-specific findings, in which case nodules are classified as indeterminate and require further evaluation to exclude malignancy. In uncertain situations, growth-rate assessment still remains the only practical approach.
In order to verify this assumption, we used images published by a group of researchers in a top-quality scientific journal concerning two cases in which a malignant lung cancer could be diagnosed only after two to three years, while at Time 0 the picture did not allow an accurate differential cancer diagnosis. J-Net applied to the Time 0 image was able to show how the image would change its pattern at Time 1, delineating the cancer's pattern of spread that would occur later on. J-Net seems to capture slight changes in brightness intensity due to the initial lymphangiogenic spread driving microscopic peritumoral infiltration in malignant lung nodules.
The Topological Weighted Centroid, and the Semantic of the Physical Space - Theory
Page: 69-78 (10)
Author: Massimo Buscema, Marco Breda and Luigi Catzola
DOI: 10.2174/978160805042010901010069
Abstract
In this chapter, new mathematical objects aimed at describing semantic aspects of sets of non-random points are presented together with their theoretical background. They are: the Topological Weighted Centroid (TWC); the Self Topological Weighted Centroid (STWC); the Proximity Scalar Field; the Gradient of the Scalar Field; the Relative Topological Weighted Centroid (TWCi); Paths from the Arithmetic Centroid to the entities; Paths between entities; and the Scalar Field of the trajectories. These new mathematical quantities are able to describe some important semantic aspects of a set of points defined in a two- or three-dimensional space, and can also be used to analyze the semantics of a set of points defined in a higher-dimensional space. All the proposed quantities will be defined for a set of N points, called entities, in a two-dimensional space, the extension to three dimensions being elementary. As we will see, the proposed mathematical quantities are points, curves or scalar fields.
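As a simplified orientation to the idea of weighting a centroid by the arrangement of the point set, the Python sketch below contrasts the plain arithmetic centroid of a 2-D point cloud with a distance-weighted centroid using an assumed exponential kernel. The kernel and its bandwidth are illustrative choices, not the chapter's TWC definitions.

    import numpy as np

    # Illustrative contrast between the arithmetic centroid and a generic
    # distance-weighted centroid of N entities in 2-D. The exponential kernel
    # and bandwidth below are assumptions, not the TWC formulas.

    def arithmetic_centroid(points):
        return points.mean(axis=0)

    def weighted_centroid(points, bandwidth=1.0):
        # Weight each entity by exp(-d / bandwidth), where d is its distance
        # to the arithmetic centroid, then renormalize the weights.
        c0 = arithmetic_centroid(points)
        d = np.linalg.norm(points - c0, axis=1)
        w = np.exp(-d / bandwidth)
        w /= w.sum()
        return (points * w[:, None]).sum(axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        entities = rng.normal(size=(20, 2))        # N entities in 2-D space
        print("arithmetic centroid:", arithmetic_centroid(entities))
        print("distance-weighted centroid:", weighted_centroid(entities))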
The Topological Weighted Centroid and the Semantic of the Physical Space - Application
Page: 79-89 (11)
Author: Enzo Grossi, Massimo Buscema and Tom Jefferson
DOI: 10.2174/978160805042010901010079
Abstract
In this chapter, several application examples derived from the literature and from the real world show how new elementary mathematical objects (the Topological Weighted Centroid (TWC); the Self Topological Weighted Centroid (STWC); the Proximity Scalar Field; the Gradient of the Scalar Field; the Relative Topological Weighted Centroid (TWCi); Paths from the Arithmetic Centroid to the entities; Paths between entities; and the Scalar Field of the trajectories) may help decision makers in situations characterized by a limited amount of information, and how the mathematics of complex systems can improve the level of accuracy obtained with classical statistics. In particular, the TWC proposes itself as a powerful method to identify the source of an epidemic spread. The impressive results obtained in the examples of the Russian influenza spreading in Sweden in 1889 and of the cholera spreading in London in 1854 are consistent with the idea that the spread of infectious disease is not random but follows a progression based on inherent, as yet undiscovered, mathematical laws grounded in probability density functions. These methods, which require further field evaluation and validation, could provide an additional powerful tool for the investigation of the early stages of an epidemic, and constitute the basis of new simulation methods for understanding the process through which a disease spreads.
IFAST: Implicit Function as Squashing Time for EEG Analysis - Theory
Page: 90-103 (14)
Author: Massimo Buscema, Paolo Maria Rossini, Enzo Grossi and Claudio Babiloni
DOI: 10.2174/978160805042010901010090
Abstract
This chapter presents the innovative use of special types of Artificial Neural Networks assembled in a novel methodology capable of compressing the temporal sequence of EEG data into spatial invariants. The spatial content of the EEG voltage recorded from 19 channels over 60 seconds is extracted by a stepwise procedure using ANNs (SEMEION©). The core of the procedure is that the ANNs do not classify individuals by directly using the EEG data as input. Rather, the inputs for the classification are the weights of the connections within a non-linear auto-associative ANN trained to generate the recorded EEG data. These connection weights represent an optimal model of the peculiar spatial features of the EEG patterns at the scalp surface. The classification based on these weights is then performed by a supervised ANN. Half of the EEG database is used for ANN training and the remaining half serves for the automatic classification phase (testing). The best results in distinguishing mild Alzheimer's disease patients from Mild Cognitive Impairment patients were equal to 92.33%. The comparative result obtained with the best method so far described in the literature, based on blind source separation and Wavelet pre-processing, was 80.43% (p < 0.001). These results confirm the working hypothesis and represent the basis for research aimed at integrating the spatial and temporal information content of the EEG.
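The overall pipeline (train an auto-associative network per subject, use its connection weights as that subject's feature vector, then classify) can be sketched schematically as follows. Network size, learning rate, the nearest-centroid classifier and the surrogate data are assumptions for illustration, not the Semeion implementation.

    import numpy as np

    # Schematic IFAST-style pipeline: compress each subject's EEG
    # (time samples x channels) into the weights of a small auto-associative
    # network, then classify subjects using those weights as features.

    def autoassociative_weights(eeg, n_hidden=4, lr=0.05, epochs=200, seed=0):
        # Train x -> tanh(x W1) W2 -> x (reconstruction); return the weights.
        rng = np.random.default_rng(seed)
        n_ch = eeg.shape[1]
        W1 = rng.normal(scale=0.1, size=(n_ch, n_hidden))
        W2 = rng.normal(scale=0.1, size=(n_hidden, n_ch))
        for _ in range(epochs):
            h = np.tanh(eeg @ W1)
            err = h @ W2 - eeg                      # reconstruction error
            grad_W2 = h.T @ err / len(eeg)
            grad_h = err @ W2.T * (1 - h ** 2)      # backprop through tanh
            grad_W1 = eeg.T @ grad_h / len(eeg)
            W1 -= lr * grad_W1
            W2 -= lr * grad_W2
        return np.concatenate([W1.ravel(), W2.ravel()])   # "spatial invariant"

    def nearest_centroid_classify(train_X, train_y, test_X):
        centroids = {c: train_X[train_y == c].mean(axis=0)
                     for c in np.unique(train_y)}
        return np.array([min(centroids,
                             key=lambda c: np.linalg.norm(x - centroids[c]))
                         for x in test_X])

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Toy surrogate data: 20 "subjects", 200 time samples, 19 channels.
        subjects = [rng.normal(size=(200, 19)) for _ in range(20)]
        labels = np.tile([0, 1], 10)
        features = np.array([autoassociative_weights(s) for s in subjects])
        half = len(features) // 2                   # half train, half test
        preds = nearest_centroid_classify(features[:half], labels[:half],
                                          features[half:])
        print("predicted classes for the test half:", preds)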
IFAST: Implicit Function as Squashing Time for EEG Analysis - Application
Page: 104-113 (10)
Author: Paolo Maria Rossini, Massimo Buscema and Enzo Grossi
DOI: 10.2174/978160805042010901010104
Abstract
It has been shown that a new procedure (Implicit Function as Squashing Time, IFAST), based on Artificial Neural Networks (ANNs), is able to compress eyes-closed resting electroencephalographic (EEG) data into spatial invariants of the instantaneous voltage distributions for the automatic classification of mild cognitive impairment (MCI) and Alzheimer's disease (AD) subjects, with a classification accuracy for individual subjects higher than 92%. In this chapter the method is applied to distinguish individual normal elderly (Nold) from Mild Cognitive Impairment (MCI) subjects, an important issue for the screening of large populations at high risk of AD. Eyes-closed resting EEG data (10-20 electrode montage) were recorded in 171 Nold and 115 amnesic MCI subjects. The inputs for the classification by IFAST were the weights of the connections within a non-linear auto-associative ANN trained to generate the instantaneous voltage distributions of 60-s artifact-free EEG data. The most relevant features were selected and, at the same time, the dataset was split into two halves for the final binary classification (training and testing) performed by a supervised ANN. The classification of the individual Nold and MCI subjects reached 95.87% sensitivity and 91.06% specificity (93.46% accuracy). These results indicate that IFAST can reliably distinguish the eyes-closed resting EEG of individual Nold and MCI subjects, and may be used for large-scale periodic screening of populations at risk of AD and for personalized care.
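For readers less familiar with the reported figures, sensitivity, specificity and accuracy follow directly from the confusion matrix; the short Python sketch below uses invented counts, not the study's data.

    # Definitions behind the figures quoted above (sensitivity, specificity,
    # accuracy), computed from a confusion matrix. The counts here are
    # invented for illustration and are NOT the counts of the MCI/Nold study.

    def diagnostic_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)          # true positive rate
        specificity = tn / (tn + fp)          # true negative rate
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    if __name__ == "__main__":
        sens, spec, acc = diagnostic_metrics(tp=55, fn=3, tn=80, fp=7)
        print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")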
Index
Page: 114-115 (2)
Author: Massimo Buscema and Enzo Grossi
DOI: 10.2174/978160805042010901010114
Introduction
This eBook covers the emerging and most important theories underlying artificial intelligence applications in a variety of medical problems. It is written for physicians, researchers, engineers, statisticians and advanced students who wish to increase their familiarity with different topics of modern mathematics related to predictive medicine. Both theoretical and applied aspects are presented, with many illustrative mathematical explanations and worked-out examples. The book features contributions by nine noted experts from leading medical centers worldwide, which should prove to be of considerable interest to readers.