CONTENTS
Preface = v
Chapter 1 : Introduction = 1
1.1 Introduction = 2
1.2 Major Aspects of PDP Models = 9
1.3 What Follows = 14
1.4 Literature Overview = 16
Chapter 2 : Associative Memory = 19
2.1 Introduction = 20
2.2 Representation of Information by Collective States = 21
2.3 Basic Mathematical Model = 22
2.4 Associative Recall = 24
2.4.1 Mathematical Modeling of Associative Recall = 25
2.4.2 The Novelty Component Approach = 28
2.4.3 A Stochastic Approach to Recall in Large-Scale Operations = 29
2.5 Memory as an Adaptive Filter = 31
2.5.1 Biological Basis for Model = 32
2.5.2 Mathematical Model = 34
2.6 The Sutton-Barto Model = 35
2.7 The Heterostat = 38
2.7.1 Mathematical Model = 41
2.7.2 Comparison of the Heterostat with Alternative Neuronal Models = 44
2.8 Literature Overview = 46
Chapter 3 : The Perceptron = 49
3.1 Introduction = 50
3.2 The Structure of a Perceptron = 50
3.3 The Parallel Nature of the Perceptron's Computation Process = 54
3.4 Basic Mathematics of Decision Surfaces = 58
3.4.1 The Linear Machine = 59
3.4.2 Gradient Descent Techniques = 62
3.5 The Perceptron Convergence Theorem = 64
3.6 Scope of the Decision Surface Methodology = 69
3.7 Comparison of the Perceptron with Other Learning Models = 74
3.8 Literature Overview = 82
Chapter 4 : The Delta Rule and Learning by Back Propagation = 85
4.1 Introduction = 86
4.2 The Delta Rule (Widrow-Hoff Rule) = 86
4.2.1 Change of Basis = 87
4.2.2 Gradient Descent in the Ordinary Delta Rule = 89
4.2.3 Extension of the Delta Rule to Statistical Learning = 90
4.3 The Generalized Delta Rule: Learning by Back-propagation = 92
4.4 Applications of Learning by Back-propagation = 97
4.4.1 Bond Rating = 97
4.5 Intractability of Network Learning = 97
4.6 Literature Overview = 99
Chapter 5: Some Learning Paradigms = 101
5.1 Introduction = 102
5.2 The Competitive Learning Paradigm = 102
5.2.1 The Place of Competitive Learning Among Other Learning Paradigms = 104
5.2.2 Architectural Framework = 104
5.3 The Linsker Model: An Example of Competitive Learning = 107
5.3.1 Mathematical Modeling = 107
5.3.2 Principal Component Analysis = 111
5.3.3 Shannon Information and the Infomax Principle = 113
5.4 The Fukushima Models : Another Example of Competitive Learning = 116
5.4.1 Implications of a Modified Hebbian Rule = 116
5.4.2 The Cognitron = 117
5.4.3 The Neocognitron = 119
5.5 The Interactive Activation Paradigm = 120
5.6 Comparison of the Competitive Learning and Interactive Activation Paradigms = 125
5.7 Adaptive Resonance Theory: A Stabilized Version of Competitive Learning = 126
5.8 Comparison of Adaptive Resonance and Learning-by-Back-Propagation Paradigms = 131
5.9 Literature Overview = 134
Chapter 6 : The Hopfield and Hoppensteadt Models = 135
6.1 Introduction = 136
6.2 The Hopfield-Tank Model = 137
6.2.1 Biological Background = 137
6.2.2 Electronic Implementation = 138
6.2.3 Discrete versus Continuous-Valued Neural Elements = 140
6.2.4 The Minimal Energy Concept: The Motivation Behind Network Dynamics = 143
6.2.5 Application of Hopfield Nets: The Traveling Salesman Problem = 143
6.2.6 Application of Hopfield Nets: Problems in Vision = 148
6.2.7 Reduction of Oscillatory Phenomena in Hopfield Nets = 150
6.2.8 Representation of Numbers in Neural Space = 152
6.2.9 Computational and Programming Complexity = 154
6.2.10 Optical Implementation of Neural Networks = 156
6.3 The Hoppensteadt Model = 158
6.3.1 Introduction = 158
6.3.2 A Few Words on Neuron Physiology = 161
6.3.3 VCON: A Voltage-Controlled Oscillator Analog of a Neuron = 162
6.3.4 Clocks and Phase Locking = 164
6.4 Some General Comments on Relaxation Searches = 170
6.5 The Boltzmann Machine Learning Algorithm = 174
6.6 Literature Overview = 176
Glossary = 177
Bibliography = 185
Index = 191