CONTENTS
Preface = xi
1 Why build neural networks in analogue VLSI? = 1
1.1 Introduction = 1
1.2 Hopfield memories - the first generation of neural network VLSI = 1
1.3 Pattern classification using neural networks = 4
1.3.1 Single-layer networks = 5
1.3.2 Multi-layer perceptrons = 8
1.3.3 Conclusion = 10
1.4 Why build neural networks in silicon? = 11
1.5 Computational requirement = 13
1.5.1 Digital or analogue? = 13
2 Neural VLSI - a review = 15
2.1 Introduction = 15
2.2 MOSFET equations - a crash course = 15
2.3 Digital accelerators = 19
2.4 Op-amps and resistors - a final look = 21
2.5 Subthreshold circuits for neural networks = 22
2.6 Analogue/digital combinations = 24
2.7 MOS transconductance multiplier = 25
2.8 MOSFET analogue multiplier = 26
2.9 Imprecise low-area 'multiplier' = 27
2.10 Analogue, programmable - Intel Electronically-Trainable Artificial Neural Network (ETANN) chip = 27
2.11 Conclusion = 28
3 Analogue synaptic weight storage = 29
3.1 Introduction = 29
3.2 Dynamic weight storage = 29
3.3 MNOS (Metal Nitride Oxide Silicon) networks = 30
3.4 Floating-gate technology = 33
3.5 Amorphous silicon (a-Si) synapses = 35
3.5.1 Forming at higher temperatures = 36
3.5.2 Deposition of metal during a-Si growth = 36
3.5.3 Investigation of the forming process = 37
3.5.4 Programming technology = 37
4 The pulse stream technique = 38
4.1 Introduction = 38
4.2 Pulse encoding of information = 39
4.2.1 Pulse amplitude modulation = 41
4.2.2 Pulse width modulation = 41
4.2.3 Pulse frequency modulation = 43
4.2.4 Phase or delay modulation = 43
4.2.5 Noise, robustness, accuracy and speed = 43
4.3 Pulse stream arithmetic - addition and multiplication = 44
4.3.1 Addition of pulse stream signals = 44
4.3.2 Multiplication of pulse stream signals = 47
4.3.3 Interfacing to addition = 49
4.4 Pulse stream communication = 49
4.4.1 Asynchronous intercommunication using pulse time information = 51
4.5 Conclusions = 53
5 Pulse stream case studies = 54
5.1 Overall introduction to case studies = 54
5.1.1 Introduction - Edinburgh SADMANN/EPSILON work = 54
5.2 The EPSILON (Edinburgh Pulse-Stream Implementation of a Learning-Oriented Network) chip = 55
5.3 Process invariant summation and multiplication - the synapse = 55
5.3.1 The transconductance multiplier = 56
5.3.2 A synapse based on distributed feedback = 58
5.3.3 The feedback operational amplifier = 61
5.3.4 A voltage integrator = 61
5.3.5 The complete system = 63
5.4 Pulse frequency modulation neuron = 64
5.4.1 A pulse stream neuron with electrically adjustable gain = 66
5.5 Pulse width modulation neuron = 67
5.6 Switched-capacitor design = 69
5.6.1 Weight linearity = 70
5.6.2 Weight storage time = 70
5.6.3 Accuracy of computation = 71
5.7 Per-pulse computation = 71
5.7.1 Design overview = 72
5.7.2 Input stage = 73
5.7.3 Synapse = 73
5.7.4 Summation neuron = 74
5.7.5 Sigmoid function = 75
5.7.6 Pulse regeneration = 75
5.7.7 SPICE simulation = 75
5.7.8 Results from test chips = 76
5.7.9 Synapse linearity = 78
5.7.10 Input sample and hold = 78
5.7.11 Sigmoid transfer function = 81
5.7.12 Output pulse stream generation = 82
5.7.13 Weight precision = 83
5.7.14 Weight update = 84
5.7.15 Per-pulse computation - summary = 84
5.8 EPSILON - The chosen neuron/synapse cells, and results = 85
5.8.1 The EPSILON design = 86
5.8.2 Synapse = 87
5.8.3 Neurons = 88
5.8.4 EPSILON specification = 90
5.8.5 Application - vowel classification = 91
5.9 Conclusions = 92
6 Application examples = 94
6.1 Introduction = 94
6.2 Real-time speech recognition = 94
6.3 Applications of neural VLSI = 96
6.4 Applications of neural VLSI - dedicated systems = 96
6.4.1 Path planning = 98
6.4.2 Localization = 99
6.4.3 Obstacle detection/avoidance = 101
6.4.4 Conclusion = 102
6.5 Applications of neural VLSI - hardware co-processors = 102
6.6 Applications of neural VLSI - embedded neural systems = 103
6.7 Conclusion = 103
7 The future = 104
7.1 Introduction = 104
7.2 Hardware learning with multi-layer perceptrons = 105
7.3 The top-down approach: Virtual Targets = 106
7.3.1 The 'Virtual Targets' method in an I:J:K MLP network = 107
7.3.2 Experimental results = 108
7.3.3 Implementation = 115
7.4 The bottom-up approach: weight perturbation = 116
7.5 Test problem = 117
7.6 Weight perturbation for hardware learning = 119
7.7 Back-propagation revisited (for the final time?) = 121
7.8 Conclusion = 124
7.9 Noisy synaptic arithmetic - an analysis = 125
7.9.1 Mathematical predictions = 126
7.9.2 Simulations = 127
7.9.3 Prediction/verification = 129
7.9.4 Generalization ability = 129
7.9.5 Learning trajectory = 132
7.10 Noise in training - some conclusions = 133
7.11 On-chip learning - conclusion = 134
References = 136
Index = 142