| Tag | Ind. | Content |
|---|---|---|
| 000 |  | 00973camuuu200253 a 4500 |
| 001 |  | 000000395575 |
| 005 |  | 19970910092203.0 |
| 008 |  | 960508s1995 maua b 001 0 eng |
| 010 |  | ▼a 95006884 |
| 020 |  | ▼a 0792395670 (acid-free paper) |
| 040 |  | ▼a DLC ▼c DLC ▼d OCL |
| 049 |  | ▼a ACCL ▼l 111064303 |
| 050 | 0 0 | ▼a QA76.87 ▼b .A56 1995 |
| 082 | 0 0 | ▼a 006.3 ▼2 20 |
| 090 |  | ▼a 006.3 ▼b A613f |
| 100 | 1 | ▼a Annema, Anne-Johan. |
| 245 | 1 0 | ▼a Feed-forward neural networks : ▼b vector decomposition analysis, modelling, and analog implementation / ▼c by Anne-Johan Annema. |
| 260 |  | ▼a Boston : ▼b Kluwer Academic Publishers, ▼c 1995. |
| 300 |  | ▼a xiii, 238 p. : ▼b ill. ; ▼c 24 cm. |
| 440 | 4 | ▼a The Kluwer international series in engineering and computer science. ▼p Analog circuits and signal processing. |
| 500 |  | ▼a Revision of the author's thesis (Ph. D.). |
| 504 |  | ▼a Includes bibliographical references and index. |
| 650 | 0 | ▼a Neural networks (Computer science). |
Holdings Information
| No. | Location | Call Number | Accession No. | Availability | Due Date |
|---|---|---|---|---|---|
| 1 | Centennial Digital Library / Stacks (Preservation 8) | 006.3 A613f | 111064303 | Available |  |
Contents Information
Book Introduction
Feed-Forward Neural Networks: Vector Decomposition Analysis, Modelling and Analog Implementation presents a novel method for the mathematical analysis of neural networks that learn according to the back-propagation algorithm. The book also discusses some recent alternative algorithms for hardware-implemented perceptron-like neural networks. The method permits a simple analysis of the learning behaviour of neural networks, allowing specifications for their building blocks to be readily obtained.
Starting with the derivation of a specification and ending with its hardware implementation, analog hard-wired, feed-forward neural networks with on-chip back-propagation learning are designed in their entirety. On-chip learning is necessary in circumstances where fixed weight configurations cannot be used. It is also useful for eliminating most of the mismatches and parameter tolerances that occur in hard-wired neural network chips.
Fully analog neural networks have several advantages over other implementations: low chip area, low power consumption, and high-speed operation.
Feed-Forward Neural Networks is an excellent reference and may be used as a text for advanced courses.
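The back-propagation rule referred to above is the standard gradient-descent weight update that the book's vector decomposition method analyzes. For orientation only, here is a minimal NumPy sketch of that rule for a single-layer feed-forward net with a sigmoid activation and MSE cost, with the bias treated as an extra constant input (as in Chapter 3). The function and variable names are illustrative assumptions, not the book's notation, and this is a software emulation rather than the analog on-chip circuitry the book develops.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation; its derivative y*(1-y) appears in the delta term."""
    return 1.0 / (1.0 + np.exp(-x))

def train_single_layer(X, T, eta=0.5, epochs=1000, seed=0):
    """Back-propagation (delta rule) for a single-layer feed-forward net.

    X   : (samples, inputs) patterns; append a constant column to serve
          as the bias input.
    T   : (samples, outputs) target patterns.
    eta : learning rate, whose effect on learning Chapter 3 studies.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = sigmoid(X @ W)              # forward pass: neuron responses
        delta = (T - Y) * Y * (1 - Y)   # output error times f'(x)
        W += eta * X.T @ delta          # gradient descent on the MSE cost
    return W

# Toy usage: learn logical OR; the third input column is the bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [1]], dtype=float)
W = train_single_layer(X, T)
print(np.round(sigmoid(X @ W), 2))      # approaches [0, 1, 1, 1]
```

The toy run converges on logical OR; in the hardware setting the book addresses, the same update must instead be realized by analog weight-adaptation circuitry (Chapter 13), which is where the precision and quantization requirements of Chapters 6 and 7 arise.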
Table of Contents
Foreword = ix
Acknowledgements = xi
1 Introduction = 1
1.1 Neural networks = 1
1.2 Feed-Forward Networks = 6
Architecture of feed-forward neural networks = 6
Applications for feed-forward neural networks = 9
Capabilities of feed-forward neural networks: some theorems = 10
1.3 Back-Propagation = 16
1.4 Realizations of feed-forward networks = 17
1.5 Outline of the book = 20
1.6 References = 22
2 The Vector Decomposition Method = 27
2.1 Introduction = 27
2.2 The basics of the VDM = 29
2.3 Some notations and definitions = 30
2.4 The VDM in more detail = 33
Decomposition basics = 33
The actual vector decomposition = 34
Quantification of vector components = 35
An illustration = 36
The neuron response = 36
2.5 A summary of the VDM = 37
2.6 References = 37
3 Dynamics of Single Layer Nets = 39
3.1 Introduction = 39
3.2 Weight vector adaptation with the VDM = 42
Weight adaptation of one neuron with the VDM = 42
Average adaptation of β_h and β_bias = 43
Adaptation of = 43
3.3 The effect of the learning rate on learning = 46
3.4 The effect of scaling ○ and ○ = 47
3.5 The effect of bias-input signal on learning: simple case = 48
3.6 The effect of bias-input signal on learning: general case = 51
3.7 Conclusions = 55
3.8 References = 56
4 Unipolar Input Signals in Single-Layer Feed-Forward Neural Networks = 57
4.1 Introduction = 57
4.2 Translations towards unipolar input signals = 58
Centre-of-gravity = 59
Minimum training time for fixed learning rate ○ = 59
Minimum training time, including scaling of ○ = 60
Discussion = 61
4.3 References = 61
5 Cross-talk in Single-Layer Feed-Forward Neural Networks = 63
5.1 Introduction = 63
5.2 Coupling between input signals = 64
Analysis of the effect of coupling = 64
5.3 Degradation of learning due to coupling = 68
5.4 Types of coupling = 69
Capacitive coupling = 69
Resistive coupling = 69
Additive coupling = 69
5.5 Calculation & simulation results = 70
5.6 Discussion = 73
5.7 References = 74
6 Precision Requirements for Analog Weight Adaptation Circuitry for Single-Layer Nets = 75
6.1 Introduction = 75
6.2 The cause and the model of analog imprecision = 76
6.3 Estimation of MSE-increment due to imprecision = 77
Basic analysis = 77
The effect on the MSE = 78
An illustration = 79
6.4 The effect on correctly classified examples = 80
6.5 Rule of thumb = 82
The condition for negligibly small effect of parasitic weight adaptation = 83
6.6 Worst-case estimation of precision requirements = 85
6.7 Estimation of minimum weight-storage C size = 86
6.8 Conclusions = 87
6.9 References = 87
Appendix 6.1: Derivation of equation (6.3) = 88
Appendix 6.2: Approximation of error distribution = 89
7 Discretization of Weight Adaptations in Single-Layer Nets = 91
7.1 Introduction = 91
7.2 Basics of discretized weight adaptations = 92
7.3 Performance versus quantization: asymptotical = 93
A simple case = 93
A less simple case = 95
A general case = 97
7.4 Worst-case estimation of quantization steps = 101
A simple case = 101
A less simple case = 103
A general case = 104
7.5 Estimation of absolute minimum weight-storage C size = 105
7.6 Conclusions = 106
7.7 References = 106
8 Learning Behavior and Temporary Minima of Two-Layer Neural Networks = 107
8.1 Introduction = 107
8.2 A summary = 110
The network and the notation = 110
Back-propagation rule = 111
Vector decomposition = 112
Preview of the analyses = 113
8.3 Analysis of temporary minima: introduction = 115
Initial training: a linearized network = 116
Continued training: including network non-linearities = 120
8.4 Rotation-based breaking = 121
Discussion = 123
8.5 Rotation-based breaking: an illustrative example = 127
8.6 Translation-based breaking = 135
8.7 Translation-based breaking: an illustrative example = 138
8.8 Extension towards larger networks = 141
8.9 Conclusions = 144
8.10 References = 144
9 Biases and Unipolar Input Signals for Two-Layer Neural Networks = 147
9.1 Introduction = 147
9.2 Effect of the first layer's bias-input signal on learning = 148
Learning behavior: a recapitulation = 149
First layer's bias input versus adaptation in the β_h direction = 151
Relation between first layer's bias input and temporary minima = 152
Overall conclusion = 154
An illustration = 155
9.3 Effect of the second layer's bias signal on learning = 156
Second layer's bias input versus adaptation in the β_ah direction = 156
Relation between second layer's bias input and temporary minima = 157
Conclusions = 159
An illustration = 160
9.4 Large neural network: a problem and a solution = 161
9.5 Unipolar input signals = 165
9.6 References = 166
10 Cost functions for Two-Layer Neural Networks = 167
10.1 Introduction = 167
10.2 Discussion of "Minkowski-r back-propagation" = 168
Making an "initial guess" = 168
Analysis of the training time required to reach minima = 169
Analysis of 'sticking' time in temporary minima = 170
An illustration = 172
10.3 Switching cost functions = 172
10.4 Classification performances using non-MSE cost functions = 175
10.5 Conclusions = 175
10.6 References = 176
11 Some issues for f'(x) = 177
11.1 Introduction = 177
11.2 Demands on the activation function for single-layer nets = 178
11.3 Demands on the activation function for two-layer nets = 180
12 Feed-forward hardware = 187
12.1 Introduction = 187
12.2 Normalization of signals in the network = 188
12.3 Feed-forward hardware: the synapses = 193
Requirements = 193
The synapse circuit = 196
12.4 Feed-forward hardware: the activation function = 199
12.5 Conclusions = 203
12.6 References = 203
Appendix 12.1: Neural multipliers: overview = 204
Appendix 12.2: Neural activation functions: overview = 210
13 Analog weight adaptation hardware = 215
13.1 Introduction = 215
13.2 Multiplier: the basic idea = 215
13.3 Towards a solution = 218
13.4 The weight-update multiplier = 221
13.5 Simulation results = 222
13.6 Reduction of charge injection = 223
13.7 Conclusions = 228
13.8 References = 228
14 Conclusions = 229
14.1 Introduction = 229
14.2 Summary = 230
14.3 Original contributions = 231
14.4 Recommendations for further research = 231
Index = 235
Nomenclature = 237
