| Tag | Ind. | Content |
|---|---|---|
| 000 | | 00945camuu2200289 a 4500 |
| 001 | | 000000885797 |
| 005 | | 20040628160915 |
| 008 | | 930303s1993 maua b 001 0 eng |
| 010 | | ▼a 93010027 |
| 015 | | ▼a GB93-65994 |
| 020 | | ▼a 0262140543 |
| 040 | | ▼a DLC ▼c DLC ▼d UKM ▼d 211009 |
| 049 | | ▼a KUBA ▼l 111290779 |
| 050 | 0 0 | ▼a QA76.87 ▼b .N5 1993 |
| 082 | 0 0 | ▼a 006.4/2 ▼2 21 |
| 090 | | ▼a 006.42 ▼b N689n |
| 100 | 1 | ▼a Nigrin, Albert. |
| 245 | 1 0 | ▼a Neural networks for pattern recognition / ▼c Albert Nigrin. |
| 260 | | ▼a Cambridge, Mass. : ▼b MIT Press, ▼c c1993. |
| 300 | | ▼a xvii, 413 p. : ▼b ill. ; ▼c 24 cm. |
| 500 | | ▼a "A Bradford book." |
| 504 | | ▼a Includes bibliographical references (p. [399]-405) and index. |
| 650 | 0 | ▼a Neural networks (Computer science) |
| 650 | 0 | ▼a Pattern recognition systems. |
| 650 | 0 | ▼a Self-organizing systems. |
| 653 | 0 | ▼a Image processing ▼a Use of ▼a Computers |
Holdings Information

| No. | Location | Call Number | Registration No. | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Central Library / Stacks, 6F | 006.42 N689n | 111290779 (loaned 1 time) | Available | | | |
| 2 | Sejong Academic Information Center / Science & Technology Room, 5F | 006.42 N689n | 151004233 | Available | | | |
Contents Information

Book Description
Neural Networks for Pattern Recognition takes the pioneering work in artificial neural networks by Stephen Grossberg and his colleagues to a new level. In a simple and accessible way it extends embedding field theory into areas of machine intelligence that have not been clearly dealt with before. Following a tutorial on existing neural networks for pattern classification, Nigrin expands on these networks to present fundamentally new architectures that perform real-time pattern classification of embedded and synonymous patterns and that will aid in tasks such as vision, speech recognition, sensor fusion, and constraint satisfaction.

Nigrin presents the new architectures in two stages. First he presents a network called SONNET 1 that already achieves important properties such as the ability to learn and segment continuously varying input patterns in real time, to process patterns in a context-sensitive fashion, and to learn new patterns without degrading existing categories. He then removes simplifications inherent in SONNET 1 and introduces radically new architectures. These architectures have the power to classify patterns that may have similar meanings but different external appearances (synonyms). They have also been designed to represent patterns in a distributed fashion, in both short-term and long-term memory.
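The building blocks the blurb alludes to are the classical Grossberg-style mechanisms reviewed in chapter 2: shunting short-term memory (STM) and instar learning. As a rough illustration only, here is a minimal Python sketch of those two standard mechanisms; it is not Nigrin's SONNET model, and the function names and parameter values are arbitrary assumptions chosen for demonstration.

```python
import numpy as np

def shunting_stm(I, A=1.0, B=1.0, dt=0.01, steps=2000):
    """Classical shunting on-center off-surround STM dynamics:
    dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{j != i} I_j.
    Activities stay bounded in [0, B] and settle on a normalized
    copy of the input pattern I."""
    x = np.zeros_like(I, dtype=float)
    for _ in range(steps):
        off_surround = I.sum() - I          # inhibition from all other inputs
        x += dt * (-A * x + (B - x) * I - x * off_surround)
    return x

def instar_update(w, x, y, eta=0.5):
    """Instar (gated) learning: a category cell with activity y pulls
    its incoming weight vector w toward the presynaptic pattern x.
    Learning is gated by y, so inactive categories are untouched and
    existing codes are not eroded by unrelated inputs."""
    return w + eta * y * (x - w)

# Toy run: store a pattern in STM, then train one category cell on it.
I = np.array([0.2, 0.9, 0.4, 0.0])
x = shunting_stm(I)
print("STM pattern (normalized):", np.round(x, 3))

w = np.random.default_rng(0).uniform(size=4)   # random initial weights
for _ in range(20):
    w = instar_update(w, x, y=1.0)             # assume this cell won the competition
print("learned instar weights:  ", np.round(w, 3))
```

At equilibrium the STM activities satisfy x_i = B·I_i / (A + Σ_j I_j), so the stored pattern is a contrast-normalized copy of the input; this normalization property is roughly what chapter 2's tutorial covers before the book adds SONNET's classification machinery.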
Table of Contents
Preface = xiii
Acknowledgments = xix
1 Introduction = 1
1.1 Problems with rule-based approaches = 2
1.2 The structure and design of neural networks = 4
1.2.1 A simple neural network for template matching = 4
1.2.2 Definition of neural networks = 11
1.2.3 Design philosophy = 17
1.3 Properties that networks should achieve = 20
1.4 Overview of the book = 35
2 Highlights of Adaptive Resonance Theory = 39
2.1 Spatial mechanisms = 40
2.1.1 Short-term memory equations = 40
2.1.2 Unbiased pattern storage in STM = 43
2.1.3 Outstar learning = 53
2.1.4 Instar learning = 56
2.1.5 Creating a field of instars = 59
2.1.6 Stabilizing instar learning by using feedback = 64
2.1.7 Generalizations = 68
2.1.8 Proof of convergence = 69
2.2 Using spatial mechanisms to code temporal patterns = 70
2.2.1 Storing sequences with decreasing activation = 70
2.2.2 The LTM invariance principle = 73
2.2.3 Using rehearsal to delete classified items = 77
2.2.4 Guidelines for performing context-sensitive recognition = 79
2.2.5 Masking fields = 83
2.2.6 Computer simulations of the masking field = 85
2.3 Extensions to masking fields = 93
3 Classifying Spatial Patterns = 99
3.1 Practical reasons for achieving several properties = 103
3.1.1 The formation of stable category codes = 104
3.1.2 Real-time operation = 107
3.1.3 Learning and classifying embedded patterns = 108
3.2 Normalizing the output of F(1) = 111
3.2.1 Normalizing by length = 111
3.2.2 Normalizing at each F(2) cell = 113
3.3 The structure of the F(2) cell assembly = 115
3.4 STM activation equations after learning = 118
3.5 Overview of learning = 124
3.5.1 Learning isolated and embedded patterns = 124
3.5.2 Forming stable category codes = 126
3.6 The excitatory LTM learning equation = 130
3.7 Analysis of the excitatory LTM learning equation = 132
3.7.1 Simplifications made in the analysis = 133
3.7.2 Competition between weights at the same cell = 135
3.7.3 Competition between weights at different cells = 138
3.7.4 Analyzing different choices for m and η = 139
3.8 Determining which inputs are in an F(2) cell's classified pattern = 142
3.9 General approach to learning inhibitory connections = 145
3.10 Multiplexing two values on each output signal = 147
3.11 Regulating inhibitory learning to allow the network to form stable category codes = 150
3.12 Freezing inhibitory weights to prevent oscillation = 153
3.13 Learning cell sizes = 154
3.14 Modifications to improve learning = 155
3.14.1 Restricting weight decay = 155
3.14.2 Limiting inhibition to uncommitted cells = 157
3.14.3 Resetting uncommitted cells with low activity = 159
3.14.4 Using γ_ji to modulate the learning rate = 162
3.15 Simulations = 166
3.16 Properties not achieved by SONNET 1 = 177
3.16.1 Combining inhibitory signals multiplicatively = 180
3.16.2 Using a separate vigilance for each link = 181
3.16.3 Improving the calculation for I_i^x = 183
4 Classifying Temporal Patterns = 187
4.1 Constraints on the LTM invariance principle = 191
4.2 Implementing the LTM invariance principle with an on-center off-surround circuit = 194
4.3 Modifying the feedback weights = 198
4.4 Resetting classified portions of the input pattern = 201
4.5 Resetting F(1) cells when the field saturates = 204
4.6 Simulations = 209
5 Multilayer Networks and the Use of Attention = 217
5.1 Combining previous circuits to create homologous fields = 219
5.2 Using feedback to prevent categories from forming at inappropriate times = 222
5.3 Using attention to allow the network to be event driven = 223
5.4 Classifying items presented at different rhythms = 227
5.5 Representing different dimensions of input patterns in different fields = 232
5.6 Eliminating the lockstep operation of the network = 235
5.7 Adding an attentional reset mechanism to SONNET = 237
5.8 Modifying the network for recurrent operation = 238
6 Representing Synonyms = 241
6.1 Using multiple representations for each item = 242
6.2 Forcing multiple nodes to learn each pattern = 245
6.3 Current networks do not represent synonyms well = 248
6.4 The use of presynaptic inhibition allows synonyms to be properly handled = 252
6.5 Summary of the first segment in the chapter and a preview of the second segment = 255
6.6 Learning presynaptic inhibition = 256
6.7 Single trial learning of synonymous representations = 257
6.8 Manner by which synonyms become associated = 265
6.9 Multiple links from each F(1) node to each F(2) node = 267
6.10 Summary of the second segment in the chapter and a preview of the third segment = 272
6.11 Presynaptic inhibition for intercell competition = 274
6.12 Inhibitory weights in the presynaptic connections = 279
6.13 Presynaptic inhibition in the feedback links = 280
6.14 Summary of the third segment in the chapter and a preview of the chapter's final sections = 284
6.15 Distortion insensitive recognition = 284
6.16 Classification of spatial patterns = 287
6.17 Creating distributed representations = 289
6.17.1 Hardware needed for local representations = 291
6.17.2 Reducing hardware requirements = 292
6.17.3 STM binding of distributed classifications = 295
6.17.4 LTM binding of distributed classifications = 300
6.17.5 Learning distributed representations = 304
6.18 Reducing the number of connections required = 306
6.18.1 Reducing wasted lateral connections = 307
6.18.2 Transmitting information via shared links = 311
7 Specific Architectures That Use Presynaptic Inhibition = 317
7.1 First step in implementing SONNET 2 = 317
7.1.1 Minimal SONNET 2 architecture = 318
7.1.2 The operation of the network after learning = 322
7.1.3 New formulation for the function I_i^x = 326
7.1.4 Using the network to achieve learning = 329
7.2 Translation and size invariant recognition = 334
7.2.1 Architecture to center objects = 336
7.2.2 Extensions for multiple input dimensions and multiple feature types = 343
7.2.3 Achieving size invariant recognition = 348
7.2.4 Extensions to allow the centering of objects surrounded by extraneous information = 353
7.2.5 Extensions to allow the simultaneous classification of multiple objects = 355
8 Conclusion = 361
A Feedforward Circuits for Normalization and Noise Suppression = 375
B Network Equations Used in the Simulations of Chapter 3 = 381
C Network Equations Used in the Simulations of Chapter 4 = 387
D Glossary = 395
Bibliography = 399
Index = 407
