| Tag | Ind | Content |
|---|---|---|
| 000 | | 00684camuuu200217 a 4500 |
| 001 | | 000000922616 |
| 005 | | 19990106142910.0 |
| 008 | | 920526s1993 maua b 001 0 eng |
| 010 | | ▼a 92020864 |
| 020 | | ▼a 0262071452 |
| 040 | | ▼a DLC ▼c DLC ▼d DLC ▼d 244002 |
| 049 | 0 | ▼l 151004383 |
| 082 | 0 0 | ▼a 006.3 |
| 090 | | ▼a 006.3 ▼b G163n |
| 100 | 1 | ▼a Gallant, Stephen I. |
| 245 | 1 0 | ▼a Neural network learning and expert systems / ▼c Stephen I. Gallant. |
| 260 | | ▼a Cambridge, Mass. : ▼b MIT Press, ▼c c1993. |
| 300 | | ▼a xvi, 365 p. : ▼b ill. ; ▼c 24 cm. |
| 500 | | ▼a "A Bradford Book". |
| 504 | | ▼a Includes bibliographical references (p. [349]-359) and index. |
| 650 | 0 | ▼a Neural networks (Computer science). |
Holdings Information
| No. | Location | Call Number | Registration No. | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Sejong Academic Information Center / Science & Technology Room (5F) | 006.3 G163n | 151004383 (2 loans) | Available | | | |
Contents Information
Book Description
Neural Network Learning and Expert Systems is the first book to present a unified and in-depth development of neural network learning algorithms and neural network expert systems. Especially suitable for students and researchers in computer science, engineering, and psychology, this text and reference provides a systematic development of neural network learning algorithms from a computational perspective, coupled with an extensive exploration of neural network expert systems which shows how the power of neural network learning can be harnessed to generate expert systems automatically.
Features include a comprehensive treatment of the standard learning algorithms (with many proofs), along with much original research on algorithms and expert systems. Additional chapters explore constructive algorithms, introduce computational learning theory, and focus on expert system applications to noisy and redundant problems.
For students there is a large collection of exercises, as well as a series of programming projects that lead to an extensive neural network software package. All of the neural network models examined can be implemented using standard programming languages on a microcomputer.
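As a purely illustrative sketch (not code from the book), the kind of model the text develops can be implemented in a few lines of a standard language. Below is a minimal perceptron learning loop of the sort covered in Chapter 3, training a single threshold cell on the separable Boolean AND function; the function names and ±1 encoding are choices made here, not the book's.

```python
# Minimal perceptron learning sketch (illustrative only, not the book's code).
# Trains a single threshold cell on a linearly separable Boolean function (AND),
# using a constant input to carry the bias weight.

def perceptron_train(examples, epochs=20):
    """examples: list of (inputs, target) pairs with targets in {-1, +1}."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)  # w[0] is the bias weight
    for _ in range(epochs):
        for x, t in examples:
            xb = [1.0] + list(x)  # prepend constant input for the bias
            out = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1
            if out != t:  # misclassified: move weights toward the target
                w = [wi + t * xi for wi, xi in zip(w, xb)]
    return w

def perceptron_predict(w, x):
    xb = [1.0] + list(x)
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1

# AND function with inputs and outputs encoded in {-1, +1}
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w = perceptron_train(data)
```

Because the training set is separable, the perceptron convergence theorem guarantees this loop finds a correct weight vector; the book's pocket algorithm extends the idea to nonseparable data.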
Stephen I. Gallant taught courses in neural network learning and expert systems as Associate Professor of Computer Science at Northeastern University. He is currently a Senior Scientist at HNC, Inc.
Table of Contents
CONTENTS
Foreword = xiii
Ⅰ Basics = 1
1 Introduction and Important Definitions = 3
1.1 Why Connectionist Models? = 3
1.2 The Structure of Connectionist Models = 11
1.3 Two Fundamental Models: Multilayer Perceptrons (MLP's) and Backpropagation Networks (BPN's) = 17
1.4 Gradient Descent = 19
1.5 Historic and Bibliographic Notes = 23
1.6 Exercises = 27
1.7 Programming Project = 29
2 Representation Issues = 31
2.1 Representing Boolean Functions = 31
2.2 Distributed Representations = 39
2.3 Feature Spaces and ISA Relations = 42
2.4 Representing Real-Valued Functions = 48
2.5 Example: Taxtime! = 55
2.6 Exercises = 56
2.7 Programming Projects = 59
Ⅱ Learning Single-Layer Models = 61
3 Perceptron Learning and the Pocket Algorithm = 63
3.1 Perceptron Learning for Separable Sets of Training Examples = 63
3.2 The Pocket Algorithm for Nonseparable Sets of Training Examples = 74
3.3 Khachiyan's Linear Programming Algorithm = 87
3.4 Exercises = 88
3.5 Programming Projects = 92
4 Winner-Take-All Groups or Linear Machines = 95
4.1 Generalizing Single-Cell Models = 96
4.2 Perceptron Learning for Winner-Take-All Groups = 98
4.3 The Pocket Algorithm for Winner-Take-All Groups = 98
4.4 Kessler's Construction, Perceptron Cycling, and the Pocket Algorithm Proof = 99
4.5 Independent Training = 102
4.6 Exercises = 103
4.7 Programming Projects = 103
5 Autoassociators and One-Shot Learning = 105
5.1 Linear Autoassociators and the Outer-Product Training Rule = 105
5.2 Anderson's BSB Model = 109
5.3 Hopfield's Model = 110
5.4 The Traveling Salesman Problem = 112
5.5 The Cohen-Grossberg Theorem = 115
5.6 Kanerva's Model = 116
5.7 Autoassociative Filtering for Feedforward Networks = 117
5.8 Concluding Remarks = 118
5.9 Exercises = 119
5.10 Programming Projects = 121
6 Mean Squared Error (MSE) Algorithm = 123
6.1 Motivation = 123
6.2 MSE Approximations = 123
6.3 The Widrow-Hoff Rule or LMS Algorithm = 125
6.4 ADALINE = 127
6.5 Adaptive Noise Cancellation = 128
6.6 Decision-Directed Learning = 129
6.7 Exercises = 131
6.8 Programming Projects = 131
7 Unsupervised Learning = 133
7.1 Introduction = 133
7.2 k-Means Clustering = 134
7.3 Topology-Preserving Maps = 136
7.4 ART1 = 143
7.5 ART2 = 146
7.6 Using Clustering Algorithms for Supervised Learning = 149
7.7 Exercises = 150
7.8 Programming Projects = 151
Ⅲ Learning in Multilayer Models = 153
8 The Distributed Method and Radial Basis Functions = 155
8.1 Rosenblatt's Approach = 156
8.2 The Distributed Method = 157
8.3 Exercises = 162
8.4 How Many Cells? = 163
8.5 Radial Basis Functions = 165
8.6 A Variant: The Anchor Algorithm = 167
8.7 Scaling, Multiple Outputs, and Parallelism = 168
8.8 Exercises = 170
8.9 Programming Projects = 171
9 Computational Learning Theory and the BRD Algorithm = 173
9.1 Introduction to Computational Learning Theory = 173
9.2 A Learning Algorithm for Probabilistic Bounded Distributed Concepts = 178
9.3 The BRD Theorem = 180
9.4 Noisy Data and Fallback Estimates = 183
9.5 Bounds for Single-Layer Algorithms = 189
9.6 Fitting Data by Limiting the Number of Iterations = 189
9.7 Discussion = 191
9.8 Exercises = 193
9.9 Programming Projects = 193
10 Constructive Algorithms = 195
10.1 The Tower and Pyramid Algorithms = 195
10.2 The Cascade-Correlation Algorithm = 198
10.3 The Tiling Algorithm = 200
10.4 The Upstart Algorithm = 201
10.5 Other Constructive Algorithms and Pruning = 203
10.6 Easy Learning Problems = 205
10.7 Exercises = 208
10.8 Programming Projects = 209
11 Backpropagation = 211
11.1 The Backpropagation Algorithm = 212
11.2 Derivation = 217
11.3 Practical Considerations = 219
11.4 NP-Completeness = 224
11.5 Comments = 225
11.6 Exercises = 227
11.7 Programming Projects = 228
12 Backpropagation: Variations and Applications = 231
12.1 NETtalk = 231
12.2 Backpropagation through Time = 233
12.3 Handwritten Character Recognition = 235
12.4 Robot Manipulator with Excess Degrees of Freedom = 239
12.5 Exercises = 244
12.6 Programming Projects = 244
13 Simulated Annealing and Boltzmann Machines = 245
13.1 Simulated Annealing = 245
13.2 Boltzmann Machines = 246
13.3 Remarks = 251
13.4 Exercises = 251
13.5 Programming Project = 251
Ⅳ Neural Network Expert Systems = 253
14 Expert Systems and Neural Networks = 255
14.1 Expert Systems = 255
14.2 Neural Network Decision Systems = 264
14.3 MACIE, and an Example Problem = 267
14.4 Applicability of Neural Network Expert Systems = 281
14.5 Exercise = 282
14.6 Programming Projects = 282
15 Details of the MACIE System = 283
15.1 Inferencing and Forward Chaining = 283
15.2 Confidence Estimation = 289
15.3 Information Acquisition and Backward Chaining = 290
15.4 Concluding Comment = 291
15.5 Exercises = 291
15.6 Programming Projects = 293
16 Noise, Redundancy, Fault Detection, and Bayesian Decision Theory = 295
16.1 The High Tech Lemonade Corporation's Problem = 295
16.2 The Deep Model and the Noise Model = 298
16.3 Generating the Expert System = 300
16.4 Probabilistic Analysis = 302
16.5 Noisy Single-Pattern Boolean Fault Detection Problems = 305
16.6 Convergence Theorem = 308
16.7 Comments = 310
16.8 Exercises = 310
16.9 Programming Projects = 311
17 Extracting Rules from Networks = 315
17.1 Why Rules? = 315
17.2 What Kind of Rules? = 315
17.3 Inference Justifications = 317
17.4 Rule Sets = 322
17.5 Conventional + Neural Network Expert Systems = 325
17.6 Concluding Remarks = 328
17.7 Exercises = 329
17.8 Programming Projects = 329
Appendix : Representation Comparisons = 331
A.1 DNF Expressions and Polynomial Representability = 331
A.2 Decision Trees = 338
A.3 π - λ Diagrams = 342
A.4 Symmetric Functions and Depth Complexity = 343
A.5 Concluding Remarks = 346
A.6 Exercises = 346
Bibliography = 349
Index = 361
