| Tag | Ind. | Content |
|---|---|---|
| 000 | | 00811camuuu200241 a 4500 |
| 001 | | 000000923018 |
| 005 | | 19990122113958.0 |
| 008 | | 941104s1995 nyua b 001 0 eng |
| 010 | | ▼a 94043390 |
| 020 | | ▼a 0471105880 (pbk.) |
| 040 | | ▼a DLC ▼c DLC ▼d DLC ▼d 244002 |
| 049 | 0 | ▼l 151033839 |
| 050 | 0 0 | ▼a QA76.87 ▼b .M367 1995 |
| 082 | 0 0 | ▼a 006.3 ▼2 20 |
| 090 | | ▼a 006.3 ▼b M423a |
| 100 | 1 | ▼a Masters, Timothy. |
| 245 | 1 0 | ▼a Advanced algorithms for neural networks : ▼b a C++ sourcebook / ▼c Timothy Masters. |
| 260 | | ▼a New York : ▼b Wiley, ▼c 1995. |
| 300 | | ▼a xiv, 431 p. : ▼b ill. ; ▼c 24 cm. + ▼e 1 computer disk (3 1/2 in.). |
| 504 | | ▼a Includes bibliographical references (p. 407-425) and index. |
| 650 | 0 | ▼a Neural networks (Computer science). |
| 650 | 0 | ▼a Computer algorithms. |
| 650 | 0 | ▼a C++ (Computer program language). |
Holdings Information
| No. | Location | Call Number | Accession Number | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Sejong Academic Information Center / Science & Technology Room (5F) | 006.3 M423a | 151033839 (6 loans) | Available for loan | | | |
Contents Information
Book Description
A valuable working resource for anyone who uses neural networks to solve real-world problems
This practical guide contains a wide variety of state-of-the-art algorithms that are useful in the design and implementation of neural networks. All algorithms are presented on both an intuitive and a theoretical level, with complete source code provided on an accompanying disk. Several training algorithms for multiple-layer feedforward networks (MLFN) are featured. The probabilistic neural network is extended to allow separate sigmas for each variable, and even separate sigma vectors for each class. The generalized regression neural network is similarly extended, and a fast second-order training algorithm for all of these models is provided. The book also discusses the recently developed Gram-Charlier neural network and provides important information on its strengths and weaknesses. Readers are shown several proven methods for reducing the dimensionality of the input data.
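The PNN described above rests on Parzen-window density estimation followed by a Bayes decision rule: estimate each class's density at the input point and pick the class with the largest estimate. The following minimal sketch illustrates that idea in C++; it is not the book's own source code, and the function names and the single shared sigma are simplifying assumptions (the book's extensions use a separate sigma per variable, or even per class).

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Parzen-window class-conditional density estimate with a Gaussian kernel.
// 'sigma' is the smoothing width shared by all variables (the book's
// advanced models relax this to one sigma per variable or per class).
double parzen_density(const std::vector<std::vector<double>>& cases,
                      const std::vector<double>& x, double sigma) {
    double sum = 0.0;
    for (const auto& c : cases) {
        double d2 = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double d = x[i] - c[i];
            d2 += d * d;
        }
        sum += std::exp(-d2 / (2.0 * sigma * sigma));
    }
    // The Gaussian normalizing constant is identical for every class,
    // so it cancels in the classification decision and is omitted here.
    return sum / cases.size();
}

// PNN decision rule: choose the class whose estimated density at x is
// largest (equal prior probabilities assumed for simplicity).
int pnn_classify(const std::vector<std::vector<std::vector<double>>>& classes,
                 const std::vector<double>& x, double sigma) {
    int best = 0;
    double best_density = -1.0;
    for (std::size_t k = 0; k < classes.size(); ++k) {
        double d = parzen_density(classes[k], x, sigma);
        if (d > best_density) { best_density = d; best = static_cast<int>(k); }
    }
    return best;
}
```

Because the "training" step is just storing the cases, a PNN trains almost instantly; the cost is shifted to classification time, which is why the book devotes attention to accelerating the basic PNN.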
Advanced Algorithms for Neural Networks also covers:
- Advanced multiple-sigma PNN and GRNN training, including conjugate-gradient optimization based on cross validation
- The Levenberg-Marquardt training algorithm for multiple-layer feedforward networks
- Advanced stochastic optimization, including Cauchy simulated annealing and stochastic smoothing
- Data reduction and orthogonalization via principal components and discriminant functions
- Economical yet powerful validation techniques, including the jackknife, the bootstrap, and cross validation
- A complete state-of-the-art PNN/GRNN program, with both source and executable code
About the Author
Timothy Masters (Author)
He received a Ph.D. in mathematical statistics with a specialization in numerical computing, and has since built an ongoing career as an independent consultant for government and industry. His early research involved automated feature detection in high-altitude photography, with applications including flood and drought prediction, detection of hidden missile silos, and identification of threatening military vehicles. He later worked as a medical researcher, developing algorithms for distinguishing benign from malignant cells in needle biopsies. For the following twelve years he focused primarily on algorithms for evaluating automated financial trading systems. His books on applying predictive models in practice include Practical Neural Network Recipes in C++ (Academic Press, 1993), Signal and Image Processing with Neural Networks (Wiley, 1994), Advanced Algorithms for Neural Networks (Wiley, 1995), Neural, Novel, and Hybrid Algorithms for Time Series Prediction (Wiley, 1995), Assessing and Improving Prediction and Classification (CreateSpace, 2013), and the Korean editions Deep Learning Algorithms in C++ and CUDA C, Vol. 1 and Vol. 3 (Acorn, 2016). The code used in his books can be downloaded from his homepage (TimothyMasters.info).
Table of Contents
1. Deterministic Optimization = 1
   - Traditional Backpropagation = 2
   - An Advantage of Steepest Descent = 8
   - Line Minimization = 8
   - Refining the Interval = 18
   - Incorporating Derivative Information = 26
   - Conjugate Gradient Methods = 31
   - Levenberg-Marquardt Learning = 47
   - Code for Levenberg-Marquardt Learning = 57
2. Stochastic Optimization = 73
   - Overview of Simulated Annealing = 74
   - Primitive Simulated Annealing = 76
   - Refinements = 77
   - Code for Primitive Annealing = 79
   - Conventional and Advanced Simulated Annealing = 83
   - The Details = 92
   - Code for General Simulated Annealing = 96
   - Usage Guidelines = 101
   - Stochastic Smoothing = 103
   - Random Perturbation = 112
   - Code for Perturbing a Point = 113
   - Generating Uniform Random Numbers = 116
   - Chopping, Stacking, and Shuffling = 118
   - Normally Distributed Random Numbers = 125
   - Cauchy Random Vectors = 127
   - A Final Thought = 133
3. Hybrid Training Algorithms = 135
   - Simple Alternation = 136
   - Stochastic Smoothing with Gradient Hints = 144
4. Probabilistic Neural Networks Ⅰ: Introduction = 157
   - Foundations of the PNN = 158
   - PNN versus MLFN versus Traditional Statistics = 161
   - Bayes Classification = 162
   - Parzen's Method of Density Estimation = 163
   - Multivariate Extension of Parzen's Method = 170
   - The Original PNN = 171
   - Computation in the PNN = 173
   - Code for Computing PNN Classification = 176
   - Optimizing Sigma = 177
   - Accelerating the Basic PNN = 190
   - Bayesian Confidence Measures = 192
5. Probabilistic Neural Networks Ⅱ: Advanced Techniques = 193
   - Different Variables Rate Different Sigmas = 194
   - A Continuous Error Criterion = 197
   - Derivatives of the Error Function = 201
   - Incorporating Prior Probabilities = 204
   - Efficient Computation = 205
   - Classes May Deserve Their Own Sigmas, Too = 212
   - Optimizing Multiple-Sigma Models = 220
6. Generalized Regression = 223
   - Review of Ordinary Regression = 224
   - Simple Linear Regression = 226
   - Multiple Regression = 227
   - Polynomial Regression = 230
   - The General Regression Neural Network = 234
   - An Intuitive Approach = 237
   - Donald Specht's GRNN Architecture = 239
   - Computing the Gradient = 240
   - The GRNN in Action = 246
7. The Gram-Charlier Neural Network = 251
   - Structure and Overview of Functionality = 253
   - Motivation = 256
   - Series Expansions of Densities and Distributions = 258
   - Hermite Polynomials and Normal Density Derivatives = 259
   - An Alternative Representation of the Density = 262
   - Computing Hermite Polynomials = 263
   - Computing the Coefficients = 263
   - Finding the Coefficients from a Sample = 266
   - What's Wrong with this Picture? = 270
   - Other Problems = 272
   - Edgeworth's Expansion = 273
   - Mathematics of the Edgeworth Expansion = 275
   - Code for a GCNN with Edgeworth's Modification = 279
   - Comparing the Models = 282
   - Multivariate Versions of the GCNN = 289
8. Dimension Reduction and Orthogonalization = 293
   - Principal Components = 295
   - Scaling and Computation Issues = 300
   - Code for Principal Components = 303
   - Principal Components of Group Centroids = 316
   - Discriminant Functions = 319
9. Assessing Generalization Ability = 335
   - Bias and Variance in Statistical Estimators = 337
   - Notation = 338
   - What Good Are They? = 340
   - Bias and Variance of the Sample Mean = 341
   - The Jackknife and the Bootstrap = 343
   - The Jackknife = 343
   - Code for the Jackknife = 349
   - The Bootstrap = 351
   - Code for the Bootstrap = 355
   - Final Comments on the Jackknife and the Bootstrap = 356
   - Economical Error Estimation = 359
   - Population Error, Apparent Error, and Excess Error = 360
   - Overview of Efficient Error Estimation = 364
   - Cross Validation = 365
   - Code for Cross Validation = 367
   - The Bootstrap Estimate of Excess Error = 369
   - Code for the Bootstrap Method = 373
   - Code for the E0 Estimator = 374
   - Efron's E0 Estimator = 374
   - The E632 Estimator = 376
10. Using the PNN Program = 379
   - Output Mode = 381
   - Network Model = 382
   - Kernel Functions = 383
   - Building the Training Set = 384
   - Learning = 386
   - Confusion Matrices = 387
   - Testing in AUTOASSOCIATION and MAPPING Modes = 389
   - Saving Weights and Execution Results = 389
   - Alphabetical Glossary of Commands = 390
   - Verification of Program Operation = 394
Appendix = 403
   - Disk Contents = 403
   - Hardware and Software Requirements = 405
   - Making a Backup Copy = 405
   - Installing the Disk = 405
Bibliography = 407
Index = 427
