
Detailed Information

Foundations of neural networks (checked out 3 times)

Material Type
Book
Personal Author
Khanna, Tarun, 1968-
Title / Statement of Responsibility
Foundations of neural networks / Tarun Khanna.
Publication
Reading, Mass. : Addison-Wesley, c1990.
Physical Description
xii, 196 p. : ill. ; 24 cm.
Series
Addison-Wesley series in new horizons in technology.
ISBN
0201500361
Bibliography Note
Includes bibliographical references (p. [185]-190).
General Subjects
Neural networks (Computer science). Neural computers.
000 00000cam u2200205 a 4500
001 000000089643
005 20250825131128
008 890710s1990 maua b 001 0 eng
010 ▼a 89017597
020 ▼a 0201500361
040 ▼a DLC ▼c DLC ▼d DLC ▼d 211009
049 1 ▼l 421112127 ▼f 과개
050 0 0 ▼a QA76.87 ▼b .K49 1990
082 0 0 ▼a 006.3 ▼2 20
090 ▼a 006.3 ▼b K45f
100 1 0 ▼a Khanna, Tarun, ▼d 1968- ▼0 AUTH(211009)19625.
245 1 0 ▼a Foundations of neural networks / ▼c Tarun Khanna.
260 0 ▼a Reading, Mass. : ▼b Addison-Wesley, ▼c c1990.
300 ▼a xii, 196 p. : ▼b ill. ; ▼c 24 cm.
440 0 ▼a Addison-Wesley series in new horizons in technology.
504 ▼a Includes bibliographical references (p. [185]-190).
650 0 ▼a Neural networks (Computer science).
650 0 ▼a Neural computers.

Holdings Information

No. 1 | Location: Science Library / Sci-Info (2nd-floor stacks) | Call Number: 006.3 K45f | Registration No.: 121163042 | Status: Available for loan
No. 2 | Location: Science Library / Sci-Info (2nd-floor stacks) | Call Number: 006.3 K45f | Registration No.: 421112127 (checked out 3 times) | Status: Available for loan

Contents Information

Book Description

A representative selection of recent literature on neural networks. Intended for the uninitiated with an interest in the sciences.


Source: Aladin

About the Author

Tarun Khanna (Author)

He is the Jorge Paulo Lemann Professor at Harvard Business School. He works with entrepreneurs, companies, and non-governmental organizations in emerging-market countries around the world, was named a Young Global Leader in economics by the World Economic Forum, and focuses his research on high-growth economies such as India and China.

Source: Aladin

Table of Contents


CONTENTS
Preface = v
Chapter 1 : Introduction = 1
 1.1 Introduction = 2
 1.2 Major Aspects of PDP Models = 9
 1.3 What Follows = 14
 1.4 Literature Overview = 16
Chapter 2 : Associative Memory = 19
 2.1 Introduction = 20
 2.2 Representation of Information by Collective States = 21
 2.3 Basic Mathematical Model = 22
 2.4 Associative Recall = 24
  2.4.1 Mathematical Modeling of Associative Recall = 25
  2.4.2 The Novelty Component Approach = 28
  2.4.3 A Stochastic Approach to Recall in Large-Scale Operations = 29
 2.5 Memory as an Adaptive Filter = 31
   2.5.1 Biological Basis for Model = 32
   2.5.2 Mathematical Model = 34
 2.6 The Sutton-Barto Model = 35
 2.7 The Heterostat = 38
  2.7.1 Mathematical Model = 41
  2.7.2 Comparison of the Heterostat with Alternative Neuronal Models = 44
 2.8 Literature Overview = 46
Chapter 3 : The Perceptron = 49
 3.1 Introduction = 50
 3.2 The Structure of a Perceptron = 50
 3.3 The Parallel Nature of the Perceptron's Computation Process = 54
 3.4 Basic Mathematics of Decision Surfaces = 58
  3.4.1 The Linear Machine = 59
  3.4.2 Gradient Descent Techniques = 62
 3.5 The Perceptron Convergence Theorem = 64
 3.6 Scope of the Decision Surface Methodology = 69
 3.7 Comparison of the Perceptron with Other Learning Models = 74
 3.8 Literature Overview = 82
Chapter 4 : The Delta Rule and Learning by Back Propagation = 85
 4.1 Introduction = 86
 4.2 The Delta Rule (Widrow-Hoff Rule) = 86
  4.2.1 Change of Basis = 87
  4.2.2 Gradient Descent in the Ordinary Delta Rule = 89
  4.2.3 Extension of the Delta Rule to Statistical Learning = 90
 4.3 The Generalized Delta Rule: Learning by Back-propagation = 92
 4.4 Applications of Learning by Back-propagation = 97
  4.4.1 Bond Rating = 97
 4.5 Intractability of Network Learning = 97
 4.6 Literature Overview = 99
Chapter 5 : Some Learning Paradigms = 101
 5.1 Introduction = 102
 5.2 The Competitive Learning Paradigm = 102
   5.2.1 The Place of Competitive Learning Among Other Learning Paradigms = 104
   5.2.2 Architectural Framework = 104
 5.3 The Linsker Model: An Example of Competitive Learning = 107
   5.3.1 Mathematical Modeling = 107
   5.3.2 Principal Component Analysis = 111
   5.3.3 Shannon Information and the Infomax Principle = 113
 5.4 The Fukushima Models : Another Example of Competitive Learning = 116
   5.4.1 Implications of a Modified Hebbian Rule = 116
   5.4.2 The Cognitron = 117
   5.4.3 The Neocognitron = 119
 5.5 The Interactive Activation Paradigm = 120
 5.6 Comparison of the Competitive Learning and Interactive Activation Paradigms = 125
 5.7 Adaptive Resonance Theory: A Stabilized Version of Competitive Learning = 126
 5.8 Comparison of Adaptive Resonance and Learning-by-Back-Propagation Paradigms = 131
 5.9 Literature Overview = 134
Chapter 6 : The Hopfield and Hoppensteadt Models = 135
 6.1 Introduction = 136
 6.2 The Hopfield-Tank Model = 137
  6.2.1 Biological Background = 137
  6.2.2 Electronic Implementation = 138
  6.2.3 Discrete versus Continuous-Valued Neural Elements = 140
  6.2.4 The Minimal Energy Concept: The Motivation Behind Network Dynamics = 143
   6.2.5 Application of Hopfield Nets - The Traveling Salesman Problem = 143
   6.2.6 Application of Hopfield Nets - Problems in Vision = 148
  6.2.7 Reduction of Oscillatory Phenomena in Hopfield Nets = 150
  6.2.8 Representation of Numbers in Neural Space = 152
  6.2.9 Computational and Programming Complexity = 154
   6.2.10 Optical Implementation of Neural Networks = 156
 6.3 The Hoppensteadt Model = 158
  6.3.1 Introduction = 158
  6.3.2 A Few Words on Neuron Physiology = 161
  6.3.3 VCON: A Voltage-Controlled Oscillator Analog of a Neuron = 162
  6.3.4 Clocks and Phase Locking = 164
 6.4 Some General Comments on Relaxation Searches = 170
 6.5 The Boltzmann Machine Learning Algorithm = 174
 6.6 Literature Overview = 176
Glossary = 177
Bibliography = 185
Index = 191


New Arrivals in Related Fields

Negro, Alessandro (2026)
Dyer-Witheford, Nick (2026)