Item Details

Analogue neural VLSI : a pulse stream approach (borrowed 2 times)

Material type
Monograph
Personal authors
Murray, Alan F.; Tarassenko, Lionel.
Title / statement of responsibility
Analogue neural VLSI : a pulse stream approach / Alan Murray and Lionel Tarassenko.
Publication
London ; New York, NY : Chapman & Hall, c1994.
Physical description
xiii, 147 p. : ill. ; 24 cm.
Series
Chapman & Hall neural computing series ; 2.
ISBN
0412450607 (alk. paper)
Bibliographic note
Includes bibliographical references (p. 136-141) and index.
Subjects
Integrated circuits -- Very large scale integration; Neural networks (Computer science).
000 00947camuuu200265 a 4500
001 000000023907
005 19980603140140.0
008 930819s1994 enka b 001 0 eng
010 ▼a 93032970
020 ▼a 0412450607 (alk. paper)
040 ▼a DLC ▼c DLC ▼d DLC
049 1 ▼l 111025131 ▼l 121001795 ▼f 과학 ▼l 121002994 ▼f 과학
050 0 0 ▼a QA76.87 ▼b .M87 1994
082 0 0 ▼a 006.3 ▼2 20
090 ▼a 006.3 ▼b M981a
100 1 ▼a Murray, Alan F.
245 1 0 ▼a Analogue neural VLSI : ▼b a pulse stream approach / ▼c Alan Murray and Lionel Tarassenko.
260 ▼a London ; ▼a New York, NY : ▼b Chapman & Hall , ▼c c1994.
300 ▼a xiii, 147 p. : ▼b ill. ; ▼c 24 cm.
440 0 ▼a Chapman & Hall neural computing series ; ▼v 2.
504 ▼a Includes bibliographical references (p. 136-141) and index.
650 0 ▼a Integrated circuits ▼x Very large scale integration.
650 0 ▼a Neural networks (Computer science).
700 1 0 ▼a Tarassenko, Lionel.
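The MARC fields above follow the catalogue's display convention: a three-digit tag, optional indicator digits, then subfields introduced by the `▼` delimiter. A minimal parsing sketch (the `▼x` delimiter layout is an assumption based on the record shown here, not a general MARC parser):

```python
def parse_display_field(line):
    """Split a catalogue display line such as
    '245 1 0 ▼a Analogue neural VLSI : ▼b a pulse stream approach /'
    into (tag, indicators, [(subfield_code, value), ...]).
    Assumes the '▼x' subfield-delimiter convention shown in the record above."""
    head, _, rest = line.partition("▼")          # tag + indicators | first subfield onward
    parts = head.split()
    tag, indicators = parts[0], parts[1:]        # e.g. "245", ["1", "0"]
    subfields = []
    for chunk in ("▼" + rest).split("▼")[1:]:    # each chunk: code letter + value text
        code, value = chunk[0], chunk[1:].strip()
        subfields.append((code, value))
    return tag, indicators, subfields

tag, ind, subs = parse_display_field(
    "245 1 0 ▼a Analogue neural VLSI : ▼b a pulse stream approach / "
    "▼c Alan Murray and Lionel Tarassenko."
)
# tag == "245", ind == ["1", "0"], subs[0] == ("a", "Analogue neural VLSI :")
```

Fields without indicators (such as the 020 ISBN field above) simply yield an empty indicator list.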

No.  Location                                 Call number   Accession number               Status      Due date
1    Main Library / Stacks, 6F                006.3 M981a   111025131                      Available   -
2    Science Library / Sci-Info (2F stacks)   006.3 M981a   121001795 (borrowed 2 times)   Available   -
3    Science Library / Sci-Info (2F stacks)   006.3 M981a   121002994                      Available   -

CONTENTS
Preface = xi
1. Why build neural networks in analogue VLSI? = 1
 1.1 Introduction = 1
 1.2 Hopfield memories - the first generation of neural network VLSI = 1
 1.3 Pattern classification using neural networks = 4
  1.3.1 Single-layer networks = 5
  1.3.2 Multi-layer perceptrons = 8
  1.3.3 Conclusion = 10
 1.4 Why build neural networks in silicon? = 11
 1.5 Computational requirement = 13
  1.5.1 Digital or analogue? = 13
2. Neural VLSI - A review = 15
 2.1 Introduction = 15
 2.2 MOSFET equations - a crash course = 15
 2.3 Digital accelerators = 19
 2.4 Op-amps and resistors - a final look = 21
 2.5 Subthreshold circuits for neural networks = 22
 2.6 Analogue/digital combinations = 24
 2.7 MOS transconductance multiplier = 25
 2.8 MOSFET analogue multiplier = 26
 2.9 Imprecise low-area 'multiplier' = 27
 2.10 Analogue, programmable - Intel Electronically-Trainable Artificial Neural Network (ETANN) chip = 27
 2.11 Conclusion = 28
3 Analogue synaptic weight storage = 29
 3.1 Introduction = 29
 3.2 Dynamic weight storage = 29
 3.3 MNOS (Metal Nitride Oxide Silicon) networks = 30
 3.4 Floating-gate technology = 33
 3.5 Amorphous silicon (a-Si) synapses = 35
  3.5.1 Forming at higher temperatures = 36
  3.5.2 Deposition of metal during a-Si growth = 36
  3.5.3 Investigation of the forming process = 37
  3.5.4 Programming technology = 37
4 The pulse stream technique = 38
 4.1 Introduction = 38
 4.2 Pulse encoding of information = 39
  4.2.1 Pulse amplitude modulation = 41
  4.2.2 Pulse width modulation = 41
  4.2.3 Pulse frequency modulation = 43
  4.2.4 Phase or delay modulation = 43
  4.2.5 Noise, robustness, accuracy and speed = 43
 4.3 Pulse stream arithmetic - addition and multiplication = 44
  4.3.1 Addition of pulse stream signals = 44
  4.3.2 Multiplication of pulse stream signals = 47
  4.3.3 Interfacing to addition = 49
 4.4 Pulse stream communication = 49
  4.4.1 Asynchronous intercommunication using pulse time information = 51
 4.5 Conclusions = 53
5 Pulse stream case studies = 54
 5.1 Overall introduction to case studies = 54
  5.1.1 Introduction - Edinburgh SADMANN/EPSILON work = 54
 5.2 The EPSILON (Edinburgh Pulse-Stream Implementation of a Learning-Oriented Network) chip = 55
 5.3 Process invariant summation and multiplication - the synapse = 55
  5.3.1 The transconductance multiplier = 56
  5.3.2 A synapse based on distributed feedback = 58
  5.3.3 The feedback operational amplifier = 61
  5.3.4 A voltage integrator = 61
  5.3.5 The complete system = 63
 5.4 Pulse frequency modulation neuron = 64
  5.4.1 A pulse stream neuron with electrically adjustable gain = 66
 5.5 Pulse width modulation neuron = 67
 5.6 Switched-capacitor design = 69
  5.6.1 Weight linearity = 70
  5.6.2 Weight storage time = 70
  5.6.3 Accuracy of computation = 71
 5.7 Per-pulse computation = 71
  5.7.1 Design overview = 72
  5.7.2 Input stage = 73
  5.7.3 Synapse = 73
  5.7.4 Summation neuron = 74
  5.7.5 Sigmoid function = 75
  5.7.6 Pulse regeneration = 75
  5.7.7 SPICE simulation = 75
  5.7.8 Results from test chips = 76
  5.7.9 Synapse linearity = 78
  5.7.10 Input sample and hold = 78
  5.7.11 Sigmoid transfer function = 81
  5.7.12 Output pulse stream generation = 82
  5.7.13 Weight precision = 83
  5.7.14 Weight update = 84
  5.7.15 Per-Pulse Computation - Summary = 84
 5.8 EPSILON - The chosen neuron/synapse cells, and results = 85
  5.8.1 The EPSILON design = 86
  5.8.2 Synapse = 87
  5.8.3 Neurons = 88
  5.8.4 EPSILON specification = 90
  5.8.5 Application - vowel classification = 91
 5.9 Conclusions = 92
6 Application examples = 94
 6.1 Introduction = 94
 6.2 Real-time speech recognition = 94
 6.3 Applications of neural VLSI = 96
 6.4 Applications of neural VLSI - dedicated systems = 96
  6.4.1 Path planning = 98
  6.4.2 Localization = 99
  6.4.3 Obstacle detection/avoidance = 101
  6.4.4 Conclusion = 102
 6.5 Applications of neural VLSI - hardware co-processors = 102
 6.6 Applications of neural VLSI - embedded neural systems = 103
 6.7 Conclusion = 103
7 The future = 104
 7.1 Introduction = 104
 7.2 Hardware learning with multi-layer perceptrons = 105
 7.3 The top-down approach : Virtual Targets = 106
  7.3.1 'Virtual Targets' Method - In an I:J:K MLP network = 107
  7.3.2 Experimental results = 108
  7.3.3 Implementation = 115
 7.4 The bottom-up approach: weight perturbation = 116
 7.5 Test problem = 117
 7.6 Weight perturbation for hardware learning = 119
 7.7 Back-propagation revisited (for the final time?) = 121
 7.8 Conclusion = 124
 7.9 Noisy synaptic arithmetic - an analysis = 125
  7.9.1 Mathematical predictions = 126
  7.9.2 Simulations = 127
  7.9.3 Prediction/verification = 129
  7.9.4 Generalization ability = 129
  7.9.5 Learning trajectory = 132
 7.10 Noise in training - some conclusions = 133
 7.11 On-chip learning - conclusion = 134
References = 136
Index = 142


New Arrivals in Related Fields

Negro, Alessandro (2026)
Dyer-Witheford, Nick (2026)