
Detailed Information


Fundamentals of deep learning : designing next-generation machine intelligence algorithms / First edition (17 loans)

Material Type
Book
Personal Author
Buduma, Nikhil. Locascio, Nicholas.
Title / Author Statement
Fundamentals of deep learning : designing next-generation machine intelligence algorithms / Nikhil Buduma ; with contributions by Nicholas Locascio.
Edition
First edition.
Publication
Sebastopol, CA : O'Reilly Media, c2017.
Physical Description
xii, 283 p. : ill. (some col.) ; 24 cm.
ISBN
9781491925614 (paperback) 1491925612 (paperback)
Contents Note
The neural network -- Training feed-forward neural networks -- Implementing neural networks in TensorFlow -- Beyond gradient descent -- Convolutional neural networks -- Embedding and representation learning -- Models for sequence analysis -- Memory augmented neural networks -- Deep reinforcement learning.
Bibliography Note
Includes bibliographical references and index.
General Subjects
Artificial intelligence. Machine learning. Neural networks (Computer science).
000 00000cam u2200205 a 4500
001 000045936620
005 20180403113351
008 180327s2017 caua b 001 0 eng d
010 ▼a 2017448783
015 ▼a GBB704534 ▼2 bnb
020 ▼a 9781491925614 (paperback)
020 ▼a 1491925612 (paperback)
035 ▼a (KERIS)REF000018572834
040 ▼a IG$ ▼b eng ▼c IG$ ▼e rda ▼d OCLCO ▼d TEF ▼d BDX ▼d BTCTA ▼d YDXCP ▼d OCLCQ ▼d OCLCF ▼d FM0 ▼d CHVBK ▼d OCLCO ▼d OQX ▼d SHS ▼d U3G ▼d OCLCA ▼d DLC ▼d 211009
050 0 0 ▼a TA347.A78 ▼b B83 2017
082 0 4 ▼a 006.3/1 ▼2 23
084 ▼a 006.31 ▼2 DDCK
090 ▼a 006.31 ▼b B927f
100 1 ▼a Buduma, Nikhil. ▼0 AUTH(211009)90257
245 1 0 ▼a Fundamentals of deep learning : ▼b designing next-generation machine intelligence algorithms / ▼c Nikhil Buduma ; with contributions by Nicholas Locascio.
246 3 0 ▼a Designing next-generation machine intelligence algorithms
250 ▼a First edition.
260 ▼a Sebastopol, CA : ▼b O'Reilly Media, ▼c c2017.
300 ▼a xii, 283 p. : ▼b ill. (some col.) ; ▼c 24 cm.
504 ▼a Includes bibliographical references and index.
505 0 ▼a The neural network -- Training feed-forward neural networks -- Implementing neural networks in TensorFlow -- Beyond gradient descent -- Convolutional neural networks -- Embedding and representation learning -- Models for sequence analysis -- Memory augmented neural networks -- Deep reinforcement learning.
650 0 ▼a Artificial intelligence.
650 0 ▼a Machine learning.
650 0 ▼a Neural networks (Computer science).
700 1 ▼a Locascio, Nicholas.
945 ▼a KLPA

No.	Location	Call Number	Accession No.	Status	Due Date
1	Main Library / Stacks 6F	006.31 B927f	111788556 (2 loans)	Available	-
2	Science Library / Sci-Info (2F stacks)	006.31 B927f	121244069 (15 loans)	Available	-

Content Information

Book Introduction

With the reinvigoration of neural networks in the 2000s, deep learning has become an extremely active area of research, one that's paving the way for modern machine learning. In this practical book, author Nikhil Buduma provides examples and clear explanations to guide you through major concepts of this complicated field.

Companies such as Google, Microsoft, and Facebook are actively growing in-house deep-learning teams. For the rest of us, however, deep learning is still a pretty complex and difficult subject to grasp. If you're familiar with Python, and have a background in calculus, along with a basic understanding of machine learning, this book will get you started.

  • Examine the foundations of machine learning and neural networks
  • Learn how to train feed-forward neural networks
  • Use TensorFlow to implement your first neural network
  • Manage problems that arise as you begin to make networks deeper
  • Build neural networks that analyze complex images
  • Perform effective dimensionality reduction using autoencoders
  • Dive deep into sequence analysis to examine language
  • Understand the fundamentals of reinforcement learning
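As a taste of the material in the book's first two chapters (the neuron, sigmoid units, and gradient descent), here is a minimal sketch — not taken from the book — of a single sigmoid neuron trained with the delta rule to learn the logical AND function, using only the Python standard library:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny training set: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # two input weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

for epoch in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Delta rule for a sigmoid neuron under squared error:
        # dE/dz = (y - target) * y * (1 - y)
        delta = (y - target) * y * (1 - y)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b    -= lr * delta

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # converges to [0, 0, 0, 1]
```

AND is linearly separable, so a single neuron suffices; the book's later chapters cover the multilayer networks (and TensorFlow implementations) needed when one neuron is not enough.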


Information provided by: Aladin

About the Author

Nikhil Buduma (Author)

Co-founder and chief scientist of Remedy, a San Francisco-based company building a new system for data-driven primary healthcare. At sixteen, he ran a drug-discovery laboratory at San Jose State University, developing low-cost screening methodologies for resource-constrained communities. By nineteen, he had won two gold medals at the International Biology Olympiad and gone on to MIT, where he focused on developing large-scale data systems to improve healthcare delivery. At MIT he co-founded Lean On Me, a national nonprofit that provides effective peer support through an anonymous text hotline on college campuses and uses data to drive positive mental health and wellness outcomes nationwide. Today, Nikhil invests in hardware technology and data companies through his venture fund, Q Venture Partners, and manages the data analytics team of the Milwaukee Brewers baseball team.

Information provided by: Aladin

Table of Contents

Chapter	Section	Page
Preface	p. ix
1	The Neural Network	p. 1
    Building Intelligent Machines	p. 1
    The Limits of Traditional Computer Programs	p. 2
    The Mechanics of Machine Learning	p. 3
    The Neuron	p. 7
    Expressing Linear Perceptrons as Neurons	p. 8
    Feed-Forward Neural Networks	p. 9
    Linear Neurons and Their Limitations	p. 12
    Sigmoid, Tanh, and ReLU Neurons	p. 13
    Softmax Output Layers	p. 15
    Looking Forward	p. 15
2	Training Feed-Forward Neural Networks	p. 17
    The Fast-Food Problem	p. 17
    Gradient Descent	p. 19
    The Delta Rule and Learning Rates	p. 21
    Gradient Descent with Sigmoidal Neurons	p. 22
    The Backpropagation Algorithm	p. 23
    Stochastic and Minibatch Gradient Descent	p. 25
    Test Sets, Validation Sets, and Overfitting	p. 27
    Preventing Overfitting in Deep Neural Networks	p. 34
    Summary	p. 37
3	Implementing Neural Networks in TensorFlow	p. 39
    What Is TensorFlow?	p. 39
    How Does TensorFlow Compare to Alternatives?	p. 40
    Installing TensorFlow	p. 41
    Creating and Manipulating TensorFlow Variables	p. 43
    TensorFlow Operations	p. 45
    Placeholder Tensors	p. 45
    Sessions in TensorFlow	p. 45
    Navigating Variable Scopes and Sharing Variables	p. 48
    Managing Models over the CPU and GPU	p. 51
    Specifying the Logistic Regression Model in TensorFlow	p. 52
    Logging and Training the Logistic Regression Model	p. 55
    Leveraging TensorBoard to Visualize Computation Graphs and Learning	p. 58
    Building a Multilayer Model for MNIST in TensorFlow	p. 59
    Summary	p. 62
4	Beyond Gradient Descent	p. 63
    The Challenges with Gradient Descent	p. 63
    Local Minima in the Error Surfaces of Deep Networks	p. 64
    Model Identifiability	p. 65
    How Pesky Are Spurious Local Minima in Deep Networks?	p. 66
    Flat Regions in the Error Surface	p. 69
    When the Gradient Points in the Wrong Direction	p. 71
    Momentum-Based Optimization	p. 74
    A Brief View of Second-Order Methods	p. 77
    Learning Rate Adaptation	p. 78
    AdaGrad-Accumulating Historical Gradients	p. 79
    RMSProp-Exponentially Weighted Moving Average of Gradients	p. 80
    Adam-Combining Momentum and RMSProp	p. 81
    The Philosophy Behind Optimizer Selection	p. 83
    Summary	p. 83
5	Convolutional Neural Networks	p. 85
    Neurons in Human Vision	p. 85
    The Shortcomings of Feature Selection	p. 86
    Vanilla Deep Neural Networks Don't Scale	p. 89
    Filters and Feature Maps	p. 90
    Full Description of the Convolutional Layer	p. 95
    Max Pooling	p. 98
    Full Architectural Description of Convolution Networks	p. 99
    Closing the Loop on MNIST with Convolutional Networks	p. 101
    Image Preprocessing Pipelines Enable More Robust Models	p. 103
    Accelerating Training with Batch Normalization	p. 104
    Building a Convolutional Network for CIFAR-10	p. 107
    Visualizing Learning in Convolutional Networks	p. 109
    Leveraging Convolutional Filters to Replicate Artistic Styles	p. 113
    Learning Convolutional Filters for Other Problem Domains	p. 114
    Summary	p. 115
6	Embedding and Representation Learning	p. 117
    Learning Lower-Dimensional Representations	p. 117
    Principal Component Analysis	p. 118
    Motivating the Autoencoder Architecture	p. 120
    Implementing an Autoencoder in TensorFlow	p. 121
    Denoising to Force Robust Representations	p. 134
    Sparsity in Autoencoders	p. 137
    When Context Is More Informative than the Input Vector	p. 140
    The Word2Vec Framework	p. 143
    Implementing the Skip-Gram Architecture	p. 146
    Summary	p. 152
7	Models for Sequence Analysis	p. 153
    Analyzing Variable-Length Inputs	p. 153
    Tackling seq2seq with Neural N-Grams	p. 155
    Implementing a Part-of-Speech Tagger	p. 156
    Dependency Parsing and SyntaxNet	p. 164
    Beam Search and Global Normalization	p. 168
    A Case for Stateful Deep Learning Models	p. 172
    Recurrent Neural Networks	p. 173
    The Challenges with Vanishing Gradients	p. 176
    Long Short-Term Memory (LSTM) Units	p. 178
    TensorFlow Primitives for RNN Models	p. 183
    Implementing a Sentiment Analysis Model	p. 185
    Solving seq2seq Tasks with Recurrent Neural Networks	p. 189
    Augmenting Recurrent Networks with Attention	p. 191
    Dissecting a Neural Translation Network	p. 194
    Summary	p. 217
8	Memory Augmented Neural Networks	p. 219
    Neural Turing Machines	p. 219
    Attention-Based Memory Access	p. 221
    NTM Memory Addressing Mechanisms	p. 223
    Differentiable Neural Computers	p. 226
    Interference-Free Writing in DNCs	p. 229
    DNC Memory Reuse	p. 230
    Temporal Linking of DNC Writes	p. 231
    Understanding the DNC Read Head	p. 232
    The DNC Controller Network	p. 232
    Visualizing the DNC in Action	p. 234
    Implementing the DNC in TensorFlow	p. 237
    Teaching a DNC to Read and Comprehend	p. 242
    Summary	p. 244
9	Deep Reinforcement Learning	p. 245
    Deep Reinforcement Learning Masters Atari Games	p. 245
    What Is Reinforcement Learning?	p. 247
    Markov Decision Processes (MDP)	p. 248
        Policy	p. 249
        Future Return	p. 250
        Discounted Future Return	p. 251
    Explore Versus Exploit	p. 251
    Policy Versus Value Learning	p. 253
        Policy Learning via Policy Gradients	p. 254
    Pole-Cart with Policy Gradients	p. 254
        OpenAI Gym	p. 254
        Creating an Agent	p. 255
        Building the Model and Optimizer	p. 257
        Sampling Actions	p. 257
        Keeping Track of History	p. 257
        Policy Gradient Main Function	p. 258
        PGAgent Performance on Pole-Cart	p. 260
    Q-Learning and Deep Q-Networks	p. 261
        The Bellman Equation	p. 261
        Issues with Value Iteration	p. 262
        Approximating the Q-Function	p. 262
        Deep Q-Network (DQN)	p. 263
        Training DQN	p. 263
        Learning Stability	p. 263
        Target Q-Network	p. 264
        Experience Replay	p. 264
        From Q-Function to Policy	p. 264
        DQN and the Markov Assumption	p. 265
        DQN's Solution to the Markov Assumption	p. 265
        Playing Breakout with DQN	p. 265
        Building Our Architecture	p. 268
        Stacking Frames	p. 268
        Setting Up Training Operations	p. 268
        Updating Our Target Q-Network	p. 269
        Implementing Experience Replay	p. 269
        DQN Main Loop	p. 270
        DQNAgent Results on Breakout	p. 272
    Improving and Moving Beyond DQN	p. 273
        Deep Recurrent Q-Networks (DRQN)	p. 273
        Asynchronous Advantage Actor-Critic Agent (A3C)	p. 274
        Unsupervised REinforcement and Auxiliary Learning (UNREAL)	p. 275
    Summary	p. 276
Index	p. 277

New Arrivals in Related Fields

Dyer-Witheford, Nick (2026)
양성봉 (2025)