
Detailed Information

Natural language processing with TensorFlow : teach language to machines using Python's deep learning library (checked out 3 times)

Material Type
Monograph
Personal Author
Ganegedara, Thushan.
Title / Statement of Responsibility
Natural language processing with TensorFlow : teach language to machines using Python's deep learning library / Thushan Ganegedara.
Publication Details
Birmingham : Packt Publishing Ltd, c2018.
Physical Description
xviii, 446 p. : ill. ; 24 cm.
Other Form Entry
Online version: Ganegedara, Thushan. Natural language processing with TensorFlow : teach language to machines using Python's deep learning library 9781788477758 (211009) 000045985744
ISBN
9781788478311
Bibliography Note
Includes bibliographical references and index.
Subject Headings
Natural language processing (Computer science). Machine learning. Python (Computer program language).
000 00000nam u2200205 a 4500
001 000045983660
005 20190611160924
008 190515s2018 enka b 001 0 eng d
020 ▼a 9781788478311
040 ▼a 211009 ▼c 211009 ▼d 211009
082 0 4 ▼a 006.35 ▼2 23
084 ▼a 006.35 ▼2 DDCK
090 ▼a 006.35 ▼b G196n
100 1 ▼a Ganegedara, Thushan.
245 1 0 ▼a Natural language processing with TensorFlow : ▼b teach language to machines using Python's deep learning library / ▼c Thushan Ganegedara.
260 ▼a Birmingham : ▼b Packt Publishing Ltd, ▼c c2018.
300 ▼a xviii, 446 p. : ▼b ill. ; ▼c 24 cm.
504 ▼a Includes bibliographical references and index.
650 0 ▼a Natural language processing (Computer science).
650 0 ▼a Machine learning.
650 0 ▼a Python (Computer program language).
776 0 8 ▼i Online version: ▼a Ganegedara, Thushan. ▼t Natural language processing with TensorFlow : teach language to machines using Python's deep learning library ▼z 9781788477758 ▼w (211009) 000045985744
945 ▼a KLPA

Holdings Information

No. | Location | Call Number | Registration No. | Status | Due Date | Reservation
1 | Main Library / Stacks, 6th Floor | 006.35 G196n | 111809469 (checked out 3 times) | Available for loan | - | -

Contents Information

About the Book

Write modern natural language processing applications using deep learning algorithms and TensorFlow


Key Features:

  • Focuses on more efficient natural language processing using TensorFlow
  • Covers NLP as a field in its own right, improving your ability to choose among TensorFlow tools and other deep learning approaches
  • Provides choices for how to process and evaluate large unstructured text datasets
  • Applies the TensorFlow toolbox to specific tasks in the most interesting field in artificial intelligence


Book Description:

Natural language processing (NLP) supplies the majority of data available to deep learning applications, while TensorFlow is the most important deep learning framework currently available. Natural Language Processing with TensorFlow brings TensorFlow and NLP together to give you invaluable tools to work with the immense volume of unstructured data in today's data streams, and apply these tools to specific NLP tasks.


Thushan Ganegedara starts by giving you a grounding in NLP and TensorFlow basics. You'll then learn how to use Word2vec, including advanced extensions, to create word embeddings that turn sequences of words into vectors accessible to deep learning algorithms. Chapters on classical deep learning algorithms, like convolutional neural networks (CNN) and recurrent neural networks (RNN), demonstrate important NLP tasks such as sentence classification and language generation. You will learn how to apply high-performance RNN models, like long short-term memory (LSTM) cells, to NLP tasks. You will also explore neural machine translation and implement a neural machine translator.
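The word-analogy idea behind these embeddings (the "is queen = king - he + she?" exercise listed in the book's Chapter 3) can be illustrated with a minimal, self-contained sketch. The toy 3-dimensional vectors and the cosine helper below are invented for demonstration only; real Word2vec vectors are learned from a text corpus:

```python
import math

# Toy "embeddings" (hypothetical values, for illustration only).
emb = {
    "king":  [0.8, 0.9, 0.1],
    "queen": [0.8, 0.1, 0.9],
    "he":    [0.1, 0.9, 0.0],
    "she":   [0.1, 0.1, 0.8],
    "apple": [0.0, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Analogy: the vector king - he + she should lie closest to queen.
target = [k - h + s for k, h, s in zip(emb["king"], emb["he"], emb["she"])]
best = max((w for w in emb if w not in ("king", "he", "she")),
           key=lambda w: cosine(emb[w], target))
print(best)  # queen
```

With embeddings trained on a large corpus, the same vector arithmetic recovers semantic analogies; here it works trivially because the toy vectors were constructed that way.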


After reading this book, you will have an understanding of NLP, the skills to apply TensorFlow to deep learning NLP applications, and the knowledge to perform specific NLP tasks.


What You Will Learn:

  • Core concepts of NLP and various approaches to natural language processing
  • How to solve NLP tasks by applying TensorFlow functions to create neural networks
  • Strategies to process large amounts of data into word representations that can be used by deep learning applications
  • Techniques for performing sentence classification and language generation using CNNs and RNNs
  • About employing state-of-the-art advanced RNNs, like long short-term memory, to solve complex text generation tasks
  • How to write automatic translation programs and implement an actual neural machine translator from scratch
  • The trends and innovations that are paving the future in NLP


Who this book is for:

This book is for Python developers with a strong interest in deep learning who want to learn how to leverage TensorFlow to simplify NLP tasks. Fundamental Python skills are assumed, as well as some knowledge of machine learning and undergraduate-level calculus and linear algebra. No previous natural language processing experience is required, although some background in NLP or computational linguistics will be helpful.


Information provided by: Aladin

Table of Contents

CONTENTS
Preface = xi
Chapter 1 : Introduction to Natural Language Processing = 1
 What is Natural Language Processing? = 1
 Tasks of Natural Language Processing = 2
 The traditional approach to Natural Language Processing = 5
  Understanding the traditional approach = 5
  Drawbacks of the traditional approach = 10
 The deep learning approach to Natural Language Processing = 10
  History of deep learning = 11
  The current state of deep learning and NLP = 13
  Understanding a simple deep model - a Fully-Connected Neural Network = 14
 The roadmap - beyond this chapter = 16
 Introduction to the technical tools = 21
  Description of the tools = 21
  Installing Python and scikit-learn = 22
  Installing Jupyter Notebook = 22
  Installing TensorFlow = 23
 Summary = 24
Chapter 2 : Understanding TensorFlow = 27
 What is TensorFlow? = 28
  Getting started with TensorFlow = 28
  TensorFlow client in detail = 31
  TensorFlow architecture - what happens when you execute the client? = 32
  Cafe Le TensorFlow - understanding TensorFlow with an analogy = 35
 Inputs, variables, outputs, and operations = 36
  Defining inputs in TensorFlow = 37
  Defining variables in TensorFlow = 43
  Defining TensorFlow outputs = 45
  Defining TensorFlow operations = 45
 Reusing variables with scoping = 57
 Implementing our first neural network = 59
  Preparing the data = 60
  Defining the TensorFlow graph = 61
  Running the neural network = 63
 Summary = 65
Chapter 3 : Word2vec - Learning Word Embeddings = 67
 What is a word representation or meaning? = 69
 Classical approaches to learning word representation = 69
  WordNet - using an external lexical knowledge base for learning word representations = 70
  One-hot encoded representation = 74
  The TF-IDF method = 75
  Co-occurrence matrix = 76
 Word2vec - a neural network-based approach to learning word representation = 77
  Exercise : is queen = king - he + she? = 78
  Designing a loss function for learning word embeddings = 82
 The skip-gram algorithm = 83
  From raw text to structured data = 83
  Learning the word embeddings with a neural network = 84
  Implementing skip-gram with TensorFlow = 95
 The Continuous Bag-of-Words algorithm = 98
  Implementing CBOW in TensorFlow = 99
 Summary = 100
Chapter 4 : Advanced Word2vec = 103
 The original skip-gram algorithm = 104
  Implementing the original skip-gram algorithm = 105
  Comparing the original skip-gram with the improved skip-gram = 107
 Comparing skip-gram with CBOW = 107
  Performance comparison = 108
  Which is the winner, skip-gram or CBOW? = 112
 Extensions to the word embeddings algorithms = 114
  Using the unigram distribution for negative sampling = 114
  Implementing unigram-based negative sampling = 115
  Subsampling - probabilistically ignoring the common words = 117
  Implementing subsampling = 118
  Comparing the CBOW and its extensions = 118
 More recent algorithms extending skip-gram and CBOW = 119
  A limitation of the skip-gram algorithm = 119
  The structured skip-gram algorithm = 120
  The loss function = 120
  The continuous window model = 122
 GloVe - Global Vectors representation = 123
  Understanding GloVe = 123
  Implementing GloVe = 125
 Document classification with Word2vec = 126
  Dataset = 127
  Classifying documents with word embeddings = 127
  Implementation - learning word embeddings = 128
  Implementation - word embeddings to document embeddings = 129
  Document clustering and t-SNE visualization of embedded documents = 130
  Inspecting several outliers = 131
  Implementation - clustering/classification of documents with K-means = 132
 Summary = 134
Chapter 5 : Sentence Classification with Convolutional Neural Networks = 135
 Introducing Convolutional Neural Networks = 136
  CNN fundamentals = 136
  The power of Convolutional Neural Networks = 139
 Understanding Convolutional Neural Networks = 139
  Convolution operation = 140
  Pooling operation = 144
  Fully connected layers = 147
  Putting everything together = 147
 Exercise - image classification on MNIST with CNN = 148
  About the data = 149
  Implementing the CNN = 149
  Analyzing the predictions produced with a CNN = 152
 Using CNNs for sentence classification = 153
  CNN structure = 153
  Pooling over time = 157
  Implementation - sentence classification with CNNs = 159
 Summary = 162
Chapter 6 : Recurrent Neural Networks = 163
 Understanding Recurrent Neural Networks = 164
  The problem with feed-forward neural networks = 165
  Modeling with Recurrent Neural Networks = 166
  Technical description of a Recurrent Neural Network = 168
 Backpropagation Through Time = 170
  How backpropagation works = 170
  Why we cannot use BP directly for RNNs = 171
  Backpropagation Through Time - training RNNs = 172
  Truncated BPTT - training RNNs efficiently = 173
  Limitations of BPTT - vanishing and exploding gradients = 173
 Applications of RNNs = 175
  One-to-one RNNs = 176
  One-to-many RNNs = 176
  Many-to-one RNNs = 177
  Many-to-many RNNs = 178
 Generating text with RNNs = 179
  Defining hyperparameters = 179
  Unrolling the inputs over time for Truncated BPTT = 180
  Defining the validation dataset = 181
  Defining weights and biases = 181
  Defining state persisting variables = 181
  Calculating the hidden states and outputs with unrolled inputs = 182
  Calculating the loss = 183
  Resetting state at the beginning of a new segment of text = 183
  Calculating validation output = 184
  Calculating gradients and optimizing = 184
  Outputting a freshly generated chunk of text = 184
 Evaluating text results output from the RNN = 185
 Perplexity - measuring the quality of the text result = 187
 Recurrent Neural Networks with Context Features - RNNs with longer memory = 188
  Technical description of the RNN-CF = 188
  Implementing the RNN-CF = 190
 Summary = 199
Chapter 7 : Long Short-Term Memory Networks = 201
 Understanding Long Short-Term Memory Networks = 202
  What is an LSTM? = 203
  LSTMs in more detail = 204
  How LSTMs differ from standard RNNs = 212
 How LSTMs solve the vanishing gradient problem = 213
  Improving LSTMs = 216
  Greedy sampling = 217
  Beam search = 218
  Using word vectors = 219
  Bidirectional LSTMs (BiLSTM) = 220
 Other variants of LSTMs = 222
  Peephole connections = 223
  Gated Recurrent Units = 224
 Summary = 226
Chapter 8 : Applications of LSTM - Generating Text = 229
 Our data = 230
  About the dataset = 230
  Preprocessing data = 232
 Implementing an LSTM = 232
  Defining hyperparameters = 232
  Defining parameters = 233
  Defining an LSTM cell and its operations = 235
  Defining inputs and labels = 236
  Defining the optimizer = 238
  Decaying learning rate over time = 238
  Making predictions = 240
  Calculating perplexity (loss) = 240
  Resetting states = 240
  Greedy sampling to break unimodality = 241
  Generating new text = 241
  Example generated text = 242
 Comparing LSTMs to LSTMs with peephole connections and GRUs = 243
  Standard LSTM = 243
  Gated Recurrent Units (GRUs) = 245
  LSTMs with peepholes = 248
  Training and validation perplexities over time = 250
 Improving LSTMs - beam search = 251
  Implementing beam search = 252
  Examples generated with beam search = 254
 Improving LSTMs - generating text with words instead of n-grams = 255
  The curse of dimensionality = 255
  Word2vec to the rescue = 255
  Generating text with Word2vec = 256
  Examples generated with LSTM-Word2vec and beam search = 258
  Perplexity over time = 259
 Using the TensorFlow RNN API = 260
 Summary = 264
Chapter 9 : Applications of LSTM - Image Caption Generation = 265
 Getting to know the data = 266
  ILSVRC ImageNet dataset = 267
  The MS-COCO dataset = 268
 The machine learning pipeline for image caption generation = 269
 Extracting image features with CNNs = 273
 Implementation - loading weights and inferencing with VGG-16 = 274
  Building and updating variables = 274
  Preprocessing inputs = 275
  Inferring VGG-16 = 277
  Extracting vectorized representations of images = 278
  Predicting class probabilities with VGG-16 = 278
 Learning word embeddings = 280
 Preparing captions for feeding into LSTMs = 281
 Generating data for LSTMs = 282
 Defining the LSTM = 284
 Evaluating the results quantitatively = 287
  BLEU = 287
  ROUGE = 288
  METEOR = 289
  CIDEr = 291
  BLEU-4 over time for our model = 292
 Captions generated for test images = 293
 Using TensorFlow RNN API with pretrained GloVe word vectors = 297
  Loading GloVe word vectors = 298
  Cleaning data = 299
  Using pretrained embeddings with TensorFlow RNN API = 302
 Summary = 308
Chapter 10 : Sequence-to-Sequence Learning - Neural Machine Translation = 311
 Machine translation = 312
 A brief historical tour of machine translation = 313
  Rule-based translation = 313
  Statistical Machine Translation (SMT) = 315
  Neural Machine Translation (NMT) = 317
 Understanding Neural Machine Translation = 320
  Intuition behind NMT = 320
  NMT architecture = 321
 Preparing data for the NMT system = 325
  At training time = 325
  Reversing the source sentence = 326
  At testing time = 327
 Training the NMT = 328
 Inference with NMT = 329
 The BLEU score - evaluating the machine translation systems = 330
  Modified precision = 331
  Brevity penalty = 331
  The final BLEU score = 332
 Implementing an NMT from scratch - a German to English translator = 332
  Introduction to data = 333
  Preprocessing data = 333
  Learning word embeddings = 335
  Defining the encoder and the decoder = 335
  Defining the end-to-end output calculation = 338
  Some translation results = 340
 Training an NMT jointly with word embeddings = 342
  Maximizing matchings between the dataset vocabulary and the pretrained embeddings = 343
  Defining the embeddings layer as a TensorFlow variable = 345
 Improving NMTs = 348
  Teacher forcing = 348
  Deep LSTMs = 350
 Attention = 351
  Breaking the context vector bottleneck = 351
  The attention mechanism in detail = 352
  Some translation results - NMT with attention = 359
  Visualizing attention for source and target sentences = 361
 Other applications of Seq2Seq models - chatbots = 363
  Training a chatbot = 364
  Evaluating chatbots - Turing test = 365
 Summary = 366
Chapter 11 : Current Trends and the Future of Natural Language Processing = 369
 Current trends in NLP = 370
  Word embeddings = 370
  Neural Machine Translation (NMT) = 376
 Penetration into other research fields = 378
  Combining NLP with computer vision = 378
  Reinforcement learning = 381
  Generative Adversarial Networks for NLP = 384
 Towards Artificial General Intelligence = 386
  One Model to Learn Them All = 386
  A joint many-task model - growing a neural network for multiple NLP tasks = 389
 NLP for social media = 391
  Detecting rumors in social media = 391
  Detecting emotions in social media = 391
  Analyzing political framing in tweets = 393
 New tasks emerging = 393
  Detecting sarcasm = 393
  Language grounding = 394
  Skimming text with LSTMs = 395
 Newer machine learning models = 395
  Phased LSTM = 396
  Dilated Recurrent Neural Networks (DRNNs) = 397
 Summary = 398
 References = 398
Appendix : Mathematical Foundations and Advanced TensorFlow = 403
 Basic data structures = 403
  Scalar = 403
  Vectors = 403
  Matrices = 404
  Indexing of a matrix = 405
 Special types of matrices = 406
  Identity matrix = 406
  Diagonal matrix = 407
  Tensors = 407
 Tensor/matrix operations = 407
  Transpose = 407
  Multiplication = 408
  Element-wise multiplication = 409
  Inverse = 409
  Finding the matrix inverse - Singular Value Decomposition (SVD) = 411
  Norms = 412
  Determinant = 412
 Probability = 413
  Random variables = 413
  Discrete random variables = 413
  Continuous random variables = 414
  The probability mass/density function = 414
  Conditional probability = 417
  Joint probability = 417
  Marginal probability = 417
  Bayes' rule = 418
 Introduction to Keras = 418
 Introduction to the TensorFlow seq2seq library = 421
  Defining embeddings for the encoder and decoder = 421
  Defining the encoder = 421
  Defining the decoder = 422
 Visualizing word embeddings with TensorBoard = 424
  Starting TensorBoard = 424
  Saving word embeddings and visualizing via TensorBoard = 425
 Summary = 429
Other Books You May Enjoy = 431
Index = 437

New Arrivals in Related Fields

Dyer-Witheford, Nick (2026)
양성봉 (2025)