
Detailed Information

Natural language processing with PyTorch : build intelligent language applications using deep learning


Material Type
Book
Personal Author(s)
Rao, Delip. McMahan, Brian.
Title / Author Statement
Natural language processing with PyTorch : build intelligent language applications using deep learning / Delip Rao and Brian McMahan.
Publication Information
Sebastopol : O'Reilly Media, c2019.
Physical Description
xiii, 238 p. : ill. ; 24 cm.
ISBN
9781491978238 (pbk.)
Bibliographic Note
Includes bibliographical references and index.
General Subject(s)
Machine learning. Natural language processing (Computer science).
000 00000nam u2200205 a 4500
001 000045978017
005 20190401143659
008 190329s2019 caua b 001 0 eng d
020 ▼a 9781491978238 (pbk.)
040 ▼a 211009 ▼c 211009 ▼d 211009
082 0 4 ▼a 006.31 ▼2 23
084 ▼a 006.31 ▼2 DDCK
090 ▼a 006.31 ▼b R215n
100 1 ▼a Rao, Delip.
245 1 0 ▼a Natural language processing with PyTorch : ▼b build intelligent language applications using deep learning / ▼c Delip Rao and Brian McMahan.
260 ▼a Sebastopol : ▼b O'Reilly Media, ▼c c2019.
300 ▼a xiii, 238 p. : ▼b ill. ; ▼c 24 cm.
504 ▼a Includes bibliographical references and index.
650 0 ▼a Machine learning.
650 0 ▼a Natural language processing (Computer science).
700 1 ▼a McMahan, Brian.
945 ▼a KLPA

Holdings Information

No. | Location | Call Number | Accession No. | Status | Due Date | Reservation
1 | Science Library / Sci-Info (2nd-floor stacks) | 006.31 R215n | 121248413 (checked out 10 times) | Available for loan | - | -

Contents Information

Book Description

Natural Language Processing (NLP) provides boundless opportunities for solving problems in artificial intelligence, making products such as Amazon Alexa and Google Translate possible. If you're a developer or data scientist new to NLP and deep learning, this practical guide shows you how to apply these methods using PyTorch, a Python-based deep learning library.

Authors Delip Rao and Brian McMahan provide you with a solid grounding in NLP and deep learning algorithms and demonstrate how to use PyTorch to build applications involving rich representations of text specific to the problems you face. Each chapter includes several code examples and illustrations.

  • Explore computational graphs and the supervised learning paradigm
  • Master the basics of the PyTorch optimized tensor manipulation library
  • Get an overview of traditional NLP concepts and methods
  • Learn the basic ideas involved in building neural networks
  • Use embeddings to represent words, sentences, documents, and other features
  • Explore sequence prediction and generate sequence-to-sequence models
  • Learn design patterns for building production NLP systems
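The first bullet points at the book's opening material on observation encoding (one-hot and term-frequency representations, per the table of contents below). As a rough illustration of those two encodings, here is a minimal pure-Python sketch over a toy corpus; the function names and corpus are my own, not the book's code, and the book itself works with PyTorch tensors rather than plain lists.

```python
def build_vocab(corpus):
    """Map each unique lowercase token to an integer index."""
    vocab = {}
    for sentence in corpus:
        for token in sentence.lower().split():
            if token not in vocab:
                vocab[token] = len(vocab)
    return vocab

def one_hot(token, vocab):
    """One-hot vector: 1 at the token's index, 0 elsewhere."""
    vec = [0] * len(vocab)
    vec[vocab[token]] = 1
    return vec

def tf_vector(sentence, vocab):
    """Term-frequency (TF) vector: count of each vocabulary token."""
    vec = [0] * len(vocab)
    for token in sentence.lower().split():
        vec[vocab[token]] += 1
    return vec

corpus = ["Time flies like an arrow", "Fruit flies like a banana"]
vocab = build_vocab(corpus)
print(tf_vector(corpus[0], vocab))  # → [1, 1, 1, 1, 1, 0, 0, 0]
```

TF-IDF, also listed in Chapter 1, simply reweights these counts by each token's inverse document frequency.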


Information provided by: Aladin

Table of Contents

Cover -- Copyright -- Table of Contents -- Preface -- Conventions Used in This Book -- Using Code Examples -- O’Reilly Safari -- How to Contact Us -- Acknowledgments -- Chapter 1. Introduction -- The Supervised Learning Paradigm -- Observation and Target Encoding -- One-Hot Representation -- TF Representation -- TF-IDF Representation -- Target Encoding -- Computational Graphs -- PyTorch Basics -- Installing PyTorch -- Creating Tensors -- Tensor Types and Size -- Tensor Operations -- Indexing, Slicing, and Joining -- Tensors and Computational Graphs -- CUDA Tensors -- Exercises -- Solutions -- Summary -- References -- Chapter 2. A Quick Tour of Traditional NLP -- Corpora, Tokens, and Types -- Unigrams, Bigrams, Trigrams, …, N-grams -- Lemmas and Stems -- Categorizing Sentences and Documents -- Categorizing Words: POS Tagging -- Categorizing Spans: Chunking and Named Entity Recognition -- Structure of Sentences -- Word Senses and Semantics -- Summary -- References -- Chapter 3. Foundational Components of Neural Networks -- The Perceptron: The Simplest Neural Network -- Activation Functions -- Sigmoid -- Tanh -- ReLU -- Softmax -- Loss Functions -- Mean Squared Error Loss -- Categorical Cross-Entropy Loss -- Binary Cross-Entropy Loss -- Diving Deep into Supervised Training -- Constructing Toy Data -- Putting It Together: Gradient-Based Supervised Learning -- Auxiliary Training Concepts -- Correctly Measuring Model Performance: Evaluation Metrics -- Correctly Measuring Model Performance: Splitting the Dataset -- Knowing When to Stop Training -- Finding the Right Hyperparameters -- Regularization -- Example: Classifying Sentiment of Restaurant Reviews -- The Yelp Review Dataset -- Understanding PyTorch’s Dataset Representation -- The Vocabulary, the Vectorizer, and the DataLoader -- A Perceptron Classifier -- The Training Routine -- Evaluation, Inference, and Inspection -- Summary -- References -- Chapter 4. 
Feed-Forward Networks for Natural Language Processing -- The Multilayer Perceptron -- A Simple Example: XOR -- Implementing MLPs in PyTorch -- Example: Surname Classification with an MLP -- The Surnames Dataset -- Vocabulary, Vectorizer, and DataLoader -- The SurnameClassifier Model -- The Training Routine -- Model Evaluation and Prediction -- Regularizing MLPs: Weight Regularization and Structural Regularization (or Dropout) -- Convolutional Neural Networks -- CNN Hyperparameters -- Implementing CNNs in PyTorch -- Example: Classifying Surnames by Using a CNN -- The SurnameDataset Class -- Vocabulary, Vectorizer, and DataLoader -- Reimplementing the SurnameClassifier with Convolutional Networks -- The Training Routine -- Model Evaluation and Prediction -- Miscellaneous Topics in CNNs -- Pooling -- Batch Normalization (BatchNorm) -- Network-in-Network Connections (1x1 Convolutions) -- Residual Connections/Residual Block -- Summary -- References -- Chapter 5. Embedding Words and Types -- Why Learn Embeddings? -- Efficiency of Embeddings -- Approaches to Learning Word Embeddings -- The Practical Use of Pretrained Word Embeddings -- Example: Learning the Continuous Bag of Words Embeddings -- The Frankenstein Dataset -- Vocabulary, Vectorizer, and DataLoader -- The CBOWClassifier Model -- The Training Routine -- Model Evaluation and Prediction -- Example: Transfer Learning Using Pretrained Embeddings for Document Classification -- The AG News Dataset -- Vocabulary, Vectorizer, and DataLoader -- The NewsClassifier Model -- The Training Routine -- Model Evaluation and Prediction -- Summary -- References -- Chapter 6. 
Sequence Modeling for Natural Language Processing -- Introduction to Recurrent Neural Networks -- Implementing an Elman RNN -- Example: Classifying Surname Nationality Using a Character RNN -- The SurnameDataset Class -- The Vectorization Data Structures -- The SurnameClassifier Model -- The Training Routine and Results -- Summary -- References -- Chapter 7. Intermediate Sequence Modeling for Natural Language Processing -- The Problem with Vanilla RNNs (or Elman RNNs) -- Gating as a Solution to a Vanilla RNN’s Challenges -- Example: A Character RNN for Generating Surnames -- The SurnameDataset Class -- The Vectorization Data Structures -- From the ElmanRNN to the GRU -- Model 1: The Unconditioned SurnameGenerationModel -- Model 2: The Conditioned SurnameGenerationModel -- The Training Routine and Results -- Tips and Tricks for Training Sequence Models -- References -- Chapter 8. Advanced Sequence Modeling for Natural Language Processing -- Sequence-to-Sequence Models, Encoder–Decoder Models, and Conditioned Generation -- Capturing More from a Sequence: Bidirectional Recurrent Models -- Capturing More from a Sequence: Attention -- Attention in Deep Neural Networks -- Evaluating Sequence Generation Models -- Example: Neural Machine Translation -- The Machine Translation Dataset -- A Vectorization Pipeline for NMT -- Encoding and Decoding in the NMT Model -- The Training Routine and Results -- Summary -- References -- Chapter 9. Classics, Frontiers, and Next Steps -- What Have We Learned so Far? -- Timeless Topics in NLP -- Dialogue and Interactive Systems -- Discourse -- Information Extraction and Text Mining -- Document Analysis and Retrieval -- Frontiers in NLP -- Design Patterns for Production NLP Systems -- Where Next? -- References -- Index -- About the Authors -- Colophon

New Arrivals in Related Fields

Negro, Alessandro (2026)
Dyer-Witheford, Nick (2026)