| 000 | 00000cam u2200205 a 4500 | |
| 001 | 000045972698 | |
| 005 | 20190228165251 | |
| 008 | 190228s2018 sz a b 001 0 eng d | |
| 020 | ▼a 9783319944623 (hbk.) | |
| 035 | ▼a (KERIS)BIB000014992880 | |
| 040 | ▼a 221031 ▼c 221031 ▼d 211009 | |
| 082 | 0 4 | ▼a 006.32 ▼2 23 |
| 084 | ▼a 006.32 ▼2 DDCK | |
| 090 | ▼a 006.32 ▼b A266n | |
| 100 | 1 | ▼a Aggarwal, Charu C. |
| 245 | 1 0 | ▼a Neural networks and deep learning : ▼b a textbook / ▼c Charu C. Aggarwal. |
| 260 | ▼a Cham : ▼b Springer, ▼c c2018. | |
| 300 | ▼a xviii, 497 p. : ▼b ill. (some col.) ; ▼c 26 cm. | |
| 504 | ▼a Includes bibliographical references and index. | |
| 945 | ▼a KLPA |
Holdings Information
| No. | Location | Call Number | Accession No. | Status | Due Date |
|---|---|---|---|---|---|
| 1 | Science Library / Sci-Info (2nd-floor stacks) | 006.32 A266n | 121248075 (3 loans) | Available | |
| 2 | Science Library / Sci-Info (2nd-floor stacks) | 006.32 A266n | 121248290 | Available | |
Contents Information
Book Description
This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning, which are emphasized so that the reader can understand the design principles behind neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book also discusses a wide range of applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications in many areas, such as recommender systems, machine translation, image captioning, image classification, reinforcement-learning-based gaming, and text analytics, are covered. The chapters of this book span three categories:
The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. An emphasis is placed in the first two chapters on understanding the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. These methods are studied together with recent feature engineering methods like word2vec.
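The claim that traditional models are special cases of neural networks can be made concrete: logistic regression is exactly a single neuron with a sigmoid activation trained on the log-loss. The sketch below (illustrative only, not from the book; the toy data and hyperparameters are assumptions) fits such a one-neuron "network" with plain gradient descent:

```python
import numpy as np

# Illustrative sketch: logistic regression as a single sigmoid neuron.
rng = np.random.default_rng(0)

# Toy linearly separable data: the label depends on the sign of x1 + x2.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # weights of the single neuron
b = 0.0           # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)            # forward pass: one neuron
    grad_w = X.T @ (p - y) / len(y)   # gradient of the mean log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # gradient-descent update
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

With more neurons and hidden layers, the same training loop generalizes to the deeper architectures treated in later chapters; that continuity is the point of the book's first two chapters.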
Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.
Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks. Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10.
The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.
About the Author
Charu C. Aggarwal (Author)
He is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He received his B.S. from IIT Kanpur in 1993 and his Ph.D. from MIT in 1996. He has worked extensively in the field of data mining, publishing more than 400 papers in conferences and journals and holding more than 80 patents. He has authored or edited 15 books, including a textbook on data mining and a comprehensive book on outlier analysis. Because of the commercial value of his patents, he has three times been designated a Master Inventor at IBM. He received an IBM Corporate Award (2003) for his work on bio-terrorist threat detection in data streams and an IBM Outstanding Innovation Award (2008) for his scientific contributions to privacy technology. He received two IBM Outstanding Technical Achievement Awards (2009, 2015) for his work on data streams and high-dimensional data, respectively. He received the EDBT 2014 Test of Time Award for his work on condensation-based privacy-preserving data mining. He is also a recipient of the IEEE ICDM Research Contributions Award (2015), one of the two highest awards for influential research contributions in the field of data mining. He served as general co-chair of the IEEE Big Data Conference (2014) and as program co-chair of the ACM CIKM Conference (2015), the IEEE ICDM Conference (2015), and the ACM KDD Conference (2016). From 2004 to 2008 he served as an associate editor of IEEE Transactions on Knowledge and Data Engineering. He is an associate editor of ACM Transactions on Knowledge Discovery from Data, an associate editor of IEEE Transactions on Big Data, an editor of the Data Mining and Knowledge Discovery Journal and ACM SIGKDD Explorations, and an associate editor of the Knowledge and Information Systems Journal. He serves on the advisory board of Lecture Notes on Social Networks, a publication by Springer, and has served as vice president of the SIAM Activity Group on Data Mining. He is a Fellow of SIAM, ACM, and IEEE for "contributions to knowledge discovery and data mining algorithms".
Table of Contents
1. An Introduction to Neural Networks
2. Machine Learning with Shallow Neural Networks
3. Training Deep Neural Networks
4. Teaching Deep Learners to Generalize
5. Radial Basis Function Networks
6. Restricted Boltzmann Machines
7. Recurrent Neural Networks
8. Convolutional Neural Networks
9. Deep Reinforcement Learning
10. Advanced Topics in Deep Learning
