| Tag | Ind | Content |
|---|---|---|
| 000 | | 00000cam u2200205 a 4500 |
| 001 | | 000045956960 |
| 005 | | 20181010114734 |
| 008 | | 181010s2014 nyua b 001 0 eng d |
| 010 | | ▼a 2014001779 |
| 020 | | ▼a 9781107057135 (hardback) |
| 020 | | ▼a 1107057132 (hardback) |
| 035 | | ▼a (KERIS)REF000017964916 |
| 040 | | ▼a DLC ▼b eng ▼c DLC ▼e rda ▼d DLC ▼d 211009 |
| 050 | 0 0 | ▼a Q325.5 ▼b .S475 2014 |
| 082 | 0 0 | ▼a 006.3/1 ▼2 23 |
| 084 | | ▼a 006.31 ▼2 DDCK |
| 090 | | ▼a 006.31 ▼b S528u |
| 100 | 1 | ▼a Shalev-Shwartz, Shai. |
| 245 | 1 0 | ▼a Understanding machine learning : ▼b from theory to algorithms / ▼c Shai Shalev-Shwartz, The Hebrew University, Jerusalem, Shai Ben-David, University of Waterloo, Canada. |
| 260 | | ▼a New York, NY, USA : ▼b Cambridge University Press, ▼c c2014. |
| 300 | | ▼a xvi, 397 p. : ▼b ill. ; ▼c 26 cm. |
| 504 | | ▼a Includes bibliographical references (p. 385-393) and index. |
| 505 | 8 | ▼a Machine generated contents note: 1. Introduction; Part I. Foundations: 2. A gentle start; 3. A formal learning model; 4. Learning via uniform convergence; 5. The bias-complexity tradeoff; 6. The VC-dimension; 7. Non-uniform learnability; 8. The runtime of learning; Part II. From Theory to Algorithms: 9. Linear predictors; 10. Boosting; 11. Model selection and validation; 12. Convex learning problems; 13. Regularization and stability; 14. Stochastic gradient descent; 15. Support vector machines; 16. Kernel methods; 17. Multiclass, ranking, and complex prediction problems; 18. Decision trees; 19. Nearest neighbor; 20. Neural networks; Part III. Additional Learning Models: 21. Online learning; 22. Clustering; 23. Dimensionality reduction; 24. Generative models; 25. Feature selection and generation; Part IV. Advanced Theory: 26. Rademacher complexities; 27. Covering numbers; 28. Proof of the fundamental theorem of learning theory; 29. Multiclass learnability; 30. Compression bounds; 31. PAC-Bayes; Appendix A. Technical lemmas; Appendix B. Measure concentration; Appendix C. Linear algebra. |
| 520 | | ▼a "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering"-- ▼c Provided by publisher. |
| 650 | 0 | ▼a Machine learning. |
| 650 | 0 | ▼a Algorithms. |
| 700 | 1 | ▼a Ben-David, Shai. |
| 945 | | ▼a KLPA |
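The rows above follow the MARC 21 display convention: a numeric tag, optional indicator digits, and subfields introduced by "▼" plus a one-letter code. As a minimal sketch of how one such display row can be split back into subfields (the helper name and sample input are illustrative assumptions for this page's ▼ notation, not part of any catalog API; real MARC transmission data uses the 0x1F subfield delimiter, for which a dedicated library such as pymarc is the usual tool):

```python
# Illustrative helper (assumption): split a display-form MARC field, where each
# subfield is marked by "▼" followed by a one-letter code, into (code, value) pairs.
def parse_subfields(field: str) -> list[tuple[str, str]]:
    pairs = []
    for chunk in field.split("▼")[1:]:  # text before the first marker is not a subfield
        code, value = chunk[0], chunk[1:].strip()
        pairs.append((code, value))
    return pairs

print(parse_subfields("▼a DLC ▼b eng ▼c DLC"))
# → [('a', 'DLC'), ('b', 'eng'), ('c', 'DLC')]
```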
Holdings Information

| No. | Location | Call Number | Registration No. | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Science Library / Sci-Info (2nd-floor stacks) | 006.31 S528u | 121246195 (checked out 2 times) | Available for loan | | | |
Contents Information

Book Description
Introduces machine learning and its algorithmic paradigms, explaining the principles behind automated learning approaches and the considerations underlying their usage.
Table of Contents

1. Introduction
Part I. Foundations
2. A gentle start
3. A formal learning model
4. Learning via uniform convergence
5. The bias-complexity tradeoff
6. The VC-dimension
7. Non-uniform learnability
8. The runtime of learning
Part II. From Theory to Algorithms
9. Linear predictors
10. Boosting
11. Model selection and validation
12. Convex learning problems
13. Regularization and stability
14. Stochastic gradient descent
15. Support vector machines
16. Kernel methods
17. Multiclass, ranking, and complex prediction problems
18. Decision trees
19. Nearest neighbor
20. Neural networks
Part III. Additional Learning Models
21. Online learning
22. Clustering
23. Dimensionality reduction
24. Generative models
25. Feature selection and generation
Part IV. Advanced Theory
26. Rademacher complexities
27. Covering numbers
28. Proof of the fundamental theorem of learning theory
29. Multiclass learnability
30. Compression bounds
31. PAC-Bayes
