| Tag | Ind | Content |
|---|---|---|
| 000 | | 00000nam u2200205 a 4500 |
| 001 | | 000046185636 |
| 005 | | 20241002102212 |
| 008 | | 240930s2024 si ad b 000 0 eng d |
| 020 | | ▼a 9789819950676 ▼q (hbk.) |
| 020 | | ▼z 9789819950683 ▼q (ebook) |
| 022 | | ▼z 2510-1773 (electronic) |
| 040 | | ▼a 211009 ▼c 211009 ▼d 211009 |
| 082 | 0 4 | ▼a 006.3/2 ▼2 23 |
| 084 | | ▼a 006.32 ▼2 DDCK |
| 090 | | ▼a 006.32 ▼b N4947 |
| 245 | 0 0 | ▼a Neural networks with model compression / ▼c Baochang Zhang, Tiancheng Wang, Sheng Xu, David Doermann. |
| 260 | | ▼a Singapore : ▼b Springer, ▼c c2024. |
| 300 | | ▼a ix, 260 p. : ▼b ill. (some col.), charts ; ▼c 25 cm. |
| 490 | 1 | ▼a Computational intelligence methods and applications, ▼x 2510-1765 |
| 504 | | ▼a Includes bibliographical references. |
| 505 | 0 | ▼a Chapter 1. Introduction -- Chapter 2. Binary Neural Networks -- Chapter 3. Binary Neural Architecture Search -- Chapter 4. Quantization of Neural Networks -- Chapter 5. Network Pruning -- Chapter 6. Applications. |
| 650 | 0 | ▼a Neural networks (Computer science) ▼x Mathematical models. |
| 650 | 0 | ▼a Data compression (Computer science) ▼x Mathematical models. |
| 650 | 0 | ▼a Geometric quantization. |
| 650 | 0 | ▼a Deep learning (Machine learning). |
| 700 | 1 | ▼a Zhang, Baochang, ▼c (Professor of artificial intelligence), ▼e author. |
| 700 | 1 | ▼a Wang, Tiancheng, ▼c (Artificial intelligence researcher), ▼e author. |
| 700 | 1 | ▼a Xu, Sheng, ▼c (Professor of automation science and electrical engineering), ▼e author. |
| 700 | 1 | ▼a Doermann, David S. ▼q (David Scott), ▼e author. |
| 830 | 0 | ▼a Computational intelligence methods and applications. |
| 945 | | ▼a ITMT |
Holdings Information
| No. | Location | Call Number | Accession No. | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Science Library / Sci-Info (2nd-floor stacks) | 006.32 N4947 | 121267496 (circulated once) | Available for loan | | | |
Content Information
Book Description
Deep learning has achieved impressive results in image classification, computer vision, and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices.

In this context, this book focuses on CNN compression and acceleration, topics of broad interest to the research community. It describes numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. The book therefore also introduces NAS, given its state-of-the-art performance in applications such as image classification and object detection.

Finally, the book describes extensive applications of compressed deep models in image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have basic knowledge of machine learning and deep learning to follow the methods described in this book.
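To make the compression methods named above concrete, here is a minimal illustrative sketch (not code from the book) of symmetric uniform parameter quantization: each floating-point weight is mapped to a small signed integer code plus one shared scale factor, so an 8-bit code replaces a 32-bit float.

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization: map floats to signed integer codes."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 127 for 8-bit codes
    scale = max(abs(w) for w in weights) / qmax   # shared step size
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

weights = [0.8, -0.5, 0.05, -1.0, 0.3]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)
# Every code fits in an int8, and the round-trip error per weight
# is at most half a quantization step (scale / 2).
```

With `bits=1` the codes collapse to signs, which is the degenerate case exploited by the binary neural networks covered in Chapters 2 and 3.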
Table of Contents
Chapter 1. Introduction.- Chapter 2. Binary Neural Networks.- Chapter 3. Binary Neural Architecture Search.- Chapter 4. Quantization of Neural Networks.- Chapter 5. Network Pruning.- Chapter 6. Applications.
