
Deep Learning - Hardware Design


Preface

In 2012, convolutional neural network (CNN) technology achieved a major breakthrough. Since then, deep learning has become widely integrated into daily life through automotive, retail, healthcare, and finance products. In 2016, the triumph of AlphaGo, enabled by reinforcement learning (RL), further proved that the AI revolution is set to transform society, much as the personal computer (1977), the internet (1994), and the smartphone (2007) did before it. Nonetheless, the revolution's innovative efforts have so far focused on software development. Major hardware challenges, such as the following, remain largely unaddressed:

•    Big input data
•    Deep neural network
•    Massive parallel processing
•    Reconfigurable network
•    Memory bottleneck
•    Intensive computation
•    Network pruning
•    Data sparsity

This book reviews various hardware designs, including the CPU, GPU, and NPU, and surveys the special features aimed at resolving the above challenges. New hardware can be derived from the following design approaches to improve performance and power efficiency:

•    Parallel architecture
•    Convolution optimization
•    In-memory computation
•    Near-memory architecture
•    Network optimization

The book is organized as follows:

•    Chapter 1: The neural network and its history
•    Chapter 2: The convolutional neural network model, its layer functions, and examples
•    Chapter 3: Parallel architectures, including the Intel CPU, Nvidia GPU, Google TPU, and Microsoft NPU
•    Chapter 4: Convolution optimization, covering the UCLA DCNN accelerator and the MIT Eyeriss DNN accelerator
•    Chapter 5: The GT Neurocube architecture and Stanford Tetris DNN accelerator, with in-memory computation using the Hybrid Memory Cube (HMC)
•    Chapter 6: Near-memory architecture, covering the ICT DaDianNao supercomputer and the UofT Cnvlutin DNN accelerator
•    Chapter 7: Energy-efficient inference engines for network pruning


Future revisions will incorporate new approaches to enhancing deep learning hardware design, along with additional topics, including:

•    Distributive graph theory
•    High speed arithmetic
•    3D neural processing
