DPU for Convolutional Neural Network v1.2
DPU IP Product Guide, PG338 (v1.2), March 26, 2019

Chapter 2: Product Specification

Hardware Architecture

The detailed hardware architecture of the DPU is shown in Figure 6. After start-up, the DPU fetches
instructions from off-chip memory and decodes them to drive the computing engine. The instructions
are generated by the DNNDK compiler, which performs substantial optimizations.
To improve efficiency, the abundant on-chip memory in Xilinx® devices is used to buffer input,
intermediate, and output data. Data is reused as much as possible to reduce off-chip memory
bandwidth, and the computing engine uses a deeply pipelined design. As in other accelerators,
the array of processing elements (PEs) takes full advantage of the fine-grained building blocks
available in Xilinx devices, such as multipliers, adders, and accumulators.
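
The dataflow described above can be made concrete with a small software model. The following C sketch is purely illustrative: it is not the DPU RTL or any Xilinx API, and every type and function name in it is hypothetical. It models the fetch-decode-dispatch loop and a PE built from the multiplier, adder, and accumulator primitives, with arrays standing in for the on-chip BRAM buffers.

/* Illustrative software model of the DPU dataflow described above.
 * NOT the DPU RTL or any Xilinx API; all names are hypothetical. */
#include <stdint.h>
#include <stddef.h>

/* A decoded instruction: which weight/activation tile to feed to the PE. */
typedef struct {
    size_t tile_offset;  /* offset into the on-chip (BRAM-like) buffers */
    size_t tile_len;     /* number of weight/activation pairs in the tile */
} dpu_insn_t;

/* One processing element: multiply (multiplier), add (adder), and keep a
 * running sum (accumulator) over a tile held in on-chip buffers. */
static int32_t pe_mac(const int8_t *w, const int8_t *a, size_t n, int32_t acc)
{
    for (size_t i = 0; i < n; ++i)
        acc += (int32_t)w[i] * (int32_t)a[i];
    return acc;
}

/* Fetch-decode-dispatch loop: instructions come from "off-chip" memory,
 * while weights and activations are reused from "on-chip" buffers,
 * modeling how data reuse reduces off-chip bandwidth. */
int32_t run_engine(const dpu_insn_t *insns, size_t n_insns,
                   const int8_t *bram_w, const int8_t *bram_a)
{
    int32_t acc = 0;
    for (size_t pc = 0; pc < n_insns; ++pc) {          /* fetch */
        dpu_insn_t in = insns[pc];                     /* decode */
        acc = pe_mac(bram_w + in.tile_offset,          /* dispatch to PE */
                     bram_a + in.tile_offset, in.tile_len, acc);
    }
    return acc;
}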

Figure 6: DPU Hardware Architecture
(The figure shows the Processing System (PS), with a CPU running DNNDK, connected over a bus to the Programmable Logic (PL). The PL side contains the instruction Fetcher, Decoder, and Dispatcher, a Data Mover, On-Chip BRAM with a BRAM Reader/Writer, and the PE array; a Memory Controller interfaces to Off-Chip Memory.)
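
As Figure 6 indicates, the DPU is driven by the host CPU running DNNDK. For orientation, the sketch below outlines a minimal host-side flow using the DNNDK N2Cube runtime calls (dpuOpen, dpuLoadKernel, dpuCreateTask, dpuRunTask) as described in the DNNDK documentation; the kernel name "netmodel" is a hypothetical placeholder for a network compiled by the DNNDK compiler, and error handling is omitted for brevity.

/* Minimal host-side sketch using the DNNDK (N2Cube) runtime API.
 * "netmodel" is a placeholder for a network that the DNNDK compiler
 * has compiled into DPU instructions. Error handling omitted. */
#include <dnndk/dnndk.h>

int main(void)
{
    dpuOpen();                                      /* attach to the DPU driver    */
    DPUKernel *kernel = dpuLoadKernel("netmodel");  /* load compiled DPU code/data */
    DPUTask   *task   = dpuCreateTask(kernel, T_MODE_NORMAL);

    /* ... set input tensors here ... */
    dpuRunTask(task);                               /* DPU fetches and executes the
                                                       compiled instructions       */
    /* ... read output tensors here ... */

    dpuDestroyTask(task);
    dpuDestroyKernel(kernel);
    dpuClose();
    return 0;
}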
