Chapter 1: Overview
Introduction
The Xilinx® Deep Learning Processor Unit (DPU) is a programmable engine dedicated to convolutional
neural networks. The unit contains a register configuration module, a data controller module, and a
convolution computing module. The DPU uses a specialized instruction set, which enables it to work
efficiently across many convolutional neural networks. Convolutional neural networks deployed on the
DPU include VGG, ResNet, GoogLeNet, YOLO, SSD, MobileNet, FPN, and others.
The DPU IP can be integrated as a block in the programmable logic (PL) of the selected Zynq®-7000
SoC and Zynq® UltraScale+™ MPSoC devices with direct connections to the processing system (PS). To
use the DPU, you must prepare the instructions and input image data in a memory region that the
DPU can access. DPU operation also requires the application processing unit (APU) to service
interrupts to coordinate data transfer.
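The host-side sequence described above can be sketched as follows. This is a minimal, hypothetical C example that assumes a memory-mapped register interface at a made-up base address, with illustrative register offsets (REG_INSTR_ADDR, REG_INPUT_ADDR, REG_START) that are not taken from this guide; in practice the DPU is driven through the Xilinx DNNDK runtime and its kernel driver rather than raw /dev/mem access.

/*
 * Minimal, hypothetical host-side flow for driving the DPU.
 * Register offsets, base address, and the raw /dev/mem access below are
 * illustrative assumptions only; real deployments use the Xilinx DNNDK
 * runtime and device driver instead of direct MMIO.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define DPU_BASE_ADDR   0xA0000000u  /* assumed PL base address of the DPU */
#define REG_INSTR_ADDR  0x10u        /* hypothetical: physical address of instruction buffer */
#define REG_INPUT_ADDR  0x14u        /* hypothetical: physical address of input image buffer */
#define REG_START       0x00u        /* hypothetical: start/trigger register */

int run_dpu_once(uint32_t instr_phys, uint32_t input_phys)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return -1;

    /* Map the DPU register block into user space. */
    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, DPU_BASE_ADDR);
    if (regs == MAP_FAILED) {
        close(fd);
        return -1;
    }

    /* Tell the DPU where the prepared instructions and image data live. */
    regs[REG_INSTR_ADDR / 4] = instr_phys;
    regs[REG_INPUT_ADDR / 4] = input_phys;

    /* Kick off execution; completion is signaled by an interrupt that
     * the APU services (for example, through a driver's blocking call). */
    regs[REG_START / 4] = 1;

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}

The key point the sketch illustrates is the division of labor: the CPU only stages buffers and configures a handful of registers, while the DPU fetches its own instructions and data from the shared memory.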
The top-level block diagram of the DPU is shown in Figure 1.
Figure 1: Top-Level Block Diagram (the Host CPU and RAM connect through a High Speed Data Tube to the DPU, which comprises an Instruction Fetch Unit, a High Performance Scheduler, a Hybrid Computing Array of processing elements (PEs), and a Global Memory Pool)