Mustang-V100-MX4
1.1 Introduction

Figure 1-1: Mustang-V100-MX4

The Mustang-V100-MX4 is a deep learning convolutional neural network acceleration
card for speeding up AI inference in a flexible and scalable way. Equipped with Intel®
Movidius™ Myriad™ X Vision Processing Units (VPUs), the Mustang-V100-MX4 PCIe card
can be added to an existing system, enabling high-performance computing without
costing a fortune. VPUs run AI inference quickly and are well suited to low-power
applications such as surveillance, retail and transportation. With its power efficiency
and high performance on dedicated DNN topologies, the card is ideal for AI edge
computing devices, reducing total power usage and providing longer duty time for
rechargeable edge computing equipment.
"Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit" is based on
convolutional neural networks (CNN), the toolkit extends workloads across Intel®
hardware and maximizes performance. It can optimize pre-trained deep learning model
such as Caffe, MXNET, Tensorflow into IR binary file then execute the inference engine
across Intel®-hardware heterogeneously such as CPU, GPU, Intel® Movidius™ Myriad X
VPU, and FPGA.
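
For reference, the sketch below illustrates how an IR model could be loaded and run on
the card through the OpenVINO™ inference engine Python API. It assumes an OpenVINO™
2021.x-style installation on the host; the model paths, input data and device name
("HDDL" for multi-VPU cards such as this one, "MYRIAD" for a single Myriad X device)
are illustrative placeholders, not values taken from this manual.

    # Minimal OpenVINO inference sketch (assumes the pre-2022 Python API).
    # Model paths, device name and input data are placeholders.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()

    # Read an IR model produced by the Model Optimizer (model.xml + model.bin).
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Load the network onto the accelerator; "HDDL" targets multi-VPU cards,
    # "MYRIAD" a single Myriad X device, "CPU"/"GPU" other Intel hardware.
    exec_net = ie.load_network(network=net, device_name="HDDL")

    # Prepare a dummy input matching the network's expected shape.
    input_name = next(iter(net.input_info))
    output_name = next(iter(net.outputs))
    shape = net.input_info[input_name].input_data.shape
    dummy_input = np.random.rand(*shape).astype(np.float32)

    # Run synchronous inference and read the result.
    result = exec_net.infer(inputs={input_name: dummy_input})
    print(result[output_name].shape)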