Run the Image Classification Demo; Set Up a Neural Network Model - IEI Technology Mustang-V100-MX4 User Manual
1. Open a command prompt window.
2. Go to the Inference Engine demo directory:
   cd C:\Intel\computer_vision_sdk_<version>\deployment_tools\demo\
3. Run the demos by following the instructions in the next two sections.
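For example, a session in the Command Prompt might look like the following. The batch script name is an assumption based on the demo scripts typically shipped in this directory, so list the directory contents to confirm it on your installation:

   cd C:\Intel\computer_vision_sdk_<version>\deployment_tools\demo\
   demo_squeezenet_download_convert_run.bat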

5.5.1 Run the Image Classification Demo

5.5.2 Set Up a Neural Network Model

If you are running inference on hardware other than VPU-based devices, you already have
the required FP32 neural network model converted to an optimized Intermediate
Representation (IR). Follow the steps in the Run the Sample Application section to run the
sample.
If you want to run inference on a VPU device (Intel® Movidius™ Neural Compute Stick,
Intel® Neural Compute Stick 2, or Intel® Vision Accelerator Design with Intel® Movidius™
VPUs), you need an FP16 version of the model, which you will set up in this section.
To convert the FP32 model to an FP16 IR suitable for VPU-based hardware accelerators,
follow the steps below:
1. Create a directory for the FP16 SqueezeNet model, for example:
   C:\Users\<username>\Documents\squeezenet1.1_FP16
2. Open the Command Prompt and run the Model Optimizer to convert the FP32
   SqueezeNet Caffe* model delivered with the installation into an optimized FP16
   Intermediate Representation (IR):
   python3 "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo.py" --input_model "C:\Users\<username>\Documents\Intel\OpenVINO\openvino_models
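Once the FP16 IR (an .xml and .bin pair) has been generated, it can be loaded onto the VPU for inference from Python. Below is a minimal sketch, assuming the Inference Engine Python API of later OpenVINO releases (IECore and read_network; older Computer Vision SDK versions expose IEPlugin and IENetwork instead) and illustrative paths:

   import numpy as np
   from openvino.inference_engine import IECore

   ie = IECore()
   # Read the FP16 IR produced by the Model Optimizer step above.
   net = ie.read_network(
       model=r"C:\Users\<username>\Documents\squeezenet1.1_FP16\squeezenet1.1.xml",
       weights=r"C:\Users\<username>\Documents\squeezenet1.1_FP16\squeezenet1.1.bin")

   # "HDDL" targets Intel Vision Accelerator Design cards such as the Mustang-V100;
   # use "MYRIAD" for a single Neural Compute Stick.
   exec_net = ie.load_network(network=net, device_name="HDDL")

   input_name = next(iter(net.input_info))
   n, c, h, w = net.input_info[input_name].input_data.shape

   # Placeholder input; a real application would load an image and preprocess
   # it into NCHW layout here.
   image = np.zeros((n, c, h, w), dtype=np.float32)
   result = exec_net.infer({input_name: image})
   print(list(result.keys()))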
