The Model Optimizer produces an Intermediate Representation (IR) of the network as a pair of files:
- .xml: Describes the network topology
- .bin: Contains the weights and biases binary data
The Inference Engine reads, loads, and infers the IR files, using a common API across CPU, GPU, and VPU hardware.
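For illustration, a minimal Python sketch of this workflow, assuming the openvino.inference_engine API (module, class, and device names vary between toolkit releases, so verify them against your installed version):

    from openvino.inference_engine import IECore  # assumed API; older releases use IENetwork/IEPlugin instead

    ie = IECore()
    # Read the IR pair produced by the Model Optimizer
    net = ie.read_network(model="model.xml", weights="model.bin")
    # Load the network onto the target device; "HDDL" is an assumed name for multi-VPU cards
    # such as the Mustang-V100-MX4 ("CPU", "GPU", and "MYRIAD" are other common device names)
    exec_net = ie.load_network(network=net, device_name="HDDL")
    # exec_net.infer({...}) then runs inference with a dict mapping input layer names to input data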
The Model Optimizer is a Python*-based command line tool (mo.py), which is located in
C:\Intel\computer_vision_sdk_<version>\deployment_tools\model_optimizer, where
<version> is the version of the Intel® Distribution of OpenVINO™ toolkit that you installed.
Use this tool on models trained with popular deep learning frameworks such as Caffe,
TensorFlow, MXNet, and ONNX to convert them to an optimized IR format that the
Inference Engine can use.
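For example, a Caffe model can be converted from a command prompt with a command along the following lines (the model file name and output directory are placeholders; --data_type FP16 is commonly used for VPU targets, and the exact options depend on your model and toolkit version):

    python mo.py --input_model <model_name>.caffemodel --data_type FP16 --output_dir C:\ir_models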
This section explains how to use scripts to configure the Model Optimizer either for all of
the supported frameworks at the same time or for individual frameworks. If you want to
manually configure the Model Optimizer instead of using scripts, see the Using Manual
Configuration Process section in the Model Optimizer Developer Guide.
For more information about the Model Optimizer, see the Model Optimizer Developer Guide.

5.6.4.1 Model Optimizer Configuration Steps

You can configure the Model Optimizer either for all supported frameworks at once or for
one framework at a time. Choose the option that best suits your needs. If you see error
messages, make sure you installed all dependencies.
Note: These steps use a command prompt to make sure you see error messages.
In the steps below:
- Replace <version> with the version number of your Intel® Distribution of OpenVINO™
toolkit
- If you did not install the Intel® Distribution of OpenVINO™ toolkit to the default installation
directory, replace \Intel\ with the directory in which you installed the software.
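A sketch of the typical configuration commands follows; the script names assume the default install_prerequisites directory of the Model Optimizer, so verify them against your installation:
1. Open a command prompt and go to the Model Optimizer prerequisites directory:
    cd C:\Intel\computer_vision_sdk_<version>\deployment_tools\model_optimizer\install_prerequisites
2. To configure the Model Optimizer for all supported frameworks at once, type:
    install_prerequisites.bat
3. To configure it for a single framework instead, run the corresponding script, for example
    install_prerequisites_caffe.bat
   for Caffe, or install_prerequisites_tf.bat for TensorFlow.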