Configure The Model Optimizer - IEI Technology Mustang-V100-MX4 User Manual

(Optional) The Intel Distribution of OpenVINO toolkit environment variables are removed
when you close the shell. If you prefer, you can set the environment variables
permanently as follows:
1. Open the .bashrc file in <user_directory>:
vi <user_directory>/.bashrc
2. Add this line to the end of the file:
source /opt/intel/computer_vision_sdk/bin/setupvars.sh
3. Save and close the file: press the Esc key and type :wq.
4. To test your change, open a new terminal. You should see the message [setupvars.sh]
OpenVINO environment initialized.
The environment variables are set. Continue to the next section to configure the Model
Optimizer.
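The edit in steps 1 through 3 can also be made non-interactively. The sketch below appends the same source line to .bashrc; the duplicate check is an addition for safety, not part of the manual's procedure:

```shell
# Append the OpenVINO setup line to .bashrc without opening vi.
# The grep check is an added safeguard so reruns do not duplicate the line.
BASHRC="$HOME/.bashrc"
LINE='source /opt/intel/computer_vision_sdk/bin/setupvars.sh'
touch "$BASHRC"
grep -qxF "$LINE" "$BASHRC" || echo "$LINE" >> "$BASHRC"
```

Opening a new terminal after this change should still print the initialization message from step 4.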

4.6.4 Configure the Model Optimizer

Important: This section is required. You must configure the Model Optimizer for at least
one framework. The Model Optimizer will fail if you do not complete the steps in this
section.
The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You
cannot perform inference on a trained model without first running it through the Model
Optimizer. When you run a pre-trained model through the Model Optimizer, the output is
an Intermediate Representation (IR) of the network. The IR is a pair of files that together
describe the whole model:
.xml: Describes the network topology
.bin: Contains the weights and biases binary data
The Inference Engine reads, loads, and infers the IR files, using a common API across
CPU, GPU, and VPU hardware.
The Model Optimizer is a Python*-based command line tool (mo.py), which is located in
/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer.
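As an illustration, a typical mo.py invocation converts a trained model into the IR pair. The model file and output directory below are placeholders, not files shipped with the toolkit, and the existence check keeps the sketch safe to run on a machine where the toolkit is not installed:

```shell
# Hypothetical example: convert a Caffe model to IR (.xml + .bin) with mo.py.
# ~/models/squeezenet1.1.caffemodel and ~/ir are placeholder paths.
MO_DIR=/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
if [ -f "$MO_DIR/mo.py" ]; then
    python3 "$MO_DIR/mo.py" \
        --input_model ~/models/squeezenet1.1.caffemodel \
        --output_dir ~/ir
else
    echo "Model Optimizer not found at $MO_DIR"
fi
```

The --output_dir directory receives the .xml topology file and .bin weights file described above.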
Page 35

This manual is also suitable for:

Mustang-v100-mx4-r10
