
Use the Model Optimizer on models trained with popular deep learning frameworks such as
Caffe*, TensorFlow*, MXNet*, and ONNX* to convert them to an optimized Intermediate
Representation (IR) format that the Inference Engine can use.
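For example, converting a trained model produces the .xml and .bin IR files. A minimal
sketch of such a conversion, assuming the default installation path and a hypothetical
Caffe model file named squeezenet.caffemodel:
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py --input_model squeezenet.caffemodel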
This section explains how to use scripts to configure the Model Optimizer either for all of
the supported frameworks at the same time or for individual frameworks. If you want to
manually configure the Model Optimizer instead of using scripts, see the Configuration
Process section in the Model Optimizer Developer Guide.
For more information about the Model Optimizer, see the Model Optimizer Developer
Guide.
Model Optimizer configuration steps
You can either configure the Model Optimizer for all supported frameworks at once, or for
one framework at a time. Choose the option that best suits your needs. If you see error
messages, make sure you installed all dependencies.
Note: If you did not install the Intel Distribution of OpenVINO toolkit to the default
installation directory, replace /opt/intel/ with the directory where you installed the software.
Option 1: Configure the Model Optimizer for all supported frameworks at the same time:
1. Go to the Model Optimizer prerequisites directory:
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
2. Run the script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi*,
and ONNX:
sudo ./install_prerequisites.sh
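If the script completes without errors, the Model Optimizer is configured for all five
frameworks. As a quick check (assuming the default installation path), you can confirm
that the Model Optimizer itself runs from this directory:
python3 ../mo.py --help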
Option 2: Configure the Model Optimizer for each framework separately:
1. Go to the Model Optimizer prerequisites directory:
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
2. Run the script for your model framework. You can run more than one script:
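For example, to configure only Caffe and TensorFlow support, you could run the
framework-specific scripts (the script names below assume the standard names shipped in
the install_prerequisites directory; verify the exact file names with ls in your installation):
sudo ./install_prerequisites_caffe.sh
sudo ./install_prerequisites_tf.sh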
