ngraph-onnx

nGraph Backend for ONNX.

This repository contains tools to run ONNX models using the Intel nGraph library as a backend.

Installation

Follow our build instructions to install nGraph-ONNX from source.

Usage example

Importing an ONNX model

You can download models from the ONNX model zoo. For example, ResNet-50:

$ wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
$ tar -xzvf resnet50.tar.gz
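
Optionally, you can validate the downloaded file with ONNX's own checker before importing it (onnx.checker ships with the onnx package); a minimal sketch:

# Verify that the downloaded file is a well-formed ONNX model
>>> import onnx
>>> model = onnx.load('resnet50/model.onnx')
>>> onnx.checker.check_model(model)  # raises onnx.checker.ValidationError if the model is malformed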

Use the following Python commands to convert the downloaded model to an nGraph model:

# Import ONNX and load an ONNX file from disk
>>> import onnx
>>> onnx_protobuf = onnx.load('resnet50/model.onnx')

# Convert ONNX model to an ngraph model
>>> from ngraph_onnx.onnx_importer.importer import import_onnx_model
>>> ng_function = import_onnx_model(onnx_protobuf)

# The importer returns an nGraph Function representing the model:
>>> print(ng_function)
<Function: 'resnet50' ([1, 1000])>

This creates an nGraph Function object, which can be used to execute a computation on a chosen backend.
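
Before running it, you may want to inspect the imported model. The following is a minimal sketch assuming the nGraph Python Function API of the time exposed get_parameters() and get_output_shape(); verify these method names against your installed version:

# Inspect the imported function (method names are assumptions; check your nGraph version)
>>> ng_function.get_parameters()     # list of the graph's input Parameter nodes
>>> ng_function.get_output_shape(0)  # shape of the first graph output, e.g. [1, 1000]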

Running a computation

After importing an ONNX model, you will have an nGraph Function object. Now you can create an nGraph Runtime backend and use it to compile your Function to a backend-specific Computation object. Finally, you can execute your model by calling the created Computation object with input data.

# Using an ngraph runtime (CPU backend), create a callable computation object
>>> import ngraph as ng
>>> runtime = ng.runtime(backend_name='CPU')
>>> resnet_on_cpu = runtime.computation(ng_function)

# Load an image (or create a mock as in this example)
>>> import numpy as np
>>> picture = np.ones([1, 3, 224, 224], dtype=np.float32)

# Run computation on the picture:
>>> resnet_on_cpu(picture)
[array([[2.16105007e-04, 5.58412226e-04, 9.70510227e-05, 5.76671446e-05,
         7.45318757e-05, 4.80892748e-04, 5.67404088e-04, 9.48728994e-05,
         ...
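
The computation returns a list with one array per graph output; here, a single [1, 1000] array of raw class scores. To turn the scores into a prediction, take the index of the highest one. A minimal NumPy sketch (mapping indices to human-readable labels is omitted):

# Pick the class with the highest score
>>> output = resnet_on_cpu(picture)[0]
>>> int(np.argmax(output, axis=1)[0])  # index into the model's 1000 output classes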