OpenVINO resnet_50_v1 python example with VNNI #1582

Merged
merged 13 commits on Aug 30, 2019
Changes from 8 commits
52 changes: 52 additions & 0 deletions pyzoo/zoo/examples/vnni/openvino/README.md
@@ -0,0 +1,52 @@
## OpenVINO ResNet_v1_50 example
This example illustrates how to use a pre-trained, OpenVINO-optimized model to run inference in Analytics Zoo with the OpenVINO toolkit as the backend.

Contributor

One blank line is enough

## Install or download Analytics Zoo
Follow the instructions [here](https://analytics-zoo.github.io/master/#PythonUserGuide/install/) to install analytics-zoo via __pip__ or __download the prebuilt package__.
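
For example, the pip route is a single command (the package name below is the one used by the install guide; check the guide for supported Python and Spark versions):

```bash
pip install analytics-zoo
```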

## PrepareOpenVINOResNet
TensorFlow models cannot be directly loaded by OpenVINO. They should first be converted to an OpenVINO optimized model or an int8 optimized model. You can use PrepareOpenVINOResNet or the [OpenVINO toolkit](https://docs.openvinotoolkit.org/2018_R5/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html) to finish this job. Here, we focus on PrepareOpenVINOResNet.
Contributor

and -> or?
focused -> focus?


Download [TensorFlow ResNet50_v1](http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz), the [validation image set](https://s3-ap-southeast-1.amazonaws.com/analytics-zoo-models/openvino/val_bmp_32.tar) and the [OpenCVLibs](https://s3-ap-southeast-1.amazonaws.com/analytics-zoo-models/openvino/opencv_4.0.0_ubuntu_lib.tar), then extract the files from these packages.
Contributor

I think we can just add a link to original document and remove model convert section.
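
For reference, a possible way to fetch and extract the three packages with standard tools (destination directories are up to you):

```bash
# Download the pre-trained checkpoint, the validation set and the OpenCV libs,
# then unpack each archive into the current directory.
wget http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz
wget https://s3-ap-southeast-1.amazonaws.com/analytics-zoo-models/openvino/val_bmp_32.tar
wget https://s3-ap-southeast-1.amazonaws.com/analytics-zoo-models/openvino/opencv_4.0.0_ubuntu_lib.tar
tar -xzf resnet_v1_50_2016_08_28.tar.gz
tar -xf val_bmp_32.tar
tar -xf opencv_4.0.0_ubuntu_lib.tar
```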


```bash
export SPARK_HOME=the root directory of Spark
export ANALYTICS_ZOO_HOME=the folder where you extract the downloaded Analytics Zoo zip package
export ANALYTICS_ZOO_JAR=`find ${ANALYTICS_ZOO_HOME}/lib -type f -name "analytics-zoo*jar-with-dependencies.jar"`

MODEL_PATH=dir of ResNet-50 checkpoint, i.e., resnet_v1_50.ckpt
VALIDATION=dir of validation images and val.txt, i.e., val_bmp_32
OPENCVLIBS=dir of OpenCV libs

java -cp ${ANALYTICS_ZOO_JAR}:${SPARK_HOME}/jars/* \
com.intel.analytics.zoo.examples.vnni.openvino.PrepareOpenVINOResNet \
-m ${MODEL_PATH} -v ${VALIDATION} -l ${OPENCVLIBS}
```

__Options:__
- `-m` `--model`: The directory of the ResNet-50 checkpoint.
- `-b` `--batchSize`: The batch size of input data. Default is 4.
- `-l` `--openCVLibs`: The directory of the OpenCV libs.
- `-v` `--validationFilePath`: The directory of the validation images and val.txt.
- `--subset`: Number of images in the validation file path. Note that it should be aligned with val.txt.


__Sample Result files in MODEL_PATH__:
```
resnet_v1_50.ckpt
resnet_v1_50_inference_graph.bin
resnet_v1_50_inference_graph-calibrated.bin
resnet_v1_50_inference_graph-calibrated.xml
resnet_v1_50_inference_graph.mapping
resnet_v1_50_inference_graph.xml
```

Among them, `resnet_v1_50_inference_graph.xml` and `resnet_v1_50_inference_graph.bin` are the OpenVINO optimized ResNet_v1_50 model and weights, while `resnet_v1_50_inference_graph-calibrated.xml` and `resnet_v1_50_inference_graph-calibrated.bin` are the OpenVINO int8 optimized ResNet_v1_50 model and weights. Both can be loaded by OpenVINO or Analytics Zoo.
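
For instance, loading the int8-calibrated model through Analytics Zoo looks like the sketch below (the path is a placeholder for the MODEL_PATH above; the same `load_openvino` call is used in `inference.py` later in this PR):

```python
from zoo.pipeline.inference import InferenceModel

model = InferenceModel()
# Point to the calibrated IR; the .bin weight file sits next to the .xml file.
model.load_openvino("MODEL_PATH/resnet_v1_50_inference_graph-calibrated.xml",
                    weight_path="MODEL_PATH/resnet_v1_50_inference_graph-calibrated.bin",
                    batch_size=4)
```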
Contributor

amount or among?


## Options
* `--image` The path where the images are stored. It can be either a folder or an image path. Local file system, HDFS and Amazon S3 are supported.
* `--model` The path to the OpenVINO optimized ResNet_v1_50 model, i.e., the `.xml` file generated above.
* `--partition_num` The number of partitions.
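
A possible local run command is sketched below, reusing the MODEL_PATH and VALIDATION directories from the preparation step (an illustrative command only; adjust paths to your setup):

```bash
python inference.py \
    --image ${VALIDATION} \
    --model ${MODEL_PATH}/resnet_v1_50_inference_graph-calibrated.xml \
    --partition_num 4
```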

## Results
We print the inference result of each batch.
Contributor

Pls add detailed running command for this example. :)

Contributor

Any sample output to put here?

62 changes: 62 additions & 0 deletions pyzoo/zoo/examples/vnni/openvino/inference.py
@@ -0,0 +1,62 @@
import sys
from optparse import OptionParser

import numpy as np

from zoo.pipeline.inference import InferenceModel
from zoo.common.nncontext import init_nncontext
from zoo.feature.image import *
from zoo.pipeline.nnframes import *

batch_size = 4
Contributor

global variables should be in upper case



def predict(model_path, img_path, partition_num):
    model = InferenceModel()
    model.load_openvino(model_path,
                        weight_path=model_path[:model_path.rindex(".")] + ".bin",
                        batch_size=batch_size)
    sc = init_nncontext("OpenVINO Object Detection Inference Example")
    infer_transformer = ChainedPreprocessing([ImageBytesToMat(),
                                              ImageResize(256, 256),
                                              ImageCenterCrop(224, 224),
                                              ImageMatToTensor(format="NHWC", to_RGB=True)])
    image_set = ImageSet.read(img_path, sc, partition_num).\
        transform(infer_transformer).get_image().collect()
    image_set = np.expand_dims(image_set, axis=1)

    if len(image_set) % batch_size == 0:
        a = 0
Contributor

Pls use more meaningful variable name.

        size = batch_size
    else:
        a = 1
        size = len(image_set) % batch_size
    for i in range(len(image_set) // batch_size + a):
        index = i * batch_size
        batch = image_set[index]
        for j in range(index + 1, index + size):
Contributor

Size used here is not correct. For example,

batch_size = 4
len(image_set)==11
size = 11 % 4 = 3

Then, you will only take 3 images for each batch.

Contributor

I think you can use min(index + batch_size, len(image_set)) in this place, so that you don't need `a` and `size`.
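
A rough sketch of that suggestion (hypothetical, not part of this PR): slicing up to `min(index + batch_size, len(image_set))` removes the need for `a` and `size` and keeps the last, smaller batch correct.

```python
for index in range(0, len(image_set), batch_size):
    # Stack everything from index up to the end of this batch; the last
    # batch may hold fewer than batch_size images.
    batch = np.vstack(image_set[index:min(index + batch_size, len(image_set))])
    batch = np.expand_dims(batch, axis=0)
    predictions = model.predict(batch)
```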

            batch = np.vstack((batch, image_set[j]))
        batch = np.expand_dims(batch, axis=0)

        predictions = model.predict(batch)

        result = predictions[0]

        print("batch_" + str(i))
        for r in result:
            output = {}
            max_index = np.argmax(r)
            output["Top-1"] = str(max_index)
            print("* Predict result " + str(output))


if __name__ == "__main__":
    parser = OptionParser()
    parser.add_option("--image", type=str, dest="img_path",
                      help="The path where the images are stored, "
                           "can be either a folder or an image path")
    parser.add_option("--model", type=str, dest="model_path",
                      help="Zoo Model Path")
    parser.add_option("--partition_num", type=int, dest="partition_num", default=4,
Contributor

I don't think partition_num is necessary. Because current example is a local example. :)

help="The number of partitions")

(options, args) = parser.parse_args(sys.argv)

predict(options.model_path, options.img_path, options.partition_num)