Support per-test tolerances for ONNX tests #11775

Merged (18 commits) on Jun 14, 2022
28 changes: 18 additions & 10 deletions docs/How_To_Update_ONNX_Dev_Notes.md
@@ -1,9 +1,11 @@
This is a note only for ONNX Runtime developers.
# How to update ONNX

It's very often, you need to update the ONNX submodule to a newer version in the upstream. Please follow the steps below, don't miss any!
This note is only for ONNX Runtime developers.

1. Update the ONNX subfolder
```
If you need to update the ONNX submodule to a different version, follow the steps below.

1. Update the ONNX submodule
```sh
cd cmake/external/onnx
git remote update
git reset --hard <commit_id>
@@ -15,22 +17,28 @@ git add onnx
1. Update [cgmanifests/generated/cgmanifest.json](/cgmanifests/generated/cgmanifest.json).
This file should be generated. See [cgmanifests/README](/cgmanifests/README.md) for instructions.

1. Update [tools/ci_build/github/linux/docker/scripts/requirements.txt](/tools/ci_build/github/linux/docker/scripts/requirements.txt) and [tools/ci_build/github/linux/docker/scripts/manylinux/requirements.txt](/tools/ci_build/github/linux/docker/scripts/manylinux/requirements.txt).
Update the commit hash for `git+http://github.com/onnx/onnx.git@targetonnxcommithash#egg=onnx`.
1. Update [tools/ci_build/github/linux/docker/scripts/requirements.txt](/tools/ci_build/github/linux/docker/scripts/requirements.txt)
and [tools/ci_build/github/linux/docker/scripts/manylinux/requirements.txt](/tools/ci_build/github/linux/docker/scripts/manylinux/requirements.txt).
Update the commit hash for `git+http://github.com/onnx/onnx.git@targetonnxcommithash#egg=onnx`.

1. If there is any change to `cmake/external/onnx/onnx/*.in.proto`, you need to regenerate OnnxMl.cs. [Building onnxruntime with Nuget](https://onnxruntime.ai/docs/build/inferencing.html#build-nuget-packages) will do this.
1. If there is any change to `cmake/external/onnx/onnx/*.in.proto`, you need to regenerate OnnxMl.cs.
[Building onnxruntime with Nuget](https://onnxruntime.ai/docs/build/inferencing.html#build-nuget-packages) will do
this.

1. If you are updating ONNX from a released tag to a new commit, please tell Changming deploying the new test data along with other test models to our CI build machines. This is to ensure that our tests cover every ONNX opset.
1. If you are updating ONNX from a released tag to a new commit, please ask Changming (@snnn) to deploy the new test
data along with other test models to our CI build machines. This is to ensure that our tests cover every ONNX opset.

1. Send your PR, and **manually** queue a build for every packaging pipeline for your branch.

1. If there is a build failure in stage "Check out of dated documents" in WebAssembly CI pipeline, update ONNX Runtime Web WebGL operator support document:
1. If there is a build failure in stage "Check out of dated documents" in WebAssembly CI pipeline, update ONNX Runtime
Web WebGL operator support document:
- Make sure Node.js is installed (see [Prerequisites](../js/README.md#Prerequisites) for instructions).
- Follow step 1 in [js/Build](../js/README.md#Build-2) to install dependencies.
- Follow the instructions in [Generate document](../js/README.md#Generating-Document) to update the document. Commit changes applied to file `docs/operators.md`.

1. Usually there would be some unitest failures, because you introduced new test cases. Then you may need to update
1. Usually some newly introduced tests will fail. Then you may need to update
- [onnxruntime/test/onnx/main.cc](/onnxruntime/test/onnx/main.cc)
- [onnxruntime/test/providers/cpu/model_tests.cc](/onnxruntime/test/providers/cpu/model_tests.cc)
- [csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs](/csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs)
- [onnxruntime/test/testdata/onnx_backend_test_series_filters.jsonc](/onnxruntime/test/testdata/onnx_backend_test_series_filters.jsonc)
- [onnxruntime/test/testdata/onnx_backend_test_series_overrides.jsonc](/onnxruntime/test/testdata/onnx_backend_test_series_overrides.jsonc)
16 changes: 8 additions & 8 deletions js/node/test/test-runner.ts
@@ -2,10 +2,10 @@
// Licensed under the MIT License.

import * as fs from 'fs-extra';
import {InferenceSession, Tensor} from 'onnxruntime-common';
import { InferenceSession, Tensor } from 'onnxruntime-common';
import * as path from 'path';

import {assertTensorEqual, loadTensorFromFile, shouldSkipModel} from './test-utils';
import { atol, assertTensorEqual, loadTensorFromFile, rtol, shouldSkipModel } from './test-utils';

export function run(testDataFolder: string): void {
const models = fs.readdirSync(testDataFolder);
@@ -14,7 +14,7 @@ export function run(testDataFolder: string): void {
// read each model folder
const modelFolder = path.join(testDataFolder, model);
let modelPath: string;
const modelTestCases: Array<[Array<Tensor|undefined>, Array<Tensor|undefined>]> = [];
const modelTestCases: Array<[Array<Tensor | undefined>, Array<Tensor | undefined>]> = [];
for (const currentFile of fs.readdirSync(modelFolder)) {
const currentPath = path.join(modelFolder, currentFile);
const stat = fs.lstatSync(currentPath);
@@ -24,14 +24,14 @@ export function run(testDataFolder: string): void {
modelPath = currentPath;
}
} else if (stat.isDirectory()) {
const inputs: Array<Tensor|undefined> = [];
const outputs: Array<Tensor|undefined> = [];
const inputs: Array<Tensor | undefined> = [];
const outputs: Array<Tensor | undefined> = [];
for (const dataFile of fs.readdirSync(currentPath)) {
const dataFileFullPath = path.join(currentPath, dataFile);
const ext = path.extname(dataFile);

if (ext.toLowerCase() === '.pb') {
let tensor: Tensor|undefined;
let tensor: Tensor | undefined;
try {
tensor = loadTensorFromFile(dataFileFullPath);
} catch (e) {
@@ -51,7 +51,7 @@ export function run(testDataFolder: string): void {

// add cases
describe(`${model}`, () => {
let session: InferenceSession|null = null;
let session: InferenceSession | null = null;
let skipModel = shouldSkipModel(model, ['cpu']);
if (!skipModel) {
before(async () => {
@@ -98,7 +98,7 @@ export function run(testDataFolder: string): void {

let j = 0;
for (const name of session.outputNames) {
assertTensorEqual(outputs[name], expectedOutputs[j++]!);
assertTensorEqual(outputs[name], expectedOutputs[j++]!, atol(model), rtol(model));
}
} else {
throw new TypeError('session is null');
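The runner change above threads per-model tolerances (`atol(model)`, `rtol(model)`) into every output comparison. A minimal self-contained sketch of that pattern follows; the model names, tolerance values, and plain-array outputs are illustrative stand-ins, not the repository's real harness or override data:

```typescript
// Sketch of a per-model tolerance comparison loop. Real code compares
// onnxruntime Tensor outputs; here outputs are plain number arrays.
type Outputs = { [name: string]: number[] };

// Hypothetical tolerance lookups with a loose override for one model.
function atol(model: string): number { return model === 'fuzzy_model' ? 1e-2 : 1e-4; }
function rtol(model: string): number { return 1e-6; }

function compareOutputs(model: string, actual: Outputs, expected: Outputs): boolean {
  for (const name of Object.keys(expected)) {
    const a = actual[name];
    const e = expected[name];
    if (a === undefined || a.length !== e.length) return false;
    for (let i = 0; i < e.length; i++) {
      // Same shape of check as assertFloatEqual: absolute tolerance first,
      // then a relative check expressed as a ratio bound.
      const absOk = Math.abs(a[i] - e[i]) < atol(model);
      const ratio = 1 + rtol(model);
      const relOk = a[i] !== 0 && e[i] !== 0 && a[i] * e[i] > 0 &&
          a[i] / e[i] < ratio && e[i] / a[i] < ratio;
      if (!absOk && !relOk) return false;
    }
  }
  return true;
}
```

With this shape, loosening one flaky model's tolerance is a data change (an override entry) rather than a code change in every comparison site.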
55 changes: 38 additions & 17 deletions js/node/test/test-utils.ts
@@ -3,9 +3,9 @@

import assert from 'assert';
import * as fs from 'fs-extra';
import {jsonc} from 'jsonc';
import { jsonc } from 'jsonc';
import * as onnx_proto from 'onnx-proto';
import {InferenceSession, Tensor} from 'onnxruntime-common';
import { InferenceSession, Tensor } from 'onnxruntime-common';
import * as path from 'path';

export const TEST_ROOT = __dirname;
@@ -17,8 +17,8 @@ export const NODE_TESTS_ROOT = path.join(ORT_ROOT, 'cmake/external/onnx/onnx/bac
export const SQUEEZENET_INPUT0_DATA: number[] = require(path.join(TEST_DATA_ROOT, 'squeezenet.input0.json'));
export const SQUEEZENET_OUTPUT0_DATA: number[] = require(path.join(TEST_DATA_ROOT, 'squeezenet.output0.json'));

export const BACKEND_TEST_SERIES_FILTERS: {[name: string]: string[]} =
jsonc.readSync(path.join(ORT_ROOT, 'onnxruntime/test/testdata/onnx_backend_test_series_filters.jsonc'));
export const BACKEND_TEST_SERIES_FILTERS: { [name: string]: string[] } =
jsonc.readSync(path.join(ORT_ROOT, 'onnxruntime/test/testdata/onnx_backend_test_series_filters.jsonc'));


export const NUMERIC_TYPE_MAP = new Map<Tensor.Type, new (len: number) => Tensor.DataType>([
@@ -54,7 +54,7 @@ export function createTestData(type: Tensor.Type, length: number): Tensor.DataTy
}

// a simple function to create a tensor for test
export function createTestTensor(type: Tensor.Type, lengthOrDims?: number|number[]): Tensor {
export function createTestTensor(type: Tensor.Type, lengthOrDims?: number | number[]): Tensor {
let length = 100;
let dims = [100];
if (typeof lengthOrDims === 'number') {
@@ -70,22 +70,24 @@ export function createTestTensor(type: Tensor.Type, lengthOrDims?: number|number

// call the addon directly to make sure DLL is loaded
export function warmup(): void {
describe('Warmup', async function() {
describe('Warmup', async function () {
// eslint-disable-next-line no-invalid-this
this.timeout(0);
// we have test cases to verify correctness in other places, so do not check here.
try {
const session = await InferenceSession.create(path.join(TEST_DATA_ROOT, 'test_types_INT32.pb'));
await session.run({input: new Tensor(new Float32Array(5), [1, 5])}, {output: null}, {});
await session.run({ input: new Tensor(new Float32Array(5), [1, 5]) }, { output: null }, {});
} catch (e) {
}
});
}

export function assertFloatEqual(
actual: number[]|Float32Array|Float64Array, expected: number[]|Float32Array|Float64Array): void {
const THRESHOLD_ABSOLUTE_ERROR = 1.0e-4;
const THRESHOLD_RELATIVE_ERROR = 1.000001;
actual: number[] | Float32Array | Float64Array, expected: number[] | Float32Array | Float64Array,
atol?: number, rtol?: number): void {

const absolute_tol: number = atol ?? 1.0e-4;
const relative_tol: number = 1 + (rtol ?? 1.0e-6);

assert.strictEqual(actual.length, expected.length);

@@ -114,10 +116,10 @@ export function assertFloatEqual(
// test fail
// endif
//
if (Math.abs(a - b) < THRESHOLD_ABSOLUTE_ERROR) {
if (Math.abs(a - b) < absolute_tol) {
continue; // absolute error check pass
}
if (a !== 0 && b !== 0 && a * b > 0 && a / b < THRESHOLD_RELATIVE_ERROR && b / a < THRESHOLD_RELATIVE_ERROR) {
if (a !== 0 && b !== 0 && a * b > 0 && a / b < relative_tol && b / a < relative_tol) {
continue; // relative error check pass
}

@@ -126,12 +128,15 @@
}
}

export function assertDataEqual(type: Tensor.Type, actual: Tensor.DataType, expected: Tensor.DataType): void {
export function assertDataEqual(
type: Tensor.Type, actual: Tensor.DataType, expected: Tensor.DataType, atol?: number, rtol?: number): void {

switch (type) {
case 'float32':
case 'float64':
assertFloatEqual(
actual as number[] | Float32Array | Float64Array, expected as number[] | Float32Array | Float64Array);
actual as number[] | Float32Array | Float64Array, expected as number[] | Float32Array | Float64Array,
atol, rtol);
break;

case 'uint8':
@@ -153,7 +158,7 @@
}

// This function check whether 2 tensors should be considered as 'match' or not
export function assertTensorEqual(actual: Tensor, expected: Tensor): void {
export function assertTensorEqual(actual: Tensor, expected: Tensor, atol?: number, rtol?: number): void {
assert(typeof actual === 'object');
assert(typeof expected === 'object');

@@ -168,7 +173,7 @@
assert.strictEqual(actualType, expectedType);
assert.deepStrictEqual(actualDims, expectedDims);

assertDataEqual(actualType, actual.data, expected.data);
assertDataEqual(actualType, actual.data, expected.data, atol, rtol);
}

export function loadTensorFromFile(pbFile: string): Tensor {
@@ -243,7 +248,7 @@ export function loadTensorFromFile(pbFile: string): Tensor {
throw new Error(`not supported tensor type: ${tensorProto.dataType}`);
}
const transferredTypedArrayRawDataView =
new Uint8Array(transferredTypedArray.buffer, transferredTypedArray.byteOffset, tensorProto.rawData.byteLength);
new Uint8Array(transferredTypedArray.buffer, transferredTypedArray.byteOffset, tensorProto.rawData.byteLength);
transferredTypedArrayRawDataView.set(tensorProto.rawData);

return new Tensor(type, transferredTypedArray, dims);
@@ -274,3 +279,19 @@ export function shouldSkipModel(model: string, eps: string[]): boolean {

return false;
}

const OVERRIDES: { [key: string]: (number | { [name: string]: number }) } =
jsonc.readSync(path.join(ORT_ROOT, 'onnxruntime/test/testdata/onnx_backend_test_series_overrides.jsonc'));

const ATOL_DEFAULT = OVERRIDES.atol_default as number;
const RTOL_DEFAULT = OVERRIDES.rtol_default as number;

export function atol(model: string): number {
const override = OVERRIDES.atol_overrides as { [name: string]: number };
return override[model] ?? ATOL_DEFAULT;
}

export function rtol(model: string): number {
const override = OVERRIDES.rtol_overrides as { [name: string]: number };
return override[model] ?? RTOL_DEFAULT;
}
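The helpers above read their defaults and per-model entries from `onnx_backend_test_series_overrides.jsonc`. Judging only from the keys this code accesses (`atol_default`, `rtol_default`, `atol_overrides`, `rtol_overrides`), the file's shape is along these lines; the concrete values and model name below are illustrative, not the repository's actual contents:

```jsonc
{
  // Fallback tolerances used when a model has no entry below.
  "atol_default": 1e-4,
  "rtol_default": 1e-6,
  // Per-model overrides, keyed by test/model name.
  "atol_overrides": { "test_some_model": 1e-2 },
  "rtol_overrides": { "test_some_model": 1e-3 }
}
```

Keeping the overrides in a shared JSONC file lets the Node.js tests and the Python backend-test series consume the same tolerance data.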