
[onnx] fix onnx where broadcast #10106

Merged: 4 commits into apache:main on Feb 2, 2022

Conversation

@lazycal (Contributor) commented Jan 30, 2022


The previous logic for importing the Where op only considered the shape of the tensor with the largest rank. This is incorrect: for instance, when cond, x, and y have shapes [3,1], [2], and [2], the resulting shape should be [3,2], but the original logic gives [3,1]. Below is a code snippet to reproduce the issue.

from tvm import relay
import numpy as np
import onnx
from onnx import TensorProto, helper


def get_onnx_model(condition, x, y):
    outdata = np.where(condition, x, y)
    dtype = TensorProto.FLOAT
    where_inputs = ["cond", "x", "y"]
    node = helper.make_node("Where", inputs=where_inputs, outputs=["out"])
    node_list = [node]
    graph = helper.make_graph(
        node_list,
        "where_test",
        inputs=[
            helper.make_tensor_value_info(
                "cond", TensorProto.BOOL, list(condition.shape)),
            helper.make_tensor_value_info("x", dtype, list(x.shape)),
            helper.make_tensor_value_info("y", dtype, list(y.shape)),
        ],
        outputs=[helper.make_tensor_value_info(
            "out", dtype, list(outdata.shape))],
    )
    model = helper.make_model(graph, producer_name="where_test")
    return model


def main():
    # cond has rank 2 while x and y have rank 1; all three shapes must
    # broadcast together to produce the expected [3, 2] output.
    condition = np.random.uniform(size=(3, 1)) < 0.5
    x = np.random.uniform(size=(2,)).astype(np.float32)
    y = np.random.uniform(size=(2,)).astype(np.float32)
    model = get_onnx_model(condition, x, y)
    mod, params = relay.frontend.from_onnx(model, freeze_params=True)

    res = relay.build_module.create_executor('graph', mod).evaluate()(
        **{'cond': condition, 'x': x, 'y': y})
    assert np.allclose(res.asnumpy(), np.where(
        condition, x, y), rtol=0, atol=0)


main()
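
For reference, ONNX Where follows NumPy-style multidirectional broadcasting across all three inputs, so the expected output shape can be checked directly in NumPy (a minimal check; np.broadcast_shapes needs NumPy >= 1.20):

import numpy as np

# All three operand shapes take part in broadcasting, not only the
# largest-rank one, which is why the result is [3, 2] rather than [3, 1].
print(np.broadcast_shapes((3, 1), (2,), (2,)))  # (3, 2)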

This PR simply delegates the broadcast logic to relay.where instead of handling it during import.
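
Conceptually the change is small: rather than computing a broadcast shape and expanding the inputs in the frontend, the Where converter can return relay.where directly and let Relay's type inference broadcast the operands. A rough sketch of that kind of converter (illustrative only, not the exact diff; OnnxOpConverter is the frontend's internal base class):

from tvm import relay
from tvm.relay.frontend.onnx import OnnxOpConverter  # internal base class for ONNX op converters


class Where(OnnxOpConverter):
    @classmethod
    def _impl_v9(cls, inputs, attr, params):
        condition, x, y = inputs
        # relay.where already broadcasts condition, x, and y with NumPy
        # semantics, so no manual shape handling is needed during import.
        return relay.where(condition, x, y)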

@lazycal (Contributor, Author) commented Jan 30, 2022

@Laurawly Can you take a look?

@lazycal (Contributor, Author) commented Jan 31, 2022

Not sure why, but the test succeeded on my local machine, and the failing test is an autotvm test that does not seem related. Maybe it's a stability issue with the CI?

@AndrewZhaoLuo (Contributor) commented

@lazycal yeah, looks like a spurious error. You need to jostle CI by pushing an empty commit, e.g. git commit -m 'jostle ci' --allow-empty followed by git push.

@AndrewZhaoLuo (Contributor) commented

Once more, looks like you got unlucky again :(

@AndrewZhaoLuo merged commit 8727c60 into apache:main on Feb 2, 2022
ylc pushed a commit to ylc/tvm that referenced this pull request Feb 16, 2022
* fix onnx where bcast

* jostle ci

* jostle ci

* jostle ci