
MatMul fusion failed at scalar input #10950

Closed
yz6790 opened this issue Mar 19, 2022 · 2 comments

yz6790 commented Mar 19, 2022

Describe the bug
When MatMul and Mul are fused for an input of rank 0, an error occurs.

Perhaps this is because Mul outputs a rank-1 result, but the input's rank is not raised to 1 during fusion; since MatMul does not support rank-0 inputs, the fused node fails.
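The rank behavior described above can be checked outside ONNX Runtime with a small numpy sketch (my own illustration, not from the original report): broadcasting a rank-0 value against a rank-1 tensor yields rank 1, which matmul accepts, while matmul on the raw rank-0 value is rejected.

```python
import numpy as np

x = np.array(1.0, dtype=np.float32)        # rank-0 (scalar) input
w = np.ones((1, 64), dtype=np.float32)

y = x * np.ones(1, dtype=np.float32)       # Mul broadcasts the scalar up to rank 1
assert y.shape == (1,)

# matmul of the rank-1 intermediate is fine: (1,) @ (1, 64) -> (64,)
assert (y @ w).shape == (64,)

# ... but matmul directly on the rank-0 input raises
try:
    np.matmul(x, w)
except ValueError:
    pass  # numpy rejects rank-0 operands to matmul, as ONNX MatMul would
```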
Urgency
None.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave
  • ONNX Runtime installed from (source or binary): binary
  • ONNX Runtime version: 1.10.0
  • Python version: 3.9.7
  • Visual Studio version (if applicable): n/a
  • GCC/Compiler version (if compiling from source): n/a
  • CUDA/cuDNN version: n/a
  • GPU model and memory: n/a
  • PyTorch version: 1.10.2

To Reproduce

  • Describe steps/code to reproduce the behavior.
import torch

class Model(torch.nn.Module):
    @torch.no_grad()
    def forward(self, i1):
        # Mul with a rank-1 tensor broadcasts the rank-0 input up to rank 1
        y = torch.mul(i1, torch.ones(1, dtype=torch.float32))
        o2 = torch.matmul(y, torch.ones((1, 64), dtype=torch.float32))
        return o2


model = Model()
inputs = torch.randn((), dtype=torch.float32)  # rank-0 (scalar) input
output_names = ["o1"]
torch.onnx.export(model, inputs, "1.onnx", verbose=False,
                  input_names=["i1"], output_names=output_names, opset_version=14)
  • Attach the ONNX model to the issue (where applicable) to expedite investigation.
    MatMul.onnx.zip

Expected behavior
MatMul should receive a rank-1 input after Mul raises the scalar's rank.
Screenshots
Screenshot 2022-03-19 02:26:36
Screenshot 2022-03-18 23:20:26

ytaous (Contributor) commented Mar 31, 2022

Thanks for reporting the issue; let me take a look.

ytaous (Contributor) commented Apr 15, 2022

import numpy as np
import onnxruntime as ort

# Feeding the rank-0 input directly works at inference time
x1 = np.array((1.0)).astype(np.float32)
session = ort.InferenceSession("./1.onnx", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
res = session.run(None, {"i1": x1})
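For reference, `np.array((1.0))` really does produce a rank-0 array — `(1.0)` is a parenthesized float, not a tuple — so the run above exercises exactly the scalar case from the report. A quick check (my own verification, not from the thread):

```python
import numpy as np

x1 = np.array((1.0)).astype(np.float32)
assert x1.shape == ()   # rank 0: no dimensions at all
assert x1.ndim == 0
```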

ytaous closed this as completed Apr 15, 2022