
[Bug] Dynamic Varsize error when tune detection model with autotvm #10042

Closed
Pony23333 opened this issue Jan 24, 2022 · 3 comments
Labels: needs-triage, type: bug

Comments

Pony23333 commented Jan 24, 2022

Hi all,

I am a beginner with TVM, and I found it surprisingly fast when deployed on mobile devices compared with other frameworks like ncnn.

However, when I tried to tune a detection model (YOLOX) with autotvm on my PC's CPU, I got a type error from converting a virtual axis. The demo I followed is https://tvm.apache.org/docs/reference/api/python/autotvm.html, and the error comes from this line:
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

I suspect the error comes from the resize operator used in the FPN of detection models. But even after I fixed all the input shapes, there is still a dynamic axis size (any_dim: int32).

I wonder if there is a solution or workaround. Thank you.

The TVM version I use is '0.9.dev0'.

The error log is below:

Exception in thread Thread-1:

Traceback (most recent call last):
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/relay_integration.py", line 55, in _lower
    compiler.lower(mod, target=target)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/relay/backend/vm.py", line 155, in lower
    self._lower(mod, target, target_host)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  48: TVMFuncCall
  47: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::vm::VMCompiler::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  46: tvm::relay::vm::VMCompiler::Lower(tvm::IRModule, tvm::runtime::Map<tvm::Integer, tvm::Target, void, void>, tvm::Target)
  45: tvm::relay::vm::VMCompiler::OptimizeModuleImpl(tvm::IRModule)
  44: tvm::transform::Pass::operator()(tvm::IRModule) const
  43: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  42: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  41: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  40: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  39: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  38: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  37: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::tec::LowerTEPass(tvm::runtime::String const&, std::function<void (tvm::BaseFunc)>, tvm::VirtualDevice)::{lambda(tvm::IRModule, tvm::transform::PassContext)#1}>(tvm::relay::tec::LowerTEPass(tvm::runtime::String const&, std::function<void (tvm::BaseFunc)>, tvm::VirtualDevice)::{lambda(tvm::IRModule, tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  36: tvm::relay::tec::LowerTE(tvm::IRModule const&, tvm::runtime::String const&, std::function<void (tvm::BaseFunc)>, tvm::VirtualDevice)
  35: tvm::transform::Pass::operator()(tvm::IRModule) const
  34: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  33: tvm::relay::transform::FunctionPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  32: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Function (tvm::relay::Function, tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::tec::LowerTensorExpr(tvm::runtime::String const&, tvm::relay::tec::TECompiler, std::function<void (tvm::BaseFunc)>, tvm::VirtualDevice)::{lambda(tvm::relay::Function, tvm::IRModule, tvm::transform::PassContext)#1}>(tvm::relay::tec::LowerTensorExpr(tvm::runtime::String const&, tvm::relay::tec::TECompiler, std::function<void (tvm::BaseFunc)>, tvm::VirtualDevice)::{lambda(tvm::relay::Function, tvm::IRModule, tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  31: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
  30: _ZZN3tvm5relay11ExprFunc
  29: tvm::relay::transform::DeviceAwareExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)
  28: tvm::relay::tec::LowerTensorExprMutator::DeviceAwareVisitExpr_(tvm::relay::FunctionNode const*)
  27: _ZN3tvm5relay9
  26: tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)
  25: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
  24: _ZZN3tvm5relay11ExprFunc
  23: tvm::relay::transform::DeviceAwareExprMutator::VisitExpr_(tvm::relay::LetNode const*)
  22: tvm::relay::tec::LowerTensorExprMutator::PreVisitLetBinding_(tvm::relay::Var const&, tvm::RelayExpr const&)
  21: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
  20: _ZZN3tvm5relay11ExprFunc
  19: tvm::relay::transform::DeviceAwareExprMutator::VisitExpr_(tvm::relay::CallNode const*)
  18: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
  17: _ZZN3tvm5relay11ExprFunc
  16: tvm::relay::transform::DeviceAwareExprMutator::VisitExpr_(tvm::relay::CallNode const*)
  15: tvm::relay::tec::LowerTensorExprMutator::DeviceAwareVisitExpr_(tvm::relay::CallNode const*)
  14: tvm::relay::tec::LowerTensorExprMutator::MakeLoweredCall(tvm::relay::Function, tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span, tvm::Target)
  13: tvm::relay::tec::TECompilerImpl::Lower(tvm::relay::tec::CCacheKey const&, tvm::runtime::String)
  12: tvm::relay::tec::TECompilerImpl::LowerInternal(tvm::relay::tec::CCacheKey const&, std::function<tvm::runtime::String (tvm::runtime::String)>)
  11: tvm::relay::tec::PrimFuncFor(tvm::relay::Function const&, tvm::Target const&, std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)>)
  10: tvm::relay::tec::ScheduleBuilder::Create(tvm::relay::Function const&, std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)>)
  9: tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)
  8: _ZZN3tvm5relay11ExprFunc
  7: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::CallNode const*)
  6: tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)
  5: _ZZN3tvm5relay11ExprFunc
  4: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::CallNode const*)
  3: tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)
  2: _ZZN3tvm5relay11ExprFunc
  1: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::CallNode const*)
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/relay/backend/te_compiler.py", line 314, in lower_call
    best_impl, outputs = select_implementation(
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/relay/backend/te_compiler.py", line 189, in select_implementation
    outs = best_plevel_impl.compute(attrs, inputs, out_type)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/relay/op/op.py", line 126, in compute
    return _OpImplementationCompute(self, attrs, inputs, out_type)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 267, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
  3: TVMFuncCall
  2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#4}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  1: tvm::relay::OpImplementation::Compute(tvm::Attrs const&, tvm::runtime::Array<tvm::te::Tensor, void> const&, tvm::Type const&)
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/relay/op/strategy/generic.py", line 243, in _compute_conv2d
    return [topi_compute(*args)]
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/topi/x86/conv2d.py", line 129, in conv2d_nchw
    packed_out = conv2d_NCHWc(data, kernel, strides, padding, dilation, layout, layout, out_dtype)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/topi_integration.py", line 165, in wrapper
    node = topi_compute(cfg, *args)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/topi/x86/conv2d.py", line 194, in conv2d_NCHWc
    cfg.define_split("tile_ic", in_channel, num_outputs=2)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 731, in define_split
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 834, in _add_new_transform
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 834, in <listcomp>
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 688, in axis
    return VirtualAxis(var)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 142, in __init__
    raise RuntimeError("Invalid type of axis: " + str(type(var)))
RuntimeError: Invalid type of axis: <class 'tvm.tir.expr.SizeVar'>
any_dim: int32
Traceback (most recent call last):
  File "autotvm_relay_x86.py", line 233, in <module>
    tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/relay_integration.py", line 87, in extract_from_program
    return extract_from_multiple_program([mod], [params], target, ops=ops)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/relay_integration.py", line 153, in extract_from_multiple_program
    tsk = create(task_name, args, target=target)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/task.py", line 485, in create
    sch, _ = ret.func(*args)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/task.py", line 240, in __call__
    return self._default_func(*args, **kwargs)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/task.py", line 246, in _default_func
    out = self.fcompute(*args, **kwargs)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/topi_integration.py", line 165, in wrapper
    node = topi_compute(cfg, *args)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/topi/x86/conv2d.py", line 194, in conv2d_NCHWc
    cfg.define_split("tile_ic", in_channel, num_outputs=2)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 731, in define_split
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 834, in _add_new_transform
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 834, in <listcomp>
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 688, in axis
    return VirtualAxis(var)
  File "/home/jtma3/anaconda3/envs/nanodet/lib/python3.8/site-packages/tvm-0.9.dev369+g24267492d-py3.8-linux-x86_64.egg/tvm/autotvm/task/space.py", line 142, in __init__
    raise RuntimeError("Invalid type of axis: " + str(type(var)))
RuntimeError: Invalid type of axis: <class 'tvm.tir.expr.SizeVar'>
Pony23333 (Author) commented:

There is a workaround for this problem: use the TorchScript frontend instead of ONNX.

masahi (Member) commented Jun 19, 2022

Let's consolidate the discussion in #11780. As you probably found, if the PyTorch frontend works, this is due to the ONNX frontend generating dynamic inputs when it shouldn't. For the same reason, MaskRCNN can be converted to TVM via the PyTorch frontend, but not via ONNX.

@masahi masahi closed this as completed Jun 19, 2022
@masahi masahi reopened this Jun 24, 2022
@areusch areusch added the needs-triage label Oct 19, 2022
driazati (Member) commented:

@Pony23333 Is this still an issue? The problem discussed in #11780 appears to be fixed on main. Cautiously closing this issue, but feel free to re-open if the problem persists.
