[Conv2DTransposed] Fix wrong shape check and add new TOPI module to support groups #9465
Conversation
Thank you so much for the quick fix! Would you like to include your minimal reproducible example as a unit test case, so that such regressions can always be caught?
Please add a minimum test and fix the lint.
@Hzfengsy @junrushao1994 I've linted the code and added the test cases. However, the current TOPI does not support groups in conv2d_transpose, so I temporarily mark the test as
Thank you all for the effort! @Lyken17 let's go the last mile: fix the merge conflict and get the CI green :-)
@junrushao1994 resolved, please take a look.
Looks good! Please fix the lint btw
Lint fixed. Let's wait for the green CI!
Thank you SO much @Lyken17 @alicja-SiMa-ai @vinx13 @AndrewZhaoLuo @Hzfengsy and many others!
…upport groups (apache#9465) * f wrong type check in conv2d_transpose * add test case for conv2d transpose * add groups support for conv2d_transpose * add naive implementation and schedule for conv2d with groups * enable tests for cpu and arm_cpu, raise error for cuda platform * revert the cuda and generic strategy * revert back the x86 strategy * revert back the arm_cpu strategy * revert back the arm_cpu strategy * revert back the arm_cpu strategy * fix EOF of x86 * clang lint updated c++ code * update topi implementation * Revert test * Revert test * add generic/x86/arm specialization for conv2d_transpose with groups > 1 * remove commentted codes * fix lint * fix lint * fix c++ lint * fix lint * fix python lint * remove comments and reformat * lint file * lint code * fix lint * update logging information in convolution.h Co-authored-by: Alicja Kwasniewska <alicja.kwasniewska@sima.ai>
This PR includes two bug fixes. The first is a wrong shape check in `src/relay/op/nn/convolution.h`; the second enables `groups > 1` in the current Relay generic strategy as well as the platform specializations.

Shape check
The default layout in TVM is `N x C x iH x iW` for the input and `I x O x kH x kW` for the weight of a transposed convolution. Therefore the original channel check is wrong: it should compare against `wshape[0]` rather than `wshape[1]`. Likewise, the original `groups` check is wrong: the proper comparison dimension is `wshape[1]` rather than `wshape[0]`.
Besides, the name used in the debug output is not correct either: all logging messages say `conv2d` rather than `conv2d_transpose`, which is confusing.

Example to trigger the error in the current implementation
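The original snippet was not preserved on this page. As a rough illustration only (this is my own pure-Python sketch with hypothetical names, not TVM's actual C++ check in `convolution.h`), the corrected checks under the layout assumptions above might look like:

```python
# Illustrative sketch -- assumes data layout N x C x iH x iW and weight
# layout I x O x kH x kW, where I is the input-channel count and
# O is out_channels // groups.

def check_conv2d_transpose_shapes(dshape, wshape, channels, groups):
    """Return True if the shapes are consistent, else raise ValueError."""
    _, in_c, _, _ = dshape
    # Channel check: compare input channels against wshape[0], not wshape[1].
    if in_c != wshape[0]:
        raise ValueError("input channels must match wshape[0]")
    # Groups check: the per-group output channels live in wshape[1].
    if channels != wshape[1] * groups:
        raise ValueError("channels must equal wshape[1] * groups")
    if in_c % groups != 0:
        raise ValueError("input channels must be divisible by groups")
    return True
```

For example, a grouped weight of shape `(4, 2, 3, 3)` with 4 input channels, `channels=4` and `groups=2` passes these checks, whereas comparing the swapped dimensions (the old behavior described above) would wrongly reject it.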
Enable groups > 1 for Conv2dTranspose
This bug was originally raised in #8182 and #8799. Since both issues are about Conv2dTranspose, I merged them into one PR for better testing.
In the current `tvm.relay.op.strategy.generic.py`, the `groups` of Conv2dTranspose is forced to be one. This PR enables `groups > 1` for this op on multiple platforms.
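To make the semantics of `groups > 1` concrete, here is a rough NumPy reference of my own (not TVM's TOPI code): the input channels are split into `groups` slices, a naive transposed convolution runs per slice with the corresponding slice of the IOHW weight, and the group outputs are concatenated along the channel axis.

```python
import numpy as np

def conv2d_transpose_nchw(x, w, stride=1):
    """Naive transposed conv for one batch element.
    x: (C_in, H, W); w: (C_in, C_out, kh, kw) in IOHW layout."""
    c_in, h, wd = x.shape
    assert w.shape[0] == c_in
    kh, kw = w.shape[2], w.shape[3]
    oh, ow = (h - 1) * stride + kh, (wd - 1) * stride + kw
    out = np.zeros((w.shape[1], oh, ow), dtype=x.dtype)
    for c in range(c_in):
        for i in range(h):
            for j in range(wd):
                # Each input pixel scatters a scaled copy of the kernel.
                out[:, i*stride:i*stride+kh, j*stride:j*stride+kw] += x[c, i, j] * w[c]
    return out

def grouped_conv2d_transpose(x, w, groups=1, stride=1):
    """Split channels into groups, transpose-convolve each, concatenate."""
    c_in = x.shape[0]
    assert c_in % groups == 0 and w.shape[0] == c_in
    step = c_in // groups
    parts = [conv2d_transpose_nchw(x[g*step:(g+1)*step],
                                   w[g*step:(g+1)*step], stride)
             for g in range(groups)]
    # Total output channels = wshape[1] * groups, matching the fixed check.
    return np.concatenate(parts, axis=0)
```

Note how the total output channel count is `wshape[1] * groups`, which is exactly why the shape check above must use `wshape[1]` for the groups comparison.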