Better grouped convolution for CPU targets (apache#6137)
* Integrated with v0.8

* Rebase, and undoing accidental removal of auto scheduler NHWC support

* Added ASF license header

* Minor bug fixes

* Added asymmetric padding support
Fixed linting

* Improve linting

* Better linting, disable final linting checks

* Fixed final linting errors (figured out how to run lint tests locally)

* Fixing linter formatting, part 1

* Fixing linter formatting, part 2

* Fixing linter formatting, part 3

* Update conv2d.py

Fixed merge issue

* Rebase, and update responding to some comments

* Fixed AutoScheduler bug for NHWC case

* Removed infer_pad from GSPC

* Minor fix

* Fixed removal of infer_pad to no padding

* Fixed unexpected linting error

Co-authored-by: Perry Gibson <Perry.Gibson@glasgow.ac.uk>
2 people authored and trevor-m committed May 11, 2021
1 parent 16bef49 commit 5d544e4
Showing 6 changed files with 749 additions and 9 deletions.
7 changes: 3 additions & 4 deletions python/tvm/relay/op/strategy/arm_cpu.py

```diff
@@ -207,11 +207,10 @@ def conv2d_strategy_arm_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d with layout NCHW is not optimized for arm cpu.")
             strategy.add_implementation(
-                wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
-                wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
-                name="group_conv2d_nchw.generic",
+                wrap_compute_conv2d(topi.arm_cpu.group_conv2d_nchw, has_groups=True),
+                wrap_topi_schedule(topi.arm_cpu.schedule_group_conv2d_nchw),
+                name="group_conv2d_nchw.arm_cpu",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
```
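The commit message mentions asymmetric padding support. For reference, asymmetric padding changes the usual output-extent arithmetic only in that the pad amounts before and after an axis may differ. A minimal sketch (the helper name is illustrative, not part of this patch):

```python
def conv2d_out_size(in_size, kernel, stride, pad_before, pad_after):
    """Output extent along one spatial axis of a convolution.

    pad_before and pad_after need not be equal (asymmetric padding).
    """
    return (in_size + pad_before + pad_after - kernel) // stride + 1
```

For example, a 224-wide input with a 3-wide kernel, stride 2, and padding (1, 0) yields a 112-wide output.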
8 changes: 3 additions & 5 deletions python/tvm/relay/op/strategy/x86.py

```diff
@@ -205,12 +205,10 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            if not is_auto_scheduler_enabled():
-                logger.warning("group_conv2d is not optimized for x86 with autotvm.")
             strategy.add_implementation(
-                wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
-                wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
-                name="group_conv2d_nchw.generic",
+                wrap_compute_conv2d(topi.x86.group_conv2d_nchw, has_groups=True),
+                wrap_topi_schedule(topi.x86.schedule_group_conv2d_nchw),
+                name="group_conv2d_nchw.x86",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
```
1 change: 1 addition & 0 deletions python/tvm/topi/arm_cpu/__init__.py

```diff
@@ -26,3 +26,4 @@
 from .bitserial_dense import *
 from .injective import *
 from . import cortex_m7
+from .group_conv2d import *
```
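The diffs above route `group_conv2d_nchw` to target-specific TOPI implementations instead of the generic fallback. As an illustration of the operation these schedules compute (a naive sketch, not TVM's actual implementation), here is a NumPy reference for grouped NCHW convolution with OIHW kernels and possibly asymmetric padding:

```python
import numpy as np

def group_conv2d_nchw_ref(data, kernel, stride, padding, groups):
    """Naive reference for grouped 2D convolution.

    data:    (N, C_in, H, W)              -- NCHW layout
    kernel:  (C_out, C_in // groups, KH, KW)  -- OIHW layout
    stride:  (stride_h, stride_w)
    padding: (pad_top, pad_left, pad_bottom, pad_right), may be asymmetric
    """
    n, c_in, h, w = data.shape
    c_out, c_in_g, kh, kw = kernel.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_in_g == c_in // groups
    sh, sw = stride
    pt, pl, pb, pr = padding
    padded = np.pad(data, ((0, 0), (0, 0), (pt, pb), (pl, pr)))
    oh = (h + pt + pb - kh) // sh + 1
    ow = (w + pl + pr - kw) // sw + 1
    out = np.zeros((n, c_out, oh, ow), dtype=data.dtype)
    c_out_g = c_out // groups
    for g in range(groups):
        ic0 = g * c_in_g  # first input channel of this group
        for oc in range(g * c_out_g, (g + 1) * c_out_g):
            for y in range(oh):
                for x in range(ow):
                    # Each output channel reads only its group's input channels.
                    window = padded[:, ic0:ic0 + c_in_g,
                                    y * sh:y * sh + kh,
                                    x * sw:x * sw + kw]
                    out[:, oc, y, x] = np.sum(window * kernel[oc], axis=(1, 2, 3))
    return out
```

TVM's actual schedules tile and vectorize this loop nest; the point here is only the grouped indexing: output channel `oc` in group `g` reads input channels `[g * C_in/groups, (g+1) * C_in/groups)`.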