
[RELAY] Refactor FoldConstant to skip TNonComputationalOps #6720

Merged — 12 commits merged into apache:main on Oct 24, 2020

Conversation

electriclilies
Contributor

This PR refactors FoldConstant to skip any op marked with the TNonComputational attribute, instead of explicitly checking for those ops by name. It also marks all QNN ops as TNonComputational, which allows the constant folder to be applied to graphs containing QNN ops.
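To illustrate the idea behind the refactor, here is a minimal, self-contained sketch (not actual TVM code): a toy constant folder that consults a per-op attribute set rather than hard-coding op names, analogous to how FoldConstant checks the TNonComputational attribute after this PR. All names here (`NON_COMPUTATIONAL`, `Node`, `fold_constant`, the toy `add` op) are hypothetical.

```python
from dataclasses import dataclass, field

# Ops flagged as non-computational; the folder leaves calls to these
# unevaluated (analogous to TVM's TNonComputational op attribute).
NON_COMPUTATIONAL = {"qnn.quantize", "qnn.dequantize", "device_copy"}

@dataclass
class Node:
    op: str                               # operator name, or "const"
    args: list = field(default_factory=list)
    value: int = None                     # payload for "const" nodes

def fold_constant(node: Node) -> Node:
    """Recursively fold calls whose arguments are all constants,
    skipping any op marked non-computational."""
    if node.op == "const":
        return node
    args = [fold_constant(a) for a in node.args]
    if node.op in NON_COMPUTATIONAL:
        return Node(node.op, args)        # skip: leave the call intact
    if node.op == "add" and all(a.op == "const" for a in args):
        return Node("const", value=sum(a.value for a in args))
    return Node(node.op, args)            # not foldable; keep as-is

# add(1, 2) folds to const 3; qnn.quantize(const 3) is left alone.
folded = fold_constant(Node("add", [Node("const", value=1),
                                    Node("const", value=2)]))
kept = fold_constant(Node("qnn.quantize", [Node("const", value=3)]))
```

The point of the attribute-driven check is that new non-computational ops (e.g. the whole QNN family) only need to be registered in one place, rather than each pass growing its own hard-coded list.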

@jroesch @jwfromm please take a look!

@jwfromm
Contributor

jwfromm commented Oct 20, 2020

LGTM, thanks!

@electriclilies electriclilies marked this pull request as draft October 21, 2020 03:32
@electriclilies electriclilies marked this pull request as ready for review October 21, 2020 19:42
@jwfromm
Contributor

jwfromm commented Oct 24, 2020

@junrushao1994 now that this is passing CI, can you take a final look and merge this?

@jroesch jroesch merged commit 372d737 into apache:main Oct 24, 2020
@junrushao
Member

Sorry for being late. It looks good! Thanks!

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Oct 29, 2020
* add TNonComputational to qnn ops and change FoldConstant

* remove comments

* check if op in nonComputational map

* forgot to mark device_copy op as TNonComputational

* hacky fix to fuseops pass

* fix typo

* manually skip device_copy in fold_constant

* Update src/relay/transforms/fold_constant.cc

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 2, 2020
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 4, 2020
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Dec 4, 2020