[Torch, Quantization] Necessary workaround to prepare for 1.6 update #6602

Merged
4 commits merged into apache:main on Oct 16, 2020

Conversation

@masahi (Member) commented on Sep 30, 2020

A part of #6594

This is the workaround for the state_dict bug introduced in PyTorch 1.6. With this fix, I verified locally that I can run our quantization tests on 1.6. We can now support quantized models from v1.4 to v1.6.
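For context (not part of the PR itself): the version-sensitive part is how a quantized model's parameters are serialized once it is converted. A quick, illustrative way to see this on a given install is sketched below; the toy model is a placeholder, and the exact key layout printed depends on the PyTorch version, which is what this workaround accounts for.

```python
# Illustrative sketch only (not code from this PR): inspect how a converted,
# quantized model serializes its parameters on the installed PyTorch version.
import torch

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3)).eval()
model = torch.quantization.QuantWrapper(model)
model.qconfig = torch.quantization.default_qconfig

torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 16, 16))            # calibrate with dummy data
torch.quantization.convert(model, inplace=True)

# The layout of these entries for quantized modules changed in PyTorch 1.6,
# which is why the frontend needs a version-dependent code path.
for name in model.state_dict():
    print(name)
```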

Please review @siju-samuel @t-vi @anijain2305.

@tqchen changed the base branch from master to main on October 11, 2020 at 18:18
@masahi (Member, Author) commented on Oct 15, 2020

@siju-samuel @anijain2305 can you merge this? It is a prerequisite for upgrading our CI to the latest PyTorch version.

@siju-samuel (Member) left a comment


LGTM, a few nits

3 review comments on python/tvm/relay/frontend/qnn_torch.py (outdated, resolved)
@masahi (Member, Author) commented on Oct 16, 2020

Thanks @siju-samuel.
I added pytorch_utils.py and moved the version check function there.
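A minimal sketch of what such a helper might look like, assuming a comparison against torch.__version__; the function name and the version threshold below are illustrative assumptions, not necessarily the exact code added to pytorch_utils.py:

```python
# Hypothetical sketch of a PyTorch version-check helper; the name and the
# threshold are assumptions, not taken verbatim from pytorch_utils.py.
from packaging import version

import torch


def is_version_greater_than(ver):
    """Return True if the installed torch is newer than `ver`.

    Drops any local build suffix such as "+cu101" before comparing.
    """
    return version.parse(torch.__version__.split("+")[0]) > version.parse(ver)


# Example use: enable the PyTorch 1.6 state_dict handling only when needed.
if is_version_greater_than("1.5.1"):
    pass  # take the 1.6-compatible path
```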

@siju-samuel (Member) left a comment


LGTM

@masahi merged commit cf49e8b into apache:main on Oct 16, 2020
@masahi (Member, Author) commented on Oct 16, 2020

Thanks @siju-samuel

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request on Oct 29, 2020

…pache#6602)

* add support for 1.6 quantized models
* fix lint
* move version check function to a common utils
* fix lint

Co-authored-by: masa <masa@pop-os.localdomain>
trevor-m pushed the same commit to trevor-m/tvm again on Dec 2 and Dec 4, 2020, and to neo-ai/tvm on Dec 4, 2020, each time referencing this pull request.
2 participants