
🐞 Add device flag #601

Merged: 9 commits into main from ashwin/torch_inference_gpu on Nov 7, 2022

Conversation

ashwinvaidya17 (Collaborator)

Description

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • Refactor (non-breaking change which refactors the code base)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

@samet-akcay (Contributor) left a comment:

Only a single comment...

@github-actions bot added the Tests label on Sep 30, 2022
@samet-akcay (Contributor):

@ashwinvaidya17, #600 points out that the device setting had no impact on inference. Would you be able to check the inference speed with and without device set to gpu?
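A minimal sketch of how such a comparison could be measured, assuming an inferencer object exposing a predict() method; the helper name, warm-up loop, and the way the inferencers and the sample image are created are illustrative, not the repository's benchmarking code:

```python
import time

def measure_fps(inferencer, image, warmup: int = 10, runs: int = 100) -> float:
    """Rough throughput estimate; not the project's benchmarking script."""
    for _ in range(warmup):              # warm up caches / CUDA kernels
        inferencer.predict(image)
    start = time.perf_counter()
    for _ in range(runs):
        inferencer.predict(image)
    return runs / (time.perf_counter() - start)

# Hypothetical usage: compare a CPU-backed and a GPU-backed inferencer.
# fps_cpu = measure_fps(cpu_inferencer, image)
# fps_gpu = measure_fps(gpu_inferencer, image)
```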

@@ -84,7 +104,7 @@ def load_model(self, path: Union[str, Path]) -> AnomalyModule:
model = get_model(self.config)
model.load_state_dict(torch.load(path)["state_dict"])
@samet-akcay (Contributor):

Don't we need to pass the map_location here? It might fail otherwise if we load a GPU model on CPU.
https://pytorch.org/tutorials/recipes/recipes/save_load_across_devices.html
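A minimal sketch of what passing map_location could look like in the snippet above, assuming the inferencer tracks its target device (the device selection below is an assumption, not the PR's actual code):

```python
import torch

# Sketch only: remap checkpoint storages onto the inferencer's device so a
# checkpoint saved on GPU can still be loaded on a CPU-only machine.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # assumed; could be self.device
model = get_model(self.config)                      # as in the surrounding diff
checkpoint = torch.load(path, map_location=device)  # avoids failure on CPU-only hosts
model.load_state_dict(checkpoint["state_dict"])
model.to(device).eval()
```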

@ashwinvaidya17 (Collaborator, Author):

Good point

@ashwinvaidya17 (Collaborator, Author):

I am now getting the same FPS from the torch_inference script (without visualization) and the benchmarking script. I guess this is ready to be merged.

@samet-akcay merged commit 74595c9 into main on Nov 7, 2022
@samet-akcay deleted the ashwin/torch_inference_gpu branch on November 7, 2022 at 15:16
@laogonggong847 mentioned this pull request on May 6, 2023
Successfully merging this pull request may close these issues: ONNX inference and TENSORRT optimisation