Accidental multi-GPU? #47

Open
HughPH opened this issue Sep 8, 2021 · 1 comment
HughPH commented Sep 8, 2021

I have a cut of this code from a week or two ago.

Funnily enough, I also added the option to run it on another GPU. When I choose cuda:1, though, I get 2 GB allocated on cuda:0, even though that device is not specified anywhere in generate.py. Combined with disabling ECC on the second card (`nvidia-smi -i 1 -e 0`), this is fine in practice, because I can still get over 912 kibipixels (1280x720 or 1488x624), but it would be good to understand what is being allocated, why, and how.

@rlallen-nps

When reading up on torch's dataparallel I saw several mention a similar issue. May want to start there.
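For what it's worth, a commonly reported cause of this symptom is that PyTorch lazily creates a CUDA context on device 0 the first time anything touches CUDA, regardless of which `cuda:N` device the model is later moved to. A minimal sketch of the usual workaround, assuming the script in question (generate.py here) can set the environment before `torch` is imported, is to mask all but the chosen GPU with `CUDA_VISIBLE_DEVICES`; the torch lines are commented out since they require a CUDA machine:

```python
import os

# Mask all but the chosen physical GPU *before* torch is imported.
# The CUDA context (roughly the 2 GB observed above) can then only be
# created on the visible card, which torch addresses as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch                    # must come after the mask takes effect
# model = model.to("cuda:0")      # cuda:0 now means physical GPU 1

print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 1
```

This is only effective if the variable is set before the CUDA runtime initializes (equivalently, one can launch with `CUDA_VISIBLE_DEVICES=1 python generate.py`); setting it after `torch.cuda` has been touched has no effect.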
