Use _py-xgboost-mutex to differentiate libxgboost CPU vs GPU #23
Ack. I thought we had this right.
So conda does not ship the GPU variant. We do depend on the mutex package to make sure the env is consistent: https://github.com/conda-forge/xgboost-feedstock/blob/master/recipe/meta.yaml#L70 Do you have an example of how this is going wrong?
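The mutex package referenced above follows the usual conda-forge pattern. A minimal sketch of what such a recipe entry can look like (the package name comes from the thread; the version and build strings here are illustrative, not the exact recipe):

```yaml
# Illustrative sketch of a conda mutex package, not the exact recipe:
# the CPU variant carries a track_feature, which the solver penalizes,
# so the GPU variant is preferred whenever it is installable.
- name: _py-xgboost-mutex
  version: "2.0"
  build:
    string: cpu_0            # gpu_0 for the GPU variant
    track_features:          # present only on the CPU variant
      - xgboost-cpu
```

A frontend package would then pin one build of the mutex (e.g. a run dependency on `_py-xgboost-mutex 2.0 cpu_0`), so CPU and GPU builds cannot be mixed in one env.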
OK. So we cannot do this because the R package also depends on libxgboost. I am confused as to how you got an env into this state. You should not install libxgboost directly; instead you need to install one of the frontend packages that have the mutex dependencies.
I see that a user could install
Installing libxgboost directly is not how the package is supposed to be used, but yes, of course people can do what they please. I wonder if we should be separating the CLI from the lib package? Then we could use the mutex as it is supposed to work.
Yes, separating the CLI from
Happy to chat! I am going to leave this issue open for now.
Would it make sense to have a single mutex for all of these packages, as they will all be dependent on
When you say "single mutex," what do you mean?
Sorry, a single package to select GPU or CPU builds. Right now I see
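The "single mutex" idea could be sketched like this (a hypothetical name and layout, not an existing package; every xgboost output would pin the same build of it):

```yaml
# Hypothetical single mutex shared by every xgboost output
# (libxgboost, py-xgboost, r-xgboost): each would depend on the same
# build string, so one choice of cpu/gpu governs the whole env.
- name: _xgboost-mutex       # illustrative name, not an existing package
  version: "1.0"
  build:
    string: cpu_0            # or gpu_0
    track_features:
      - xgboost-cpu          # only on the CPU build
```

The design choice here is that one selector package replaces several per-frontend mutexes, so the solver enforces a single CPU/GPU decision across all the outputs.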
Ah, OK. Does having a CPU Python version installed in the env mean the GPU one cannot be? I don't know the internals well enough to understand the constraints here.
My guess is all packages using
None of the Python code is different; the binary compiled for GPU use can safely be used on a system without a GPU. It will only fail if a user tries to call some GPU function and the CUDA runtime does not exist.
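That behavior can be probed cheaply at runtime. A small sketch (a hypothetical helper, using only the Python standard library to look for a CUDA runtime library; the `tree_method` values are shown for illustration):

```python
import ctypes.util

def cuda_runtime_available() -> bool:
    """Best-effort probe: True if a CUDA runtime library is found on this system."""
    return ctypes.util.find_library("cudart") is not None

# Pick a GPU code path only when the runtime exists; otherwise the
# GPU-capable binary still works fine on the CPU path.
tree_method = "gpu_hist" if cuda_runtime_available() else "hist"
print(tree_method)
```

On a machine without CUDA this selects the CPU path, which matches the point above: the GPU build only fails if GPU code is actually invoked.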
So we can just compile in GPU support for everything? Why did we have the mutex in the first place? I copied it out of the main Anaconda channels.
Do you happen to remember the author of these changes (to the recipe)? Maybe we can ask them?
Looks like it was @jjhelmus and/or @mingwandroid
There are a few things I'd like to add here:
While I had opened this issue, the idea was very close to what @jakirkham brought up here with respect to R. But my request was to include the dependency shown in xgboost-feedstock/recipe/meta.yaml, lines 36 to 49 at 521defb.
And as @jakirkham brought up, I am supportive of the idea to have a single mutex
If the GPU xgboost works just as well without GPUs as the non-GPU xgboost, then there's no good reason for the CPU variant to exist. Shall we discard the idea of mutexes altogether, then?
Personally, I would be fine with either keeping one mutex or dropping it. It sounds to me like @ksangeek's concerns are basically about total install size, which seems like a valid concern to me (note: this does come up in other contexts). Do we have a sense of what the size difference is now?
Have added PR ( #35 ) to try and simplify CPU/GPU selection based on the discussion here. Please take a look and share your thoughts. 🙂
We've since added GPU/CPU packages here. There may very well still be work needed, but please try it out and let us know.
@jakirkham Can we close this out now?
Yes, this is done. If we want to make other changes, we should start a new issue discussing those.
I see that the libxgboost conda package also provides the CLI for xgboost, but I don't see a method for a user to select a CPU or a GPU variant of this.

Should there be meta-packages named libxgboost-cpu and libxgboost-gpu to help users selectively install the variant they desire? And also, maybe libxgboost should have a dependency on the selection package _py-xgboost-mutex so that the CPU and the GPU variants are not wrongly installed?

@jjhelmus Pulling you in here for comments, based on your comments in #14