
Headers are only in $PREFIX/targets folder but aren't in $PREFIX/include #11

Closed
1 task done
kkraus14 opened this issue Apr 15, 2023 · 14 comments · Fixed by conda-forge/cuda-nvcc-feedstock#19

Comments

@kkraus14

Solution to issue cannot be found in the documentation.

  • I checked the documentation.

Issue

In the libcublas-dev package I was expecting the headers to either be added or symlinked into the $PREFIX/include folder, but they're only in the $PREFIX/targets folder. Compilers won't search there by default, as far as I know, so this should probably be corrected.
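
For illustration, the layout in a linux-64 environment looks roughly like this (the target triple differs on other platforms, and cublas_v2.h is just an example header):

```bash
# Headers shipped by libcublas-dev live under the targets tree:
ls "$CONDA_PREFIX/targets/x86_64-linux/include/cublas_v2.h"   # present

# ...but nothing is placed or symlinked into the conventional include dir,
# which is where a plain gcc/g++ invocation looks by default:
ls "$CONDA_PREFIX/include/cublas_v2.h"                         # missing
```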

Installed packages

n/a

Environment info

n/a
@kkraus14 kkraus14 added the bug Something isn't working label Apr 15, 2023
@adibbley
Contributor

Copying response from: conda-forge/staged-recipes#22127 (comment)

We held off on putting symlinks in $PREFIX/include for all packages. IIRC there were concerns about the generic naming of some headers, such as math_functions.h and others. When using nvcc, the targets directory is included by default.

If there is a use case for other compilers to be used here we can add the symlinks to $PREFIX/include.
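
To double-check the nvcc side, --dryrun prints the commands nvcc would run (including the include paths it hands to the host compiler) without building anything; a quick sketch, with example.cu as a placeholder source file:

```bash
# The INCLUDES line in the dryrun output should show the targets include
# directory being added automatically by nvcc:
nvcc --dryrun -c example.cu 2>&1 | grep -i include
```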

@leofang
Member

leofang commented Apr 17, 2023

FWIW, even with today's pre-CUDA 12 conda-forge infrastructure, these headers do not exist in $PREFIX/include; they only live in /usr/local/cuda/include of the CUDA docker images. So this would be a new feature, not a blocker/breaking change/bug.

@leofang
Member

leofang commented Apr 18, 2023

Given conda-forge/cupy-feedstock#199, I was wrong. It seems to be a blocking bug for certain packages that know to look for headers in /usr/local/cuda/include with a host compiler. This use case is now doomed.

@jakirkham
Member

jakirkham commented Apr 18, 2023

Given there is still work needed for cuda-nvcc (conda-forge/cuda-nvcc-feedstock#1), maybe let's revisit the cupy build after that is fixed. It's likely we are just running into a known issue there.

@leofang
Member

leofang commented Apr 18, 2023

I would think the fix for cuda-nvcc is likely independent. We either need a way to tell the host compiler (outside of nvcc) where the headers are, or patch the build system, or make a symlink so that the headers can be found in $PREFIX/<blahblahblah>/include, or all of the above. But I could be very wrong, as the night is late 🙂
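
As a rough sketch of the first and last options (linux-64 targets path assumed, and none of this is an agreed-upon fix):

```bash
# Tell the host compiler where the headers are, e.g. via CPATH, which
# gcc/g++ treat like additional -I directories:
export CPATH="${CONDA_PREFIX}/targets/x86_64-linux/include${CPATH:+:${CPATH}}"

# Or symlink the headers into a directory the host compiler already searches
# (this runs into the generic-header-name concern mentioned above):
ln -s "${CONDA_PREFIX}/targets/x86_64-linux/include/"*.h "${CONDA_PREFIX}/include/"
```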

@jakirkham
Member

FWIW, I am using that issue as sort of a catch-all for the remaining compiler work.

Anyways agree with following up tomorrow. We already got a lot of hard work done! 😀

@carterbox
Member

I have also run into this issue with conda-forge/libmagma-feedstock#4.

@jakirkham
Member

We add these headers to the host compiler's include path (nvcc already knows where to look).

Would be interested to know what is happening when they are not getting picked up.

@kkraus14
Author

kkraus14 commented Jun 7, 2023

> We add these headers to the host compiler's include path (nvcc already knows where to look).
>
> Would be interested to know what is happening when they are not getting picked up.

Is that path added for the host compilers if someone isn't using nvcc at all? I.e. it's 100% valid to have a .cpp file that uses cublas via its host APIs, and as far as I can tell gcc/g++ won't find the headers there.
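
For example, something like the following uses cublas purely through its host API, with no device code that would pull in nvcc (file name, flags, and the linux-64 paths are just a sketch):

```bash
cat > cublas_host_only.cpp <<'EOF'
// Host-only use of cuBLAS: no kernels, so nvcc is not needed to compile this.
#include <cublas_v2.h>
#include <cstdio>

int main() {
    cublasHandle_t handle;
    if (cublasCreate(&handle) != CUBLAS_STATUS_SUCCESS) {
        std::printf("cublasCreate failed\n");
        return 1;
    }
    cublasDestroy(handle);
    std::printf("ok\n");
    return 0;
}
EOF

# Without the explicit -I below, g++ fails to find cublas_v2.h because it
# does not search the targets tree by default:
g++ cublas_host_only.cpp \
    -I"$CONDA_PREFIX/targets/x86_64-linux/include" \
    -L"$CONDA_PREFIX/lib" -lcublas \
    -o cublas_host_only
```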

@jakirkham
Member

Currently there's not an easy way to use the CUDA packages without also having the CUDA compiler installed. This would be a reasonable issue to file on the cuda-nvcc feedstock (since that is where the logic to add these paths lives now)

@kkraus14
Author

kkraus14 commented Jun 7, 2023

Even with the CUDA compiler installed, if someone uses gcc or g++ directly for .c or .cpp files, which is common practice, the headers won't be found.

This isn't something that should be fixed on the cuda-nvcc feedstock, as it either needs to be handled in the compiler feedstocks (which doesn't feel right at all...) or needs to be handled in the package feedstocks in some way.

EDIT: Missed this line: https://github.com/conda-forge/cuda-nvcc-feedstock/blob/deab0342455db4462ef55f487a7b8dd865760e6a/recipe/activate.sh#L26

We may want to consider moving some kind of logic like this into the individual package feedstocks so that it can influence the host compilers even if the cuda-nvcc package isn't installed.
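
Purely as a hypothetical sketch (script name, variable choices, and the linux-64 targets path are all assumptions, not an agreed design):

```bash
# Hypothetical etc/conda/activate.d/libcublas-dev_activate.sh shipped by the
# package feedstock itself, so host compilers can find the headers even when
# cuda-nvcc is not installed:
export CFLAGS="${CFLAGS:-} -isystem ${CONDA_PREFIX}/targets/x86_64-linux/include"
export CXXFLAGS="${CXXFLAGS:-} -isystem ${CONDA_PREFIX}/targets/x86_64-linux/include"
```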

@jakirkham
Member

Only suggestion is to raise a cuda-nvcc issue. Happy to discuss in that issue once open 🙂

@kkraus14
Author

kkraus14 commented Jun 7, 2023

This isn't a cuda-nvcc issue though. That package is already overstepping its boundaries and setting flags for other compilers unrelated to itself.

Any reason we can't use this issue to continue discussing?

@jakirkham
Member

jakirkham commented Jun 7, 2023

Because libcublas-dev doesn't set the flags (nor would it be the place to do so).

Currently the flags are in cuda-nvcc. So that's at least one place that is reasonable to have the issue (since we are discussing dropping them and moving them elsewhere).

Perhaps another place the issue could live is somewhere in the compiler stack. As it was unclear exactly where this issue should go, I thought cuda-nvcc would be a good starting point.
