
Building Flash Attention from source: Fatal error: cusolverDn.h: No such file or directory #280


mcale6 commented Jun 19, 2023

I think this is related to https://discuss.pytorch.org/t/not-able-to-include-cusolverdn-h/169122 and microsoft/DeepSpeed#2684.

I'm trying to install flash-attention following this: https://github.com/Rappsilber-Laboratory/AlphaLink2
I also tried to install flash-attention from this repo and always get the same error. Using pip, I get the subprocess error mentioned in #279.

I made sure that the NVIDIA and PyTorch versions match. I tested on an HPC cluster, on a local computer, and in the cloud with a GPU, and I get the same error everywhere. I tried to export the path as suggested in the link mentioned above, but it does not work.

```
../condaenvs/alphalink/lib/python3.10/site-packages/torch/include/ATen/cuda/CUDAContext.h:10:10: fatal error: cusolverDn.h: No such file or directory
   10 | #include <cusolverDn.h>
      |          ^~~~~~~~~~~~~~
compilation terminated.
error: command '/apps/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/gcc-9.3.0-snvoz3owsjqmblhxyznvhfd75e4uvi6h/bin/gcc' failed with exit code 1
```
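For reference, a minimal diagnostic sketch (assuming a standard toolkit layout with headers under `<CUDA_HOME>/include`) that checks whether PyTorch's build helpers can actually locate the missing header:

```python
import os
import torch.utils.cpp_extension as cpp_ext

# PyTorch's build helpers resolve the CUDA toolkit root into CUDA_HOME.
# If this is None, or points at a runtime-only install without headers,
# extensions fail to compile with errors like the one above.
print("CUDA_HOME:", cpp_ext.CUDA_HOME)

if cpp_ext.CUDA_HOME is not None:
    # Assumes the standard toolkit layout: headers under <root>/include.
    header = os.path.join(cpp_ext.CUDA_HOME, "include", "cusolverDn.h")
    print("cusolverDn.h present:", os.path.exists(header))
```

If CUDA_HOME is unset or the header is missing, pointing CUDA_HOME (and CPATH) at a full CUDA toolkit install before rerunning pip is the remedy suggested in the linked threads.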
tridao commented Jun 19, 2023

Yeah, idk what's wrong, I've never run into such an error. We recommend the PyTorch container from NVIDIA, which has all the required tools to install FlashAttention.
Or, if you're on torch 2.0+, FlashAttention is available as part of torch.nn.functional.scaled_dot_product_attention.
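For example, a minimal sketch of that fallback (tensor shapes are illustrative; the sdp_kernel context manager is the torch 2.0-era way to restrict dispatch to the FlashAttention kernel):

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim); fp16/bf16 tensors on CUDA are
# required for the FlashAttention backend to be eligible.
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Optional: force the FlashAttention backend instead of the math or
# memory-efficient fallbacks (this API moved in later releases).
with torch.backends.cuda.sdp_kernel(
    enable_flash=True, enable_math=False, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)
```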
