
fatal error: cuda_runtime_api.h: No such file or directory #15

Closed
imr555 opened this issue Feb 26, 2019 · 20 comments

@imr555

imr555 commented Feb 26, 2019

I set this:
export CUDA_HOME="/usr/local/cuda/"
When I run "python setup.py install" I get this error. How can I solve it?
!! WARNING !!

warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))
building 'warprnnt_pytorch.warp_rnnt' extension
gcc -pthread -B /home/imr555/miniconda3/envs/ariyan/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/imr555/Hasan/Project/RRNT/E2E-ASR-master/warp-transducer/include -I/home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include -I/home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include/TH -I/home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include/THC -I/home/imr555/miniconda3/envs/ariyan/include/python3.6m -c src/binding.cpp -o build/temp.linux-x86_64-3.6/src/binding.o -std=c++11 -fPIC -DWARPRNNT_ENABLE_GPU -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=warp_rnnt -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include/THC/THCGeneral.h:12:0,
from /home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4,
from src/binding.cpp:8:
/home/imr555/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/lib/include/ATen/cuda/CUDAStream.h:6:30: fatal error: cuda_runtime_api.h: No such file or directory
#include "cuda_runtime_api.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1

@HawkAaron
Owner

Before running "python setup.py install", you need to:

  1. load gcc 5
  2. load the CUDA library; it should be the same version as the one PyTorch was built with

Environment variables for reference:

export CUDA_HOME=$HOME/tools/cuda-9.0 # change to your path
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"
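As a quick sanity check (this helper is a sketch of mine, not part of the thread), you can verify that the CUDA_HOME you exported actually ships the header the compiler is complaining about:

```shell
# check_cuda_home: hypothetical helper, not from this repo.
# Reports whether a candidate CUDA_HOME contains cuda_runtime_api.h,
# i.e. whether -I$CUDA_HOME/include can satisfy the failing #include.
check_cuda_home() {
    if [ -f "$1/include/cuda_runtime_api.h" ]; then
        echo "ok: $1"
    else
        echo "missing cuda_runtime_api.h under $1/include" >&2
        return 1
    fi
}

check_cuda_home "${CUDA_HOME:-/usr/local/cuda}" || true
```

If this reports "missing", gcc will fail with exactly the error above no matter how the other variables are set.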

@imr555
Author

imr555 commented Feb 27, 2019

It's working.. Thank you so much.

@imr555 imr555 closed this as completed Feb 27, 2019
@DemisEom

I have the same problem... :(

What does "load gcc 5, load the CUDA library" mean?

@HawkAaron
Owner

@shelling203 Compile the extension using gcc 4.9 or higher, and load the CUDA library in your environment before compiling with GPU support.

@DemisEom

Um... I'm sorry, I do not understand.
My gcc version is 5.4.0.
Do I have to run something else before I run "python setup.py install", or do I have to edit the setup code?

@HawkAaron
Owner

export CUDA_HOME=$HOME/tools/cuda-9.0 # change to your path
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"

@shelling203 Please set those environment variables before running python setup.py install.

@DemisEom

Thanks for your answer :)
but I already set the environment variables and still get the same issue...

@HawkAaron
Owner

@shelling203
Could you provide more information? Such as CUDA version, machine, error info, ...

@triple-tam

Setting the environment variables (i.e. vim ~/.bash_profile and source ~/.bash_profile) worked for me!
Note, in case this is helpful for anyone else: don't specify the CUDA version in your path.
e.g. use CUDA_HOME=/usr/local/cuda (or whatever your path is) instead of CUDA_HOME=/usr/local/cuda-10.0
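The reason the unversioned path is safer (my reading, assuming a standard Linux toolkit install): /usr/local/cuda is normally a symlink that the installer points at the currently selected versioned directory, so an unversioned CUDA_HOME keeps working across toolkit upgrades. A small sketch; resolve_cuda_home is a hypothetical helper, not from the thread:

```shell
# resolve_cuda_home: shows which versioned toolkit directory an
# unversioned CUDA_HOME symlink actually resolves to.
resolve_cuda_home() {
    readlink -f "$1"
}

export CUDA_HOME=/usr/local/cuda    # not /usr/local/cuda-10.0
resolve_cuda_home "$CUDA_HOME" || true   # e.g. prints /usr/local/cuda-10.0
```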

@gccyxy

gccyxy commented Jul 13, 2019

Thanks for your answer :)
but I already set the environment variables and still get the same issue...

I have the same question as you; I tried it, but it does not work.

@gccyxy

gccyxy commented Jul 13, 2019

Setting the environment variables (i.e. vim ~/.bash_profile and source ~/.bash_profile) worked for me!
Note, in case this is helpful for anyone else: don't specify the CUDA version in your path.
e.g. use CUDA_HOME=/usr/local/cuda (or whatever your path is) instead of CUDA_HOME=/usr/local/cuda-10.0

I have many cuda.h files. How can I select the right CUDA_HOME?

My cuda.h files are located as follows:
/cm/shared/apps/hwloc/1.11.8/include/hwloc/cuda.h
/cm/shared/apps/tensorflow/1.7.0/lib/python2.7/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/cm/shared/apps/openmpi/cuda/64/3.0.0/include/openmpi/opal/mca/hwloc/hwloc1117/hwloc/include/hwloc/cuda.h
/cm/shared/apps/hpcx/2.0.0/ompi-v3.0.x/include/openmpi/opal/mca/hwloc/hwloc1117/hwloc/include/hwloc/cuda.h
/cm/shared/apps/cuda91/toolkit/9.1.85/targets/x86_64-linux/include/cuda.h
/home/jupyter/anaconda3/pkgs/pytorch-1.0.1-cuda92py36h65efead_0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h
/home/jupyter/anaconda3/pkgs/tensorflow-base-1.12.0-gpu_py36h8e0ae2d_0/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/home/jupyter/anaconda3/pkgs/pytorch-1.0.1-cuda100py36he554f03_0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h
/home/jupyter/anaconda3/pkgs/tensorflow-base-1.12.0-gpu_py36had579c0_0/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/home/jupyter/anaconda3/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/home/ai18/Gry/darknet/src/cuda.h
/home/ai18/Gry/darknet2/darknet/src/cuda.h
/home/ai18/Gry/naofinal/darknet_nao/src/cuda.h
/home/ai18/Gry/naofinal/darknet_mobilenet-master/src/cuda.h
/home/ai18/Gry/darknet_Mobile/src/cuda.h
/home/ai18/Gry/darknet_nao/src/cuda.h
/home/ai18/Gry/Gry/darknet/src/cuda.h
/home/ai18/Gry/darknet_mobilenet-master/src/cuda.h
/home/qr/miniconda3/pkgs/tensorflow-base-1.14.0-mkl_py37h7ce6ba3_0/lib/python3.7/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/home/qr/miniconda3/lib/python3.7/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/home/wbw/.local/lib/python2.7/site-packages/tensorflow/include/tensorflow/core/platform/cuda.h
/home/lwj/.pyenv/versions/3.5.2/lib/python3.5/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h
/home/lwj/.pyenv/versions/3.6.7/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/p

@HawkAaron
Owner

@gccyxy God knows how it goes.

Well, here is my suggestion:

  1. install anaconda: https://www.anaconda.com/
  2. create an env: conda create -n <name you'd like> python=3.6
  3. activate that env: . activate <name you'd like>
  4. install whatever deep learning framework you want: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
  5. go to the extension path, build everything from scratch
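The numbered steps above can be sketched as a shell session. The env name "rnnt" and the pytorch_binding path are placeholders of mine, not from the thread; adjust them to your setup:

```shell
# Fresh, self-consistent environment: conda supplies python, pytorch,
# and a matching cudatoolkit, so the extension compiles against the
# same CUDA version pytorch was built with.
conda create -n rnnt python=3.6
. activate rnnt
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
cd warp-transducer/pytorch_binding   # wherever the extension lives
python setup.py install
```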

@gccyxy

gccyxy commented Jul 13, 2019

@gccyxy God knows how it goes.

Well, here is my suggestion:

  1. install anaconda: https://www.anaconda.com/
  2. create an env: conda create -n <name you'd like> python=3.6
  3. activate that env: . activate <name you'd like>
  4. install whatever deep learning framework you want: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
  5. go to the extension path, build everything from scratch

Thanks for your time!!! I am trying it now.
But my GCC version is 7.2; does that matter?

@HawkAaron
Owner

@gccyxy
It would be OK.

@zhuhaozh

Before running "python setup.py install", you need to:

  1. load gcc 5
  2. load the CUDA library; it should be the same version as the one PyTorch was built with

Environment variables for reference:

export CUDA_HOME=$HOME/tools/cuda-9.0 # change to your path
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"

Solved with this.
GCC 5 is not necessary; gcc 4.8.5 in my environment still works.

@anonymous-oracle

running install
running bdist_egg
running egg_info
writing warprnnt_pytorch.egg-info/PKG-INFO
writing dependency_links to warprnnt_pytorch.egg-info/dependency_links.txt
writing top-level names to warprnnt_pytorch.egg-info/top_level.txt
reading manifest file 'warprnnt_pytorch.egg-info/SOURCES.txt'
writing manifest file 'warprnnt_pytorch.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'warprnnt_pytorch.warp_rnnt' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/suhas/warp-transducer/include -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/include/python3.6m -c src/binding.cpp -o build/temp.linux-x86_64-3.6/src/binding.o -std=c++11 -fPIC -DWARPRNNT_ENABLE_GPU -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=warp_rnnt -D_GLIBCXX_USE_CXX11_ABI=1
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/THC/THCGeneral.h:7:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/THC/THC.h:4,
from src/binding.cpp:8:
/usr/local/lib/python3.6/dist-packages/torch/include/c10/cuda/CUDAStream.h:6:10: fatal error: cuda_runtime_api.h: No such file or directory
#include <cuda_runtime_api.h>
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

I did all the environment variable configuration but couldn't get it to work.

Running Ubuntu 18.04 LTS, CUDA-10.1, torch==1.4.0

@aheba

aheba commented Mar 27, 2020

It works fine for me with C++17.

@SaoYear

SaoYear commented Oct 20, 2021

export CUDA_HOME=$HOME/tools/cuda-9.0 # change to your path
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"

@shelling203 Please set those environment variables before running python setup.py install.

YOU ARE A LEGEND!!!

@Lhx94As

Lhx94As commented Apr 19, 2022

Before running "python setup.py install", you need to:

1. load gcc 5

2. load the CUDA library; it should be the same version as the one PyTorch was built with

Environment variables for reference:

export CUDA_HOME=$HOME/tools/cuda-9.0 # change to your path
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"

Works for me! Many thanks!

@LiamLonergan

Thanks, this worked for me!
