
Build error on Windows: CUDA_ARCHITECTURES is empty for target "cmTC_0c70f" #25

Closed
useronym opened this issue Jan 17, 2022 · 9 comments

@useronym
Contributor

Visual Studio Community 2022 (MSVC 143)
CUDA 11.6
CMake 3.22.1

Lots of compile errors; see the attached logs:
CMakeError.log
CMakeOutput.log

Am I using a toolchain that is too new?

@useronym
Contributor Author

useronym commented Jan 17, 2022

I have tried setting the architecture manually according to that, but it has no effect. I suspect the error message is not entirely relevant here, as there are a lot of compile errors (seen in CMakeError.log) when compiling include files:

#$ cicc --microsoft_version=1930 --msvc_target_version=1930 --compiler_bindir "C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.30.30705/bin/HostX86/x86/../../../../../../.." --sdk_dir "C:/Program Files (x86)/Windows Kits/10/" --display_error_number --orig_src_file_name "CMakeCUDACompilerId.cu" --orig_src_path_name "E:\instant-ngp\build\CMakeFiles\3.22.1\CompilerIdCUDA\CMakeCUDACompilerId.cu" --allow_managed  -arch compute_52 -m64 --no-version-ident -ftz=0 -prec_div=1 -prec_sqrt=1 -fmad=1 --include_file_name "CMakeCUDACompilerId.fatbin.c" -tused --gen_module_id_file --module_id_file_name "tmp/CMakeCUDACompilerId.module_id" --gen_c_file_name "tmp/CMakeCUDACompilerId.cudafe1.c" --stub_file_name "tmp/CMakeCUDACompilerId.cudafe1.stub.c" --gen_device_file_name "tmp/CMakeCUDACompilerId.cudafe1.gpu"  "tmp/CMakeCUDACompilerId.cpp1.ii" -o "tmp/CMakeCUDACompilerId.ptx"
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime.h(197): error: invalid redeclaration of type name "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new.h(48): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new.h(53): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new.h(59): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new.h(64): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new.h(166): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new.h(181): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new_debug.h(26): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\vcruntime_new_debug.h(34): error: first parameter of allocation function must be of type "size_t"

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\type_traits(381): error: class template "std::_Is_memfunptr" has already been defined

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.30.30705\include\type_traits(381): error: class template "std::_Is_memfunptr" has already been defined

...

Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed.
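For context, a minimal sketch of what pinning the architecture by hand can look like in a project's own CMakeLists.txt (the project name, source file, and the value 75 are placeholders, not taken from this repository): CMAKE_CUDA_ARCHITECTURES has to be set before CUDA is enabled so that targets, including the cmTC_* check targets CMake generates, end up with a non-empty CUDA_ARCHITECTURES. This only addresses the "CUDA_ARCHITECTURES is empty" symptom; it does not explain the header errors above.

cmake_minimum_required(VERSION 3.18)
# Placeholder value: use the compute capability of your GPU (e.g. 61, 75, 86).
set(CMAKE_CUDA_ARCHITECTURES 75)
# Enabling the CUDA language here triggers CMake's compiler detection.
project(example LANGUAGES CXX CUDA)
add_executable(example_app main.cu)

The same value can also be passed on the command line as -DCMAKE_CUDA_ARCHITECTURES=75 without touching the CMakeLists.txt.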

@somisawa

I solved the same error by setting CMAKE_CUDA_COMPILER explicitly. When running cmake, I used cmake . -B build -DCMAKE_CUDA_COMPILER=/usr/local/cuda-<your cuda version>/bin/nvcc and that solved it in my case.
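A related sketch, in case passing the flag is inconvenient: CMake also reads the CUDACXX environment variable on the first configure to pick the CUDA compiler. The cuda-11.6 path below is only an illustration; substitute the version actually installed.

export CUDACXX=/usr/local/cuda-11.6/bin/nvcc
cmake . -B build

Note this only takes effect on a fresh build directory; once a compiler is cached in CMakeCache.txt, the environment variable is ignored.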

@useronym
Contributor Author

Unfortunately, that has no effect either; in fact, nvcc is already on my path anyway.

@jaymefosa

I had this error on Ubuntu 18.04, and although nvcc was on the path, setting it directly as @somisawa recommended worked.

@somisawa

@jaymefosa
That's nice. If you face other warnings, such as the GPU not being detected, that may also be caused by environment variables; see issue #28.

@useronym
Contributor Author

Managed to get it to build. Instead of using the VS2022 developer PowerShell to run CMake directly, I imported the folder into VS and used the CMake integration. VS managed to do the build without any extra settings. Sounds like there may be something wrong with the VS2022 developer shell ¯\_(ツ)_/¯

@liuchangf

liuchangf commented Jan 24, 2022

I solved the problem with the following command:
cmake . -B build -DCMAKE_CUDA_COMPILER="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/bin/nvcc.exe" -G"Visual Studio 15 2017 Win64"

@limberc

limberc commented Jun 11, 2023

You may have installed the NVIDIA driver and nvcc in a non-standard way. For recent versions, the NVIDIA driver can be installed through third-party software; such installers may not assume a development setup and can put the CUDA toolkit somewhere other than the default path.
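In that situation it can help to check which nvcc the shell actually resolves before configuring; a quick sanity check (Windows commands shown, since the original report is on Windows; use which nvcc on Linux):

where nvcc
nvcc --version

If the path printed here is not the toolkit you expect, pointing CMAKE_CUDA_COMPILER at the right nvcc, as in the commands above, works around it.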
