Use CUDA and cuDNN with MATLAB

Thierry Gallopin
Thierry Gallopin on 11 Apr 2018
Commented: Nithin M on 30 May 2022
Hi,
I have installed CUDA and cuDNN on Ubuntu 14.04. In a terminal, the command 'nvcc --version' returns:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
But when I run the following in MATLAB R2018a:
results = coder.checkGpuInstall('full')
It returns:
Host CUDA Environment : FAILED (The simple NVCC command 'nvcc --version' failed to execute successfully. GPU Code will not be able to be compiled. Ensure that the 'nvcc' program is installed with the CUDA SDK.)
Runtime : FAILED (No Runtime library can be found. Ensure that the libraries are installed with the CUDA SDK.)
cuFFT : FAILED (No cuFFT library can be found. Ensure that the libraries are installed with the CUDA SDK.)
cuSOLVER : FAILED (No cuSOLVER library can be found. Ensure that the libraries are installed with the CUDA SDK.)
cuBLAS : FAILED (No cuBLAS library can be found. Ensure that the libraries are installed with the CUDA SDK.)
------------------------------------------------------------------------
nvcc -c -rdc=true -Xcompiler -fPIC,-ansi,-fexceptions,-fno-omit-frame-pointer,-pthread -Xcudafe "--diag_suppress=unsigned_compare_with_zero --diag_suppress=useless_type_qualifier_on_return_type" -D_GNU_SOURCE -DMATLAB_MEX_FILE -Wno-deprecated-declarations -arch sm_35 -I "/usr/local/MATLAB/R2018a/simulink/include" -I "/usr/local/MATLAB/R2018a/toolbox/shared/simtargets" -I "/tmp/tpe67dcabf_65c7_44ce_b747_7199284cf224/codegen/mex/gpuSimpleTest" -I "/tmp/tpe67dcabf_65c7_44ce_b747_7199284cf224" -I "./interface" -I "/usr/local/MATLAB/R2018a/extern/include" -I "." "gpuSimpleTest_data.cu"
/bin/sh: 1: nvcc: not found
gmake: *** [gpuSimpleTest_data.o] Error 127
------------------------------------------------------------------------
??? Build error: C compiler produced errors. See the Build Log for further details.
Code generation failed: View Error Report
Code Generation : FAILED (Test GPU code generation failed with the following error 'emlc:compilationError'.)
Compatible GPU : FAILED (The compute capability '3.0' of the selected GPU '1' is not supported by GPU Coder. Generated GPU MEX execution will not be available.)
cuDNN Environment : FAILED (A 'NVIDIA_CUDNN' environment variable was not found. Set 'NVIDIA_CUDNN' to point to the root directory of a NVIDIA cuDNN installation.)
Jetson TK1 Environment : FAILED (A 'NVIDIA_CUDA_TK1' environment variable was not found. This environment variable needs to be set for cross-compilation.)
Jetson TX1 Environment : FAILED (A 'NVIDIA_CUDA_TX1' environment variable was not found. This environment variable needs to be set for cross-compilation.)
Jetson TX2 Environment : FAILED (A 'NVIDIA_CUDA_TX2' environment variable was not found. This environment variable needs to be set for cross-compilation.)
results =
struct with fields:
host: 0
tk1: 0
tx1: 0
tx2: 0
gpu: 0
codegen: 0
codeexec: 0
cudnn: 0
Do you know how to make the NVIDIA software work with MATLAB R2018a?

Answers (4)

Joss Knight
Joss Knight on 15 Apr 2018
Check some of these environment variables to make sure they're pointing to the right place.
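One common cause of the 'nvcc: not found' failure on Linux (an assumption about this setup, not confirmed in the thread) is that MATLAB only inherits the environment of the shell that launched it, so nvcc must be on PATH in that shell before MATLAB starts. A minimal sketch, assuming CUDA 8.0 is installed under /usr/local/cuda-8.0 (adjust the version to match the 'nvcc --version' output above):

```shell
# Assumed install root; check yours with: ls /usr/local | grep cuda
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
# GPU Coder also reads this variable when locating cuDNN:
export NVIDIA_CUDNN=/usr/local/cuda-8.0
# Start MATLAB from this same shell so it sees the variables above:
matlab
```

If MATLAB is launched from a desktop icon instead of a shell, these exports need to go somewhere the desktop session reads (or be set inside MATLAB with setenv) for the check to pass.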
  1 comment
Pietro Cicuta
Pietro Cicuta on 10 Apr 2019
Tried but failing! I posted a detailed continuation of the thread below...



Pietro Cicuta
Pietro Cicuta on 10 Apr 2019
Hi, I have a similar problem. CUDA 10.0 and cuDNN 7.5 are installed on Ubuntu 18.04, and both seem to work fine from the terminal. But in MATLAB I cannot get the cuDNN environment to be detected. I have cudnn.h in /usr/include; this does not work. I also have cudnn_v7.h in /usr/include/x86_64-linux-gnu; this does not work either. coder.checkGpuInstall('full') reports that it is unable to find the cuDNN header files.
This has already cost me hours of random searching and tweaking. Has anyone seen and solved this? Maybe I'm doing something trivially wrong.
  4 comments
Pietro Cicuta
Pietro Cicuta on 26 Apr 2019
Come on MathWorks, please fix this! My CUDA and cuDNN work fine from the Linux command line! It seems like a bug in the setenv handling.
Nithin M
Nithin M on 30 May 2022
Hi,
I got the same problem but solved it; posting in case it is helpful to someone.
Your CUDA installation is usually in /usr/local/cuda-<version>.
You need to give this path to setenv, e.g.:
setenv('NVIDIA_CUDNN','/usr/local/cuda-11');
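To find the right directory to pass, one approach is to search for cudnn.h from a shell first. A small sketch (the candidate paths below are just common guesses; adjust for your system):

```shell
#!/bin/sh
# Print candidate roots that contain include/cudnn.h; any directory printed
# is a plausible value for setenv('NVIDIA_CUDNN', ...) in MATLAB.
find_cudnn_root() {
  for root in "$@"; do
    if [ -f "$root/include/cudnn.h" ]; then
      echo "candidate: $root"
    fi
  done
}

find_cudnn_root /usr/local/cuda /usr/local/cuda-11 /usr
```

If nothing is printed, cudnn.h is somewhere else (locate it with `find / -name cudnn.h 2>/dev/null`), and its parent-of-include directory is the root to pass.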



Miguel Medellin
Miguel Medellin on 1 Aug 2019
I had a different but somewhat related problem to the one discussed here.
It would help if you could describe the directory structure of your CUDA/cuDNN install.
For example, my CUDA directory is located in /usr/local/cuda and it has this kind of directory structure:
ls /usr/local/cuda
LICENSE NVIDIA_SLA_cuDNN_Support.txt README bin doc extras include lib64 nvml nvvm share src targets tools version.txt
In my case, I would run setenv('NVIDIA_CUDNN', '/usr/local/cuda') and it would find the lib64 and include folders it assumes are present in that directory.
If you don't have that directory structure, it will probably fail to find it.
The reason it looks for the lib64 folder is that it needs the shared object library files:
ls /usr/local/cuda/lib64
libOpenCL.so libcudnn.so.7 libcusolver.so libnppicc_static.a libnppim.so.9.0.176 libnvToolsExt.so.1
libOpenCL.so.1 libcudnn.so.7.6.0 libcusolver.so.9.0 libnppicom.so libnppim_static.a libnvToolsExt.so.1.0.0
libOpenCL.so.1.0 libcudnn_static.a libcusolver.so.9.0.176 libnppicom.so.9.0 libnppist.so libnvblas.so
libOpenCL.so.1.0.0 libcufft.so libcusolver_static.a libnppicom.so.9.0.176 libnppist.so.9.0 libnvblas.so.9.0
libaccinj64.so libcufft.so.9.0 libcusparse.so libnppicom_static.a libnppist.so.9.0.176 libnvblas.so.9.0.176
libaccinj64.so.9.0 libcufft.so.9.0.176 libcusparse.so.9.0 libnppidei.so libnppist_static.a libnvblas.so.9.0.480
libaccinj64.so.9.0.176 libcufft_static.a libcusparse.so.9.0.176 libnppidei.so.9.0 libnppisu.so libnvgraph.so
libcublas.so libcufftw.so libcusparse_static.a libnppidei.so.9.0.176 libnppisu.so.9.0 libnvgraph.so.9.0
libcublas.so.9.0 libcufftw.so.9.0 libnppc.so libnppidei_static.a libnppisu.so.9.0.176 libnvgraph.so.9.0.176
libcublas.so.9.0.176 libcufftw.so.9.0.176 libnppc.so.9.0 libnppif.so libnppisu_static.a libnvgraph_static.a
libcublas.so.9.0.480 libcufftw_static.a libnppc.so.9.0.176 libnppif.so.9.0 libnppitc.so libnvrtc-builtins.so
libcublas_device.a libcuinj64.so libnppc_static.a libnppif.so.9.0.176 libnppitc.so.9.0 libnvrtc-builtins.so.9.0
libcublas_static.a libcuinj64.so.9.0 libnppial.so libnppif_static.a libnppitc.so.9.0.176 libnvrtc-builtins.so.9.0.176
libcudadevrt.a libcuinj64.so.9.0.176 libnppial.so.9.0 libnppig.so libnppitc_static.a libnvrtc.so
libcudart.so libculibos.a libnppial.so.9.0.176 libnppig.so.9.0 libnpps.so libnvrtc.so.9.0
libcudart.so.9.0 libcurand.so libnppial_static.a libnppig.so.9.0.176 libnpps.so.9.0 libnvrtc.so.9.0.176
libcudart.so.9.0.176 libcurand.so.9.0 libnppicc.so libnppig_static.a libnpps.so.9.0.176 stubs
libcudart_static.a libcurand.so.9.0.176 libnppicc.so.9.0 libnppim.so libnpps_static.a
libcudnn.so libcurand_static.a libnppicc.so.9.0.176 libnppim.so.9.0 libnvToolsExt.so
The reason it looks for the include folder is that it assumes cudnn.h, among other headers, is in there:
ls /usr/local/cuda/include
CL curand_kernel.h host_config.h nvToolsExtSync.h
builtin_types.h curand_lognormal.h host_defines.h nvblas.h
channel_descriptor.h curand_mrg32k3a.h library_types.h nvfunctional
common_functions.h curand_mtgp32.h math_constants.h nvgraph.h
cooperative_groups.h curand_mtgp32_host.h math_functions.h nvml.h
cooperative_groups_helpers.h curand_mtgp32_kernel.h math_functions.hpp nvrtc.h
crt curand_mtgp32dc_p_11213.h math_functions_dbl_ptx3.h sm_20_atomic_functions.h
cuComplex.h curand_normal.h math_functions_dbl_ptx3.hpp sm_20_atomic_functions.hpp
cublas.h curand_normal_static.h mma.h sm_20_intrinsics.h
cublasXt.h curand_philox4x32_x.h npp.h sm_20_intrinsics.hpp
cublas_api.h curand_poisson.h nppcore.h sm_30_intrinsics.h
cublas_v2.h curand_precalc.h nppdefs.h sm_30_intrinsics.hpp
cuda.h curand_uniform.h nppi.h sm_32_atomic_functions.h
cudaEGL.h cusolverDn.h nppi_arithmetic_and_logical_operations.h sm_32_atomic_functions.hpp
cudaGL.h cusolverRf.h nppi_color_conversion.h sm_32_intrinsics.h
cudaProfiler.h cusolverSp.h nppi_compression_functions.h sm_32_intrinsics.hpp
cudaVDPAU.h cusolverSp_LOWLEVEL_PREVIEW.h nppi_computer_vision.h sm_35_atomic_functions.h
cuda_device_runtime_api.h cusolver_common.h nppi_data_exchange_and_initialization.h sm_35_intrinsics.h
cuda_fp16.h cusparse.h nppi_filtering_functions.h sm_60_atomic_functions.h
cuda_fp16.hpp cusparse_v2.h nppi_geometry_transforms.h sm_60_atomic_functions.hpp
cuda_gl_interop.h device_atomic_functions.h nppi_linear_transforms.h sm_61_intrinsics.h
cuda_occupancy.h device_atomic_functions.hpp nppi_morphological_operations.h sm_61_intrinsics.hpp
cuda_profiler_api.h device_double_functions.h nppi_statistics_functions.h sobol_direction_vectors.h
cuda_runtime.h device_double_functions.hpp nppi_support_functions.h surface_functions.h
cuda_runtime_api.h device_functions.h nppi_threshold_and_compare_operations.h surface_functions.hpp
cuda_surface_types.h device_functions.hpp npps.h surface_indirect_functions.h
cuda_texture_types.h device_functions_decls.h npps_arithmetic_and_logical_operations.h surface_indirect_functions.hpp
cuda_vdpau_interop.h device_launch_parameters.h npps_conversion_functions.h surface_types.h
cudalibxt.h device_types.h npps_filtering_functions.h texture_fetch_functions.h
cudnn.h driver_functions.h npps_initialization.h texture_fetch_functions.hpp
cufft.h driver_types.h npps_statistics_functions.h texture_indirect_functions.h
cufftXt.h dynlink_cuda.h npps_support_functions.h texture_indirect_functions.hpp
cufftw.h dynlink_cuda_cuda.h nppversion.h texture_types.h
curand.h dynlink_cuviddec.h nvToolsExt.h thrust
curand_discrete.h dynlink_nvcuvid.h nvToolsExtCuda.h vector_functions.h
curand_discrete2.h fatBinaryCtl.h nvToolsExtCudaRt.h vector_functions.hpp
curand_globals.h fatbinary.h nvToolsExtMeta.h vector_types.h
If your directory does not look like that at all, it might mean you installed CUDA/cuDNN in some unforeseen way, and reinstalling in a more conventional manner may be required.
You say you don't have a lib64 folder; do you have a lib or lib32 folder instead? If you do, I would try to figure out why, but you could also try creating the lib64 folder manually and copying the contents of your lib/lib32 folder into it, just to see if it works. If it doesn't, that suggests something unconventional or incorrect about your installation.
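The layout checks above can be sketched as a small script. This is only an illustration of what coder.checkGpuInstall appears to expect (lib64/ and include/cudnn.h under one root), not MathWorks' actual logic:

```shell
#!/bin/sh
# Report whether a CUDA root has the conventional layout described above.
check_cuda_root() {
  root="$1"
  status=0
  if [ ! -d "$root/lib64" ]; then
    echo "MISSING: $root/lib64"
    status=1
  fi
  if [ ! -f "$root/include/cudnn.h" ]; then
    echo "MISSING: $root/include/cudnn.h"
    status=1
  fi
  if [ "$status" -eq 0 ]; then
    echo "OK: $root looks conventional"
  fi
  return "$status"
}

# Check the default location; '|| true' keeps the shell alive if it fails.
check_cuda_root "${1:-/usr/local/cuda}" || true
```

Running it against the directory you pass to setenv('NVIDIA_CUDNN', ...) shows at a glance which piece is missing.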

nabila elsawy
nabila elsawy on 30 Oct 2019
Edited: nabila elsawy on 30 Oct 2019
Hi,
I have the same problem with the cuDNN header.
Can you help me?
>> coder.checkGpuInstall('full')
Host CUDA Environment : PASSED
Runtime : PASSED
cuFFT : PASSED
cuSOLVER : PASSED
cuBLAS : PASSED
Code Generation : PASSED
Compatible GPU : PASSED
Code Execution : PASSED
cuDNN Environment : FAILED (No cuDNN header could be found in directory "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cuDNN\"\include. Ensure that the cuDNN headers are installed with the specified cuDNN SDK.)
Jetson TK1 Environment : FAILED (Jetson Cross-compilation is not supported on this platform. It is only supported on Linux operating systems.)
Jetson TX1 Environment : FAILED (Jetson Cross-compilation is not supported on this platform. It is only supported on Linux operating systems.)
Jetson TX2 Environment : FAILED (Jetson Cross-compilation is not supported on this platform. It is only supported on Linux operating systems.)
ans =
struct with fields:
host: 1
tk1: 0
tx1: 0
tx2: 0
gpu: 1
codegen: 1
codeexec: 1
cudnn: 0
  1 comment
Miguel Medellin
Miguel Medellin on 30 Oct 2019
Edited: Miguel Medellin on 30 Oct 2019
I would double-check your path, since that is what the error is complaining about.
First verify that the location is in fact correct (you should see a bunch of .h/.hpp files in the 'include' subfolder, and perhaps some other folders).
If the directory exists, then double-check how you are passing the directory to MATLAB. I notice the output says:
FAILED (No cuDNN header could be found in directory "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cuDNN\ " \include.
It might be that the double quotation mark between 'cuDNN\' and '\include' is throwing off how MATLAB searches for the path. Perhaps try using single quotes? (If I'm not mistaken, the 'include' folder is appended to the path you provided, which is why it appears like this.)

