Installing Prerequisite Products

To use GPU Coder™ for CUDA® C/C++ code generation, you must install the following products:

MathWorks Products

  • MATLAB® (required).

  • MATLAB Coder™ (required).

  • Parallel Computing Toolbox™ (required).

  • Neural Network Toolbox™ (required for deep learning).

  • Image Processing Toolbox™ (recommended).

  • Embedded Coder® (recommended).

  • Simulink® (recommended).


If MATLAB is installed on a path that contains non-7-bit ASCII characters, such as Japanese characters, MATLAB Coder does not work because it cannot locate the code generation library functions.

For instructions on installing MathWorks® products, see the MATLAB installation documentation for your platform. If you have installed MATLAB and want to check which other MathWorks products are installed, enter ver in the MATLAB Command Window.
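For example, running ver lists the installed products, and you can check for the required products programmatically. The product name strings below are assumptions; compare them against the output of ver on your installation:

```matlab
% List all installed MathWorks products and their versions
ver

% Check that the required products are present. The name strings are
% assumptions; verify them against your own ver output.
v = ver;
installed = {v.Name};
required = {'MATLAB Coder', 'Parallel Computing Toolbox'};
for k = 1:numel(required)
    if ~any(strcmp(installed, required{k}))
        warning('%s does not appear to be installed.', required{k});
    end
end
```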

Third-party Products

GPU Code Generation from MATLAB

  • CUDA-enabled NVIDIA® GPU with compute capability 3.2 or higher (Is my GPU supported?).

  • CUDA toolkit and driver. The default installation comes with the nvcc compiler, cuFFT, cuBLAS, and cuSOLVER libraries. GPU Coder has been tested with CUDA toolkit v9.0 (Get the CUDA toolkit).

  • C/C++ compiler:

    Linux: GCC C/C++ compiler 6.3.x

    Windows: Microsoft® Visual Studio® 2013, 2015, or 2017

    The NVIDIA nvcc compiler supports multiple versions of GCC, so you can generate CUDA code with other GCC versions. However, you may encounter compatibility issues when running the generated code from MATLAB, because the C/C++ run-time libraries included with the MATLAB installation are compiled for GCC 6.3.
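Once MATLAB, the CUDA toolkit, and a supported compiler are installed, you can check the setup from the MATLAB Command Window. The sketch below uses gpuDevice (Parallel Computing Toolbox) to inspect the GPU and coder.checkGpuInstall (GPU Coder) to verify the code generation environment; the argument accepted by coder.checkGpuInstall may differ across releases:

```matlab
% Report the selected GPU; ComputeCapability must be 3.2 or higher
gpu = gpuDevice;
fprintf('GPU: %s (compute capability %s)\n', gpu.Name, gpu.ComputeCapability);

% Verify the GPU code generation environment: GPU, CUDA toolkit,
% host compiler, and supporting libraries
coder.checkGpuInstall('full');
```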

Code Generation for Deep Learning Networks

The code generation requirements for deep learning networks depend on the platform you are targeting.

Hardware Requirements

cuDNN: CUDA-enabled GPU with compute capability 3.2 or higher.

TensorRT: CUDA GPU; targeting NVIDIA TensorRT™ libraries with INT8 precision requires a minimum compute capability of 6.0.

MKL-DNN: Intel Xeon® processor with support for Intel Advanced Vector Extensions 2 (Intel AVX2) instructions.

ARM Compute: ARM Cortex-A processor that supports the NEON extension.

Software Libraries

CUDA Deep Neural Network library (cuDNN), v7.0.

NVIDIA TensorRT, a high-performance deep learning inference optimizer and runtime library, v3.0.

Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN), v0.11.

ARM Compute Library for computer vision and machine learning, v18.01.

Operating System Support

cuDNN: Windows and Linux.

TensorRT: Linux only.

MKL-DNN: Linux only.

ARM Compute: Linux only.


Open Source Computer Vision Library (OpenCV), v3.1.0 is required for deep learning examples.

Note: The examples require separate libraries such as opencv_core.lib and opencv_video.lib. By default, the binary distribution of OpenCV 3.1.0 does not contain these separate libraries, so you must build OpenCV from source with the compiler you are using. Refer to the OpenCV documentation for more information.
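When generating code for a deep learning network, the target library is selected on the code generation configuration object. A minimal sketch targeting cuDNN is shown below; myNetworkFun is a hypothetical entry-point function name, and the DeepLearningConfig property and its option names are release-dependent:

```matlab
% Configure MEX generation that targets the cuDNN library
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% myNetworkFun is a hypothetical entry-point function that loads a
% network with coder.loadDeepLearningNetwork and calls predict on it
codegen -config cfg myNetworkFun -args {ones(224,224,3,'single')} -report
```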

Code Generation for Embedded GPU Boards - NVIDIA Tegra Based Jetson TX2, TX1, and TK1

  • CUDA toolkit for ARM and Linaro GCC 4.9 toolchain for the TX2. Use the gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu release tarball.

  • CUDA toolkit for ARM and Linaro GCC 4.9 toolchain for the TX1.

  • CUDA toolkit 6.5 for ARM and Linaro GCC 4.8 toolchain for the TK1. Use the gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux release tarball.

To set up the Linaro tools, see the instructions on Cross-Compilation on Linux.


Embedded GPU targeting is supported only from the Linux platform.
