
Prerequisites for Deep Learning with MATLAB Coder

MathWorks Products

To use MATLAB® Coder™ to generate code for deep learning networks, you must also install:

  • Deep Learning Toolbox™

  • MATLAB Coder Interface for Deep Learning

Generate Code That Does Not Use Third-Party Libraries

You can use MATLAB Coder to generate generic C or C++ code for deep learning networks. Such C or C++ code does not depend on third-party libraries. For more information, see Generate Generic C/C++ Code for Deep Learning Networks.

MATLAB Coder locates and uses a supported installed compiler. For the list of supported compilers, see Supported and Compatible Compilers on the MathWorks® website.

You can use mex -setup to change the default compiler. See Change Default Compiler.

The C++ compiler must support C++11.

On Windows®, to generate generic C or C++ code that does not use third-party libraries, use Microsoft® Visual Studio® or the MinGW® compiler.

Generate Code That Uses Third-Party Libraries

You can use MATLAB Coder to generate C++ code for deep learning networks that you deploy to Intel® or ARM® processors. The generated code takes advantage of deep learning libraries optimized for the target CPU. The hardware and software requirements depend on the target platform.

Note

The paths to the required software libraries must not contain spaces or special characters, such as parentheses. On Windows operating systems, special characters and spaces are allowed only if 8.3 file names are enabled. For more information on 8.3 file names, refer to the Windows documentation.

Hardware and Software Requirements

Hardware Requirements

Intel CPUs: Intel processor with support for Intel Advanced Vector Extensions 2 (Intel AVX2) instructions.

ARM Cortex-A CPUs: ARM Cortex-A processors that support the NEON extension.

Software Libraries

Intel CPUs: Intel Math Kernel Library for Deep Neural Networks (MKL-DNN), v1.4. See https://github.com/oneapi-src/oneDNN.

Do not use a prebuilt library because some required files are missing. Instead, build the library from the source code. See instructions for building the library on GitHub®.

For more information on building the library, see this post in MATLAB Answers™: How do I build the Intel MKL-DNN library for Deep Learning C++ code generation and deployment.

Usage notes:

  • When generating MEX functions that run on your MATLAB host computer, it is recommended that you use the MKL-DNN target instead of generating generic C/C++ code. Generated code that uses the MKL-DNN library is likely to have better performance than generic code.

  • For a few networks, the performance of the MKL-DNN library might be slow on an AVX2 machine. Use an AVX-512 machine to leverage the full performance capability of MKL-DNN with generated code.

ARM Cortex-A CPUs: ARM Compute Library for computer vision and machine learning, versions 19.05 and 20.02.1. See https://developer.arm.com/ip-products/processors/machine-learning/compute-library.

Specify the version number in a coder.ARMNEONConfig configuration object. The default version number is v20.02.1.

Do not use a prebuilt library because it might be incompatible with the compiler on the ARM hardware. Instead, build the library from the source code. Build the library on either your host machine or directly on the target hardware. See instructions for building the library on GitHub.

The folder that contains the library files, such as libarm_compute.so, must be named lib. If the folder is named build, rename it to lib.

For more information on building the library, see this post in MATLAB Answers: How do I build the ARM Compute Library for Deep Learning C++ code generation and deployment.

To deploy generated code that performs inference computations in 8-bit integers on ARM processors, you must use ARM Compute Library version 20.02.1.

Operating System Support

Intel CPUs: Windows, Linux®, and macOS.

ARM Cortex-A CPUs: Windows and Linux only.

Supported Compilers

MATLAB Coder locates and uses a supported installed compiler. For the list of supported compilers, see Supported and Compatible Compilers on the MathWorks website.

You can use mex -setup to change the default compiler. See Change Default Compiler.

The C++ compiler must support C++11.

On Windows, to generate code that uses the Intel MKL-DNN library by using the codegen command, use Microsoft Visual Studio 2015 or later.

Note

On Windows, the MinGW compiler is not supported for generating MEX functions that use the Intel MKL-DNN library.

Other

Open Source Computer Vision Library (OpenCV) v3.1.0 is required for the ARM Cortex-A based deep learning examples.

Note: The examples require separate libraries, such as opencv_core.lib and opencv_video.lib. The OpenCV library that ships with Computer Vision Toolbox™ does not include these libraries, and the OpenCV installer does not install them. Therefore, you must download the OpenCV source and build the libraries yourself.

For more information, refer to the OpenCV documentation.

Environment Variables

MATLAB Coder uses environment variables to locate the libraries required to generate code for deep learning networks.

Windows

INTEL_MKLDNN

Path to the root folder of the Intel MKL-DNN library installation.

For example:

C:\Program Files\mkl-dnn

ARM_COMPUTELIB

Path to the root folder of the ARM Compute Library installation on the ARM target hardware.

For example:

/usr/local/arm_compute

Set ARM_COMPUTELIB on the ARM target hardware.

CMSISNN_PATH

Path to the root folder of the CMSIS-NN library installation on the ARM target hardware.

For example:

/usr/local/cmsis_nn

Set CMSISNN_PATH on the ARM target hardware.

PATH

Path to the Intel MKL-DNN library folder.

For example:

C:\Program Files\mkl-dnn\lib

Linux

LD_LIBRARY_PATH

Path to the Intel MKL-DNN library folder.

For example:

/usr/local/mkl-dnn/lib/

Path to the ARM Compute Library folder on the target hardware.

For example:

/usr/local/arm_compute/lib/

Set LD_LIBRARY_PATH on the ARM target hardware.

INTEL_MKLDNN

Path to the root folder of the Intel MKL-DNN library installation.

For example:

/usr/local/mkl-dnn/

ARM_COMPUTELIB

Path to the root folder of the ARM Compute Library installation on the ARM target hardware.

For example:

/usr/local/arm_compute/

Set ARM_COMPUTELIB on the ARM target hardware.

CMSISNN_PATH

Path to the root folder of the CMSIS-NN library installation on the ARM target hardware.

For example:

/usr/local/cmsis_nn

Set CMSISNN_PATH on the ARM target hardware.

macOS

INTEL_MKLDNN

Path to the root folder of the Intel MKL-DNN library installation.

For example:

/usr/local/mkl-dnn

UNIX® based OS on ARM Cortex-A targets

OPENCV_DIR

Path to the build folder of OpenCV. Install OpenCV for deep learning examples that use OpenCV.

For example:

/usr/local/opencv/build

Note

To generate code for Raspberry Pi® using the MATLAB Support Package for Raspberry Pi Hardware, you must set the environment variables non-interactively. For instructions, see https://www.mathworks.com/matlabcentral/answers/455591-matlab-coder-how-do-i-setup-the-environment-variables-on-arm-targets-to-point-to-the-arm-compute-li.

Note

To build and run examples that use OpenCV, you must install the OpenCV libraries on the target board. For OpenCV installations on Linux, make sure that the path to the library files and the path to the header files are on the system path. By default, the library and header files are installed in a standard location such as /usr/local/lib/ and /usr/local/include/opencv, respectively.

For OpenCV installations on the target board, set the OPENCV_DIR and PATH environment variables as described in the previous table.

Note

You might be able to improve the performance of the code generated for Intel CPUs by setting environment variables that control the binding of OpenMP threads to physical processing units. For example, on the Linux platform, set the KMP_AFFINITY environment variable to scatter. On other platforms that use Intel CPUs, you might be able to set similar environment variables to improve the performance of the generated code.
