Quantization Workflow Prerequisites

The products required to quantize and deploy deep learning networks depend on the execution environment: FPGA, GPU, CPU, or MATLAB. The development host requirements for each execution environment are listed below.
Setup Toolkit Environment

  • FPGA: hdlsetuptoolpath (HDL Coder). See the setup sketch after this list.

  • GPU: Setting Up the Prerequisite Products (GPU Coder)

  • CPU: Prerequisites for Deep Learning with MATLAB Coder (MATLAB Coder). Only Raspberry Pi™ with ARM® v7 architecture is supported, and ARM Compute Library version 20.02.1 is required for deep learning quantized inference.

  • MATLAB: No additional toolkit setup is required beyond the compiler requirement below.

For every execution environment, the quantization workflow does not support the MinGW C/C++ compiler. Use Microsoft Visual C++ 2017 or Microsoft Visual C++ 2015. For a list of supported compilers, see Supported and Compatible Compilers.
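
The following is a minimal FPGA setup sketch. It assumes a Xilinx Vivado installation; the tool path shown is an example and must be changed to match your own installation.

    % Minimal sketch of the FPGA synthesis toolchain setup (assumes Xilinx Vivado).
    % The tool path below is an example; replace it with your installation path.
    hdlsetuptoolpath('ToolName','Xilinx Vivado', ...
        'ToolPath','C:\Xilinx\Vivado\2019.2\bin\vivado.bat');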

Required Products

  • FPGA: Deep Learning Toolbox™, Deep Learning HDL Toolbox™

  • GPU: Deep Learning Toolbox

  • CPU: Deep Learning Toolbox, MATLAB Coder, Embedded Coder

  • MATLAB: Deep Learning Toolbox
Required Support Packages

  • FPGA: Deep Learning Toolbox Model Quantization Library, Deep Learning HDL Toolbox Support Package for Xilinx® FPGA and SoC Devices, Deep Learning HDL Toolbox Support Package for Intel® FPGA and SoC Devices

  • GPU: Deep Learning Toolbox Model Quantization Library

  • CPU: Deep Learning Toolbox Model Quantization Library

  • MATLAB: Deep Learning Toolbox Model Quantization Library
Required Add-Ons

  • FPGA: MATLAB® Coder™ Interface for Deep Learning Libraries

  • GPU: GPU Coder™ Interface for Deep Learning Libraries; CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher

  • CPU: MATLAB Coder Interface for Deep Learning Libraries

  • MATLAB: MATLAB Coder Interface for Deep Learning Libraries
Supported Networks and Layers

  • FPGA: Supported Networks, Layers, Boards, and Tools (Deep Learning HDL Toolbox)

  • GPU: Supported Networks, Layers, and Classes (GPU Coder)

  • CPU: Networks and Layers Supported for Code Generation (MATLAB Coder)

  • MATLAB: Networks and Layers Supported for Code Generation (MATLAB Coder)

Note

When MATLAB is the execution environment, only the layers for the Intel MKL-DNN deep learning library are supported.

Deployment

FPGA: Deep Learning HDL Toolbox

GPU: GPU Coder. For CUDA code generation, the software generates code for a convolutional deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. The quantization is performed by providing the calibration result file produced by the calibrate function to the codegen (MATLAB Coder) command. Code generation does not support quantized deep neural networks produced by the quantize function.
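
The following is a minimal sketch of this GPU workflow, not a complete example. The network net, calibration datastore calData, MAT-file name, input size, and entry-point function predict_int8 (which would load the network with coder.loadDeepLearningNetwork) are all assumed placeholders.

    % Sketch only: "net", "calData", and the entry-point function
    % "predict_int8.m" are placeholders for your own network, calibration
    % data, and entry-point function.
    quantObj = dlquantizer(net,'ExecutionEnvironment','GPU');
    calibrate(quantObj,calData);              % collect calibration statistics
    save('quantObj.mat','quantObj');          % calibration result file

    cfg = coder.gpuConfig('mex');
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
    cfg.DeepLearningConfig.CalibrationResultFile = 'quantObj.mat';
    cfg.DeepLearningConfig.DataType = 'int8';
    codegen -config cfg predict_int8 -args {ones(224,224,3,'single')}  % example input size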

CPU: MATLAB Coder. For C/C++ code generation, the software generates code for a convolutional deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. The quantization is performed by providing the calibration result file produced by the calibrate function to the codegen (MATLAB Coder) command. Code generation does not support quantized deep neural networks produced by the quantize function.
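
The following is a minimal sketch of this CPU workflow for a Raspberry Pi target, under the same kind of assumptions: net, calData, and predict_int8 are placeholders, and the ARM Compute settings mirror the version and architecture requirements listed earlier on this page.

    % Sketch only: "net", "calData", and "predict_int8.m" are placeholders.
    quantObj = dlquantizer(net,'ExecutionEnvironment','CPU');
    calibrate(quantObj,calData);              % collect calibration statistics
    save('quantObj.mat','quantObj');          % calibration result file

    cfg = coder.config('lib','ecoder',true);
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('arm-compute');
    cfg.DeepLearningConfig.ArmComputeVersion = '20.02.1';
    cfg.DeepLearningConfig.ArmArchitecture = 'armv7';
    cfg.DeepLearningConfig.CalibrationResultFile = 'quantObj.mat';
    cfg.DeepLearningConfig.DataType = 'int8';
    cfg.Hardware = coder.hardware('Raspberry Pi');
    codegen -config cfg predict_int8 -args {ones(224,224,3,'single')}  % example input size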

Note

Before validation, you must create a raspi object to establish a connection to the hardware.
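
For example (the host name, user name, and password below are placeholders for your board's own credentials):

    % Placeholders: use your board's host name (or IP address), user name,
    % and password.
    r = raspi('raspberrypi-hostname','pi','raspberry');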

MATLAB: None