System Integration of Deep Learning Processor IP Core

Generate the deep learning (DL) processor IP core by using HDL Coder™ and Deep Learning HDL Toolbox™. Integrate the generated DL processor IP core into your system design manually or by using the HDL Coder IP core generation workflow.

You can integrate the deep learning processor IP core into your system by:

  • Generating and integrating DL Processor IP Core—Generate a generic deep learning processor IP core by using Deep Learning HDL Toolbox. The generated deep learning processor IP core is a generic HDL Coder IP core with standard AXI4 interfaces. You can integrate the generated generic DL IP core into your Vivado® or Quartus® design.

    Accelerate the integration of the generated DL processor IP core into your system design by:

    • Reading the AXI4 register maps in the generated IP core report. The AXI4 registers allow MATLAB® or other AXI4 Master devices to control and program the DL processor IP core.

    • Using the compiler-generated external memory buffer allocation.

    • Formatting the input and output external memory data.

    Manually integrate generic DL processor IP core
  • Reference design based DL Processor IP core integration—Generate a generic deep learning processor IP core by using Deep Learning HDL Toolbox. Integrate the generated deep learning processor IP core into your custom reference design by using HDL Coder. See Custom Reference Design (HDL Coder). You can design the pre-processing and post-processing DUT logic in Simulink® or MATLAB, and use the HDL Coder IP core generation workflow to integrate the pre-processing and post-processing logic with the deep learning processor.

    Reference design based deep learning processor IP core integration

    Use MATLAB to run your custom deep learning network on the deep learning processor IP core and retrieve the deep learning network prediction results from your integrated system design.
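The compile-deploy-predict workflow described above can be sketched in MATLAB. This is a minimal sketch, not a complete integration: the pretrained network, target vendor, interface, bitstream name, and input image are illustrative assumptions that you replace with values matching your board and design.

```matlab
% Minimal sketch of running a network on the DL processor IP core.
% Network, target, bitstream, and input below are illustrative assumptions.
net = resnet18;                                   % pretrained network to deploy
hTarget = dlhdl.Target('Xilinx', 'Interface', 'JTAG');
hW = dlhdl.Workflow('Network', net, ...
    'Bitstream', 'zcu102_single', ...             % prebuilt DL processor bitstream
    'Target', hTarget);
hW.compile;                                       % generate instructions and memory map
hW.deploy;                                        % program the FPGA and load weights
img = imread('myImage.jpg');                      % hypothetical input image
[prediction, speed] = hW.predict(img, 'Profile', 'on');
```

The `compile` step also produces the external memory buffer allocation and register map information referenced in the topics below, which you can use when integrating the IP core without MATLAB in the loop.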

Functions

dlhdl.Workflow - Configure deployment workflow for deep learning neural network
compile - Compile workflow object
deploy - Deploy the specified neural network to the target FPGA board
predict - Run inference on deployed network and profile speed of neural network deployed on specified target device
hdlcoder.ReferenceDesign - Reference design registration object that describes SoC reference design
registerDeepLearningMemoryAddressSpace - Add memory address space to reference design
registerDeepLearningTargetInterface - Add and register a target interface
validateReferenceDesignForDeepLearning - Check property values in reference design object
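For the reference-design-based workflow, the `hdlcoder.ReferenceDesign` methods listed above are typically called from a reference design definition file. The sketch below shows the shape of such a file; the design name, board name, and address values are illustrative assumptions, not values from a shipped reference design.

```matlab
% Sketch of a reference design definition (plugin_rd.m) that registers
% the deep learning memory address space and target interface.
% Names and addresses below are illustrative assumptions.
function hRD = plugin_rd()
hRD = hdlcoder.ReferenceDesign('SynthesisTool', 'Xilinx Vivado');
hRD.ReferenceDesignName = 'My DL Reference Design';
hRD.BoardName = 'Xilinx Zynq UltraScale+ MPSoC ZCU102';

% Reserve an external memory region for DL processor weights and activations
% (base address, address space size)
hRD.registerDeepLearningMemoryAddressSpace(0x80000000, 0x20000000);

% Allow MATLAB to access the DL processor registers over JTAG
hRD.registerDeepLearningTargetInterface("JTAG");
end
```

Registering the memory address space and target interface lets the compiler place the network data in external memory and lets MATLAB program and control the DL processor in your custom design.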

Topics

Generate and Integrate DL Processor IP Core

Deep Learning Processor IP Core

Learn about the generated deep learning processor IP core.

Use the Compiler Output for System Integration

Use the compiler outputs to integrate the generated deep learning processor IP core into your design.

External Memory Data Format

Define the input and output external memory data format.

Deep Learning Processor Register Map

Use MATLAB or other AXI4 master devices to control and program the deep learning processor IP core.

Interface with the Deep Learning Processor IP Core

Choose between batch processing mode and streaming mode to process multiple data frames.

Reference Design Based DL Processor IP Core Integration

Define Custom Board and Reference Design for Deep Learning Processor IP Core Workflow

This example shows how to define and register a custom board and reference design for the deep learning processor IP core workflow.

Featured Examples