Get Started with Deep Learning FPGA Deployment on Xilinx ZCU102 SoC

This example shows how to create, compile, and deploy a dlhdl.Workflow object that has a handwritten digit classification series network as the network object by using the Deep Learning HDL Toolbox™ Support Package for Xilinx FPGA and SoC Devices. Use MATLAB® to retrieve the prediction results from the target device.

Prerequisites

  • Xilinx ZCU102 SoC development kit

  • Deep Learning HDL Toolbox™

  • Deep Learning HDL Toolbox™ Support Package for Xilinx FPGA and SoC Devices

  • Deep Learning Toolbox™

Load the Pretrained Series Network

To load the pretrained series network, which has been trained on the Modified National Institute of Standards and Technology (MNIST) database [1], enter:

snet = getDigitsNetwork;

To view the layers of the pretrained series network, enter:

analyzeNetwork(snet)
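
Alternatively, you can inspect the architecture at the command line instead of in the Deep Learning Network Analyzer. This minimal sketch uses standard Deep Learning Toolbox properties of the network returned by getDigitsNetwork:

% View the layer array of the pretrained series network
snet.Layers

% List only the layer names (cell array of character vectors)
disp({snet.Layers.Name}')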

Create Target Object

Create a target object that has a custom name for your target device and an interface to connect your target device to the host computer. Interface options are JTAG and Ethernet.

hTarget = dlhdl.Target('Xilinx','Interface','Ethernet')
hTarget = 
  Target with properties:

       Vendor: 'Xilinx'
    Interface: Ethernet
    IPAddress: '192.168.1.101'
     Username: 'root'
         Port: 22
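
If your setup differs, you can create the target object in other ways. The following sketch shows a JTAG connection and an Ethernet connection with an explicitly specified board IP address; the address is a placeholder, so substitute the values for your own board. Whichever option you use, the rest of the example is unchanged as long as the object is named hTarget.

% Connect over JTAG instead of Ethernet
hTarget = dlhdl.Target('Xilinx','Interface','JTAG');

% Connect over Ethernet and specify the board IP address explicitly
% (placeholder address; replace with the address of your board)
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet','IPAddress','192.168.1.101');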

Create Workflow Object

Create an object of the dlhdl.Workflow class. Specify the network and the bitstream name during object creation. Specify the saved pretrained MNIST neural network, snet, as the network. Make sure that the bitstream name matches the data type and the FPGA board that you are targeting. In this example, the target FPGA board is the Xilinx ZCU102 SoC board and the bitstream uses the single data type.

hW = dlhdl.Workflow('Network', snet, 'Bitstream', 'zcu102_single', 'Target', hTarget)
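
The toolbox also provides bitstreams for other data types, such as zcu102_int8 for quantized networks. The line below is illustrative only and assumes a network that has already been quantized (for example, with the dlquantizer workflow); it is not needed for this single-data-type example.

% Illustrative only: target the int8 bitstream on the same board with a
% quantized network (snetQuantized is a hypothetical, pre-quantized network)
% hW = dlhdl.Workflow('Network', snetQuantized, 'Bitstream', 'zcu102_int8', 'Target', hTarget);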

Compile the MNIST Series Network

To compile the MNIST series network, run the compile function of the dlhdl.Workflow object.

dn = hW.compile;
### Compiling network for Deep Learning FPGA prototyping ...
### Targeting FPGA bitstream zcu102_single ...
### The network includes the following layers:

     1   'imageinput'    Image Input             28×28×1 images with 'zerocenter' normalization                (SW Layer)
     2   'conv_1'        Convolution             8 3×3×1 convolutions with stride [1  1] and padding 'same'    (HW Layer)
     3   'batchnorm_1'   Batch Normalization     Batch normalization with 8 channels                           (HW Layer)
     4   'relu_1'        ReLU                    ReLU                                                          (HW Layer)
     5   'maxpool_1'     Max Pooling             2×2 max pooling with stride [2  2] and padding [0  0  0  0]   (HW Layer)
     6   'conv_2'        Convolution             16 3×3×8 convolutions with stride [1  1] and padding 'same'   (HW Layer)
     7   'batchnorm_2'   Batch Normalization     Batch normalization with 16 channels                          (HW Layer)
     8   'relu_2'        ReLU                    ReLU                                                          (HW Layer)
     9   'maxpool_2'     Max Pooling             2×2 max pooling with stride [2  2] and padding [0  0  0  0]   (HW Layer)
    10   'conv_3'        Convolution             32 3×3×16 convolutions with stride [1  1] and padding 'same'  (HW Layer)
    11   'batchnorm_3'   Batch Normalization     Batch normalization with 32 channels                          (HW Layer)
    12   'relu_3'        ReLU                    ReLU                                                          (HW Layer)
    13   'fc'            Fully Connected         10 fully connected layer                                      (HW Layer)
    14   'softmax'       Softmax                 softmax                                                       (SW Layer)
    15   'classoutput'   Classification Output   crossentropyex with '0' and 9 other classes                   (SW Layer)

3 Memory Regions created.

Skipping: imageinput
Compiling leg: conv_1>>relu_3 ...
### Optimizing series network: Fused 'nnet.cnn.layer.BatchNormalizationLayer' into 'nnet.cnn.layer.Convolution2DLayer'
### Notice: (Layer  1) The layer 'data' with type 'nnet.cnn.layer.ImageInputLayer' is implemented in software.
### Notice: (Layer 10) The layer 'output' with type 'nnet.cnn.layer.RegressionOutputLayer' is implemented in software.
Compiling leg: conv_1>>relu_3 ... complete.
Compiling leg: fc ...
### Notice: (Layer  1) The layer 'data' with type 'nnet.cnn.layer.ImageInputLayer' is implemented in software.
### Notice: (Layer  3) The layer 'output' with type 'nnet.cnn.layer.RegressionOutputLayer' is implemented in software.
Compiling leg: fc ... complete.
Skipping: softmax
Skipping: classoutput
Creating Schedule...
.......
Creating Schedule...complete.
Creating Status Table...
......
Creating Status Table...complete.
Emitting Schedule...
......
Emitting Schedule...complete.
Emitting Status Table...
........
Emitting Status Table...complete.

### Allocating external memory buffers:

          offset_name          offset_address    allocated_space 
    _______________________    ______________    ________________

    "InputDataOffset"           "0x00000000"     "4.0 MB"        
    "OutputResultOffset"        "0x00400000"     "4.0 MB"        
    "SchedulerDataOffset"       "0x00800000"     "4.0 MB"        
    "SystemBufferOffset"        "0x00c00000"     "28.0 MB"       
    "InstructionDataOffset"     "0x02800000"     "4.0 MB"        
    "ConvWeightDataOffset"      "0x02c00000"     "4.0 MB"        
    "FCWeightDataOffset"        "0x03000000"     "4.0 MB"        
    "EndOffset"                 "0x03400000"     "Total: 52.0 MB"

### Network compilation complete.
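
The compile output dn describes the generated deployment data, and you can inspect it at the command line. If you need to bound the external memory reserved for input frames, recent releases document an InputFrameNumberLimit name-value argument for compile; the value below is only an illustration, so check the compile reference page for your release.

% Inspect the compile output
disp(dn)

% Illustrative only: size the buffers for at most 30 input frames
% (assumes your release supports the InputFrameNumberLimit argument)
% dn = hW.compile('InputFrameNumberLimit', 30);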

Program Bitstream onto FPGA and Download Network Weights

To deploy the network on the Xilinx ZCU102 SoC hardware, run the deploy function of the dlhdl.Workflow object. This function uses the output of the compile function to program the FPGA board with the programming file and downloads the network weights and biases. The deploy function starts programming the FPGA device, displays progress messages, and reports the time it takes to deploy the network.

hW.deploy
### Programming FPGA Bitstream using Ethernet...
Downloading target FPGA device configuration over Ethernet to SD card ...
# Copied /tmp/hdlcoder_rd to /mnt/hdlcoder_rd
# Copying Bitstream hdlcoder_system.bit to /mnt/hdlcoder_rd
# Set Bitstream to hdlcoder_rd/hdlcoder_system.bit
# Copying Devicetree devicetree_dlhdl.dtb to /mnt/hdlcoder_rd
# Set Devicetree to hdlcoder_rd/devicetree_dlhdl.dtb
# Set up boot for Reference Design: 'AXI-Stream DDR Memory Access : 3-AXIM'

Downloading target FPGA device configuration over Ethernet to SD card done. The system will now reboot for persistent changes to take effect.


System is rebooting . . . . . .
### Programming the FPGA bitstream has been completed successfully.
### Loading weights to Conv Processor.
### Conv Weights loaded. Current time is 30-Dec-2020 15:13:03
### Loading weights to FC Processor.
### FC Weights loaded. Current time is 30-Dec-2020 15:13:03
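
If the board has already been programmed with the same bitstream, you may only need to reload the network weights on later runs. The sketch below assumes that deploy in your release supports a ProgramBitstream name-value argument for skipping the programming step; verify the argument name in the deploy reference page before using it.

% Assumption: 'ProgramBitstream' is supported by your release. Setting it
% to false skips FPGA programming and only downloads the weights and biases.
% hW.deploy('ProgramBitstream', false);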

Run Prediction for Example Image

To run a prediction with the predict function of the dlhdl.Workflow object and display the FPGA result, first load the example image:

inputImg = imread('five_28x28.pgm');

Run the prediction with the 'Profile' option set to 'on' to see the latency and throughput results.

[prediction, speed] = hW.predict(single(inputImg),'Profile','on');
### Finished writing input activations.
### Running single input activations.


              Deep Learning Processor Profiler Performance Results

                   LastFrameLatency(cycles)   LastFrameLatency(seconds)       FramesNum      Total Latency     Frames/s
                         -------------             -------------              ---------        ---------       ---------
Network                      98117                  0.00045                       1              98117           2242.2
        conv_1                6607                  0.00003 
        maxpool_1             4716                  0.00002 
        conv_2                4637                  0.00002 
        maxpool_2             2977                  0.00001 
        conv_3                6752                  0.00003 
        fc                   72428                  0.00033 
 * The clock frequency of the DL processor is: 220MHz
[val, idx] = max(prediction);
fprintf('The prediction result is %d\n', idx-1);
The prediction result is 5
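
The prediction output is a vector of ten class scores, one per digit, so subtracting 1 from the index of the maximum score gives the predicted digit. You can also read the label directly from the classification output layer of the network:

% Map the index of the highest score to the class label stored in the
% classification output layer ('0' through '9')
classes = snet.Layers(end).Classes;
predictedLabel = classes(idx)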

Bibliography

  1. LeCun, Y., C. Cortes, and C. J. C. Burges. "The MNIST Database of Handwritten Digits." http://yann.lecun.com/exdb/mnist/.
