Deep Learning HDL Workflow “Data size mismatch” after deployment – Possible device tree / AXI DMA configuration issue (ZCU111)

Hello,
I am working with Deep Learning HDL Toolbox on a custom reference design
with the Xilinx ZCU111 RFSoC board. I am able to successfully compile the network,
but I am encountering a deployment error.
Setup:
- Board: ZCU111 RFSoC
- Interface: PS GEM Ethernet
- Reference design: Custom (based on AXI-Stream DDR Memory Access: 3-AXIM)
- Tool versions: Vivado 2024.1, MATLAB (Deep Learning HDL Toolbox)
Workflow: I compile and deploy the network using:
1. compile(hW)
2. deploy(hW)
3. predict(…)
Problem: The FPGA is programmed successfully, and the system reboots correctly:
- SSH connection is restored
- Ping works
However, during deployment/predict, I get the following error in MATLAB:
  Connection to the bitstream is no longer valid caused by error: Data size mismatch.
Observations:
- Bitstream programming completes successfully
- Device tree is loaded and the system boots
- The error occurs after deployment, during runtime communication with the FPGA
Device Tree Concern: I suspect that the issue may be related to my device tree definitions, especially:
- dlprocessor IP
- AXI stream to memory-mapped interface (AXIS2AXIM / AXI2SMM)
- DMA nodes (MM2S / S2MM)
Possible cause: incorrect child node definitions.
Questions:
1. What are the common causes of “Data size mismatch” in Deep Learning HDL deployments?
2. Can this error be caused by an incorrect device tree configuration?
3. Are there specific DTB requirements for dlprocessor, AXIS2AXIM, and DMA nodes?
4. How can I verify that the MATLAB runtime correctly binds to the DTB nodes?
Also, is there any official or working example of devicetree_dlhdl.dtb for:
  • Deep Learning HDL Toolbox
  • AXI-Stream DDR Memory Access (3-AXIM) reference design
  • ZCU111 (or similar Zynq UltraScale+ platforms)
I am especially interested in correct definitions for:
  • dlprocessor
  • AXIS2AXIM / AXI2SMM
  • DMA nodes (MM2S / S2MM)
  • mathworks-specific properties (mwipcore, channels, etc.)
If anyone has a working DTB or can point to an example (documentation, repo, or generated output), it would be very helpful.
Thanks!

Answers (2)

Some common causes of the “Data size mismatch” error in Deep Learning HDL deployments are:
  1. Stream data width / packing mismatch: If your AXI DMA / AXIS2AXIM path is 64-bit on one side and 32-bit on the other, or if the MathWorks stream channel is described with the wrong “data-format”, the host can write N bytes but the receive side observes a different number of bytes and throws “size mismatch”.
  2. DMA configuration mismatch: If your DT describes DMA as simple mode but hardware is SG (or vice versa), the driver may “probe” but runtime transfers behave incorrectly.
  3. “Sample count register” / transfer length register mismatch: MathWorks’ streaming/IIO channel model commonly uses a sample-count register (or equivalent) to describe how much to transfer per trigger. If the DT points to the wrong register offset (or the IP changed), the host programs one length while hardware reads another, hence mismatch.
  4. Wrong base addresses in DT vs Vivado Address Editor: If reg = <base size> is wrong for mwipcore / dlprocessor / DMA, you can still see devices show up, but reads/writes hit the wrong registers, leading to invalid length/format programming.
  5. Cache/DDR coherency assumptions: Less common for a clean “data size mismatch” message (more often you see corrupted outputs), but if your AXIS2AXIM path relies on ACP/HP port assumptions and the Linux mapping is not coherent, you can see partial transfers and “short reads/writes”.
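As an illustration of points 1, 3, and 4, a MathWorks-style stream channel node typically ties the data format, the sample-count register, and the DMA binding together in one place. The fragment below is a hypothetical sketch only: the compatible string, node names, register offsets, and sizes are assumptions for illustration, not a verified binding for any specific mwipcore or AXIS2AXIM IP version, and must be checked against your generated output and Address Editor.

```dts
/* Hypothetical fragment -- adapt compatible, reg, offsets, and names
   to your IP versions and Vivado Address Editor. */
mwipcore_dl0: mwipcore_dl@a0000000 {
    compatible = "mathworks,mwipcore-v3.0";      /* assumed version string */
    reg = <0x0 0xa0000000 0x0 0x10000>;          /* must match Address Editor */

    mmrd0: mmrd-channel@0 {
        mathworks,dev-name = "mmrd0";
        /* 64-bit samples: host-side packing must match the AXIS width */
        mathworks,data-format = "u64/64>>0";
        /* offset of the transfer-length register inside the IP; if this
           is wrong, host and FPGA disagree on the transfer size */
        mathworks,sample-cnt-reg = <0x40>;
        dmas = <&axi_dma_mm2s 0>;
        dma-names = "mm2s";
    };
};
```

If any one of these three properties disagrees with the hardware, the host programs one transfer length while the IP observes another, which matches the failure mode described in causes 1 and 3.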
You may find the following documentation helpful:
  1. Deploy and Verify Modulation Classification on RFSoC Devices - MATLAB & Simulink
  2. Get Started with Deep Learning FPGA Deployment on Xilinx ZCU102 SoC - MATLAB & Simulink
If the above resources do not help you, I would recommend you reach out to MathWorks Technical Support through https://www.mathworks.com/company/aboutus/contact_us.html.
Ismail Sercan on 18 Apr 2026 at 8:31
Edited: Ismail Sercan on 18 Apr 2026 at 19:00
Hi,
Thank you for the detailed explanation regarding possible causes of the “Data size mismatch” issue. I reviewed my design carefully against each of your points and would like to share my findings:
1. Stream data width / packing mismatch
In my design, the AXI DMA and AXIS2AXIM data paths are consistently configured for 64 bits:
- In the Vivado block design (system_top.tcl), both MM2S and S2MM paths use 64-bit widths.
- In the device tree, the corresponding channels use:
  - mathworks,data-format = "u64/64>>0"
  - xlnx,datawidth = 0x40
So I do not see a 32-bit / 64-bit mismatch or packing inconsistency.
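To make the packing check concrete, the arithmetic behind a width mismatch can be sketched as follows (the sample count is illustrative, not taken from my design):

```python
def transfer_bytes(num_samples: int, data_width_bits: int) -> int:
    """Bytes moved in one transfer at a given AXIS data width."""
    assert data_width_bits % 8 == 0
    return num_samples * (data_width_bits // 8)

# Matched 64-bit path: host and FPGA agree on the byte count.
assert transfer_bytes(1024, 64) == transfer_bytes(1024, 64)

# Mismatched widths (64-bit host view vs a 32-bit stream): the byte
# counts disagree, which would surface as a "data size mismatch".
host = transfer_bytes(1024, 64)   # 8192 bytes
fpga = transfer_bytes(1024, 32)   # 4096 bytes
assert host != fpga
```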
2. DMA configuration mismatch (simple vs scatter-gather)
Both DMA engines are configured for scatter-gather mode:
- In hardware: SG ports are connected and c_sg_length_width is set.
- In the device tree: xlnx,include-sg and xlnx,sg-length-width are present.
Therefore, the device tree and hardware appear consistent in terms of DMA mode.
3. Base address mismatch between Vivado and device tree
The base addresses in Vivado and the DT match:
- dlprocessor → 0xA0000000
- AXIS2AXIM → 0xA0010000
- axi_dma_mm2s → 0xA0020000
- axi_dma_s2mm → 0xA0030000
So I do not observe any incorrect reg mapping.
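This base-address cross-check can be automated once both maps are transcribed; a minimal sketch, assuming the dictionaries are filled in by hand (the values below are simply the ones from this thread):

```python
# Base addresses as transcribed from the Vivado Address Editor.
vivado = {
    "dlprocessor":  0xA0000000,
    "axis2axim":    0xA0010000,
    "axi_dma_mm2s": 0xA0020000,
    "axi_dma_s2mm": 0xA0030000,
}

# Base addresses as transcribed from the device tree 'reg' properties.
devicetree = {
    "dlprocessor":  0xA0000000,
    "axis2axim":    0xA0010000,
    "axi_dma_mm2s": 0xA0020000,
    "axi_dma_s2mm": 0xA0030000,
}

# Collect any peripheral whose DT base differs from (or is missing in)
# the Vivado map.
mismatches = {name: (vivado[name], devicetree.get(name))
              for name in vivado
              if devicetree.get(name) != vivado[name]}
print(mismatches or "all base addresses match")
```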
4. Potential issue: sample count / transfer length register
In the DT, I noticed:
- mathworks,sample-cnt-reg = 0x40 for the AXIS2AXIM stream channels.
However, I am not fully certain that this offset matches the actual register map of the AXIS2AXIM IP version used in my design. If the driver writes the transfer length to an incorrect register offset, this could explain the “data size mismatch” behavior.
5. mwipcore_dl / channel naming and runtime exposure
The DT defines:
- mwipcore_dl@a0000000
- mmrd-channel@0 with mathworks,dev-name = "mmrd0"
- mmwr-channel@1 with mathworks,dev-name = "mmwr0"
However, in some MathWorks documentation/examples I see naming like:
- mwipcore_dl0:mmrd0
I suspect there might be a subtle mismatch between DT node naming and what the driver expects at runtime (especially in /sys/class/mathworks_ip or IIO bindings).
Conclusion / Current Hypothesis
Based on this analysis, I do not see evidence of:
  • data width mismatch
  • DMA mode mismatch
  • base address mismatch
At this point, I suspect that the issue may be related either to:
  • an incorrect register offset (e.g., sample-cnt-reg), or
  • a mismatch between device tree node naming and what the MathWorks driver expects at runtime.
Also, to clarify my current device tree question: in my earlier setup, I did not generate devicetree_dlhdl.dtb using MATLAB. I created it manually based on an example DTB/device tree and adapted it to my design. Now I am trying to switch to the MATLAB-supported workflow for generating the device tree from the reference design, and I would like to understand exactly how and when MATLAB is expected to generate devicetree_dlhdl.dtb.
Device Tree Generation Question
In my reference design, I define the device tree as follows:
hRD.addDeviceTree('refdesign.dts');
hRD.GenerateIPCoreDeviceTreeNodes = true;
hRD.DeviceTreeName = 'devicetree_dlhdl.dtb';
I also create a workflow using:
workflow = hdlcoder.Workflow.DeepLearningProcessor;
and deploy using:
hW = dlhdl.Workflow('Network', net, 'Bitstream', 'dlprocessor.bit', 'Target', hTarget);
compile(hW);
deploy(hW);
However, I do not see any generated devicetree_dlhdl.dtb or related .dts/.dtsi files in my build folders.
From the documentation, the device tree is generated by combining:
  • Board segment
  • Reference design segment
  • HDL Coder IP core segment
My questions are:
  1. At which exact step is devicetree_dlhdl.dtb generated?
  2. Is additional configuration required in the reference design (beyond addDeviceTree) to enable device tree generation?
  3. Can you provide an example of devicetree_dlhdl.dtb or devicetree_dlhdl.dtsi for Deep Learning applications?
I would appreciate clarification on the correct workflow to generate and use devicetree_dlhdl.dtb.
Thanks.

Question asked on 11 Apr 2026 at 18:09
Edited on 18 Apr 2026 at 19:00
