How can I synthesize the Stereo Image Rectification Simulink model for Intel Arria?
Yuval Levental
on 12 Jan 2021
Answered: Yuval Levental
on 18 Apr 2021
This is the Simulink model: https://www.mathworks.com/help/visionhdl/ug/stereoscopic-rectification.html
It says at the bottom that "This design was synthesized for the Intel Arria 10 GX (115S2F45I1SG) FPGA." I tried using the HDL Workflow Advisor for this board, but at the Target Platform Interfaces step, I couldn't find a Pixel Control bus input or output.
Also, I want to create the IP Core first.
Eventually, I want to synthesize this model for Zynq hardware https://www.mathworks.com/help/supportpkg/xilinxzynqbasedvision/examples/developing-vision-algorithms-for-zynq-based-hardware.html
Accepted Answer
Steve Kuznicki
on 13 Jan 2021
If you want to target a Zynq device, I would work towards that.
There are a few ways you can create the IP Core and deploy it for use in Vivado IPI. This really depends on what video format your camera acquisition IP Cores deliver: MIPI, Camera Link, AXI-Stream-Video, or some other custom format.
Since there is no default platform target that supports two pixelcontrol bus ports, you will need to do one of the following:
1) Create your own custom reference design plugin_rd.m file that has two pixel-streaming inputs/outputs (see the first sketch after this list), or
2) Wrap the top-level subsystem (HDLStereoImageRectification) in another subsystem that contains your "adaptor" to interface to your stereo pair of streaming interfaces. This subsystem would have the same interface as your input/output video acquisition IP Cores, or
3) Split the pixelcontrol bus out into its five control lines and generate a basic signal IP core (see the second sketch after this list). You would need to develop other IP Cores in Vivado (or Simulink) to translate to the pixelcontrol bus protocol. This would be helpful if you really want to synthesize the code to get an accurate estimate of resource usage.
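For option 1, here is a minimal sketch of what a plugin_rd.m with two pixel-stream inputs and two outputs could look like. The board name, Tcl file, port names, and connection strings below are placeholders you would replace with values from your own Vivado block design; they are not taken from the shipped example.

function hRD = plugin_rd()
% Sketch of a custom HDL Coder reference design exposing two pixel-stream
% inputs and two outputs. All names (board, Tcl file, ports, connections)
% are placeholders for illustration only.
hRD = hdlcoder.ReferenceDesign('SynthesisTool', 'Xilinx Vivado');
hRD.ReferenceDesignName  = 'Stereo Pixel Streaming';
hRD.BoardName            = 'ZedBoard';            % replace with your board
hRD.SupportedToolVersion = {'2020.2'};
% Block design exported from your own Vivado project (placeholder file name)
hRD.addCustomVivadoDesign('CustomBlockDesignTcl', 'stereo_base_design.tcl');
% AXI4-Lite register interface for the generated IP core
hRD.addAXI4SlaveInterface( ...
    'InterfaceConnection', 'axi_interconnect_0/M00_AXI', ...
    'BaseAddress',         '0x400D0000', ...
    'MasterAddressSpace',  'processing_system7_0/Data');
% Left and right pixel-stream inputs (connections are placeholders)
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Pixel In Left', ...
    'InterfaceType',       'IN', ...
    'PortName',            'pixel_in_left', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'capture_left_0/pixel_out');
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Pixel In Right', ...
    'InterfaceType',       'IN', ...
    'PortName',            'pixel_in_right', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'capture_right_0/pixel_out');
% Left and right rectified pixel-stream outputs
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Pixel Out Left', ...
    'InterfaceType',       'OUT', ...
    'PortName',            'pixel_out_left', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'display_0/pixel_in_left');
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Pixel Out Right', ...
    'InterfaceType',       'OUT', ...
    'PortName',            'pixel_out_right', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'display_0/pixel_in_right');
end

Note that the five pixelcontrol lines for each stream would still need their own port mappings (or additional interfaces) when you run the IP Core Generation workflow.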
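For option 3, the pixelcontrol bus defined by Vision HDL Toolbox is just a bundle of five boolean control signals (hStart, hEnd, vStart, vEnd, valid). A quick MATLAB illustration of the split is below; in the model itself the equivalent is a Bus Selector (or five scalar DUT ports) in place of the pixelcontrol bus port.

% Pack and unpack the five pixelcontrol signals (Vision HDL Toolbox)
ctrl = pixelcontrolstruct(true, false, true, false, true);  % hStart, hEnd, vStart, vEnd, valid
[hStart, hEnd, vStart, vEnd, validPix] = pixelcontrolsignals(ctrl);
% Exposing these five lines as plain DUT ports lets the HDL Workflow Advisor
% map them to ordinary signals on the generated IP core.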
If you are looking at the (Xilinx) AXI-Stream-Video interface, then our SoC Blockset product provides an HDL Workflow Advisor IP Core Generation target that allows multiple video interface inputs. This would generate an IP Core that has two AXI-Stream-Video input/output interfaces.
In the end, if you really want to deploy this, you will need to determine what video format your input cameras deliver. Typically, stereo applications rely on sensors directly interfaced to the FPGA (e.g. MIPI CSI-2 or an LVDS interface).
More Answers (1)