
visionhdl.PixelStreamAligner

Align two streams of pixel data

Description

The visionhdl.PixelStreamAligner System object™ synchronizes two pixel streams by delaying one stream to match the timing of a reference stream. Many Vision HDL Toolbox™ algorithms delay the pixel stream, and the amount of delay can change as you adjust algorithm parameters. You can use this object to align streams for overlaying, comparing, or combining two streams such as in a Gaussian blur operation. Use the delayed stream as the refpixel and refctrl arguments. Use the earlier stream as the pixelin and ctrlin arguments.

This waveform diagram shows the input streams, pixelin and refpixel, and their associated control signals. The reference input frame starts later than the pixelin frame. The output signals show that the object delays pixelin to match the reference stream, and that both output streams share control signals. The waveform shows the short latency between the input and output control signals. In this simulation, to accommodate the delay of four lines between the input streams, the MaxNumberOfLines property must be set to at least 4.

For details on the pixel control bus and the dimensions of a video frame, see Streaming Pixel Interface.

To align two streams of pixel data:

  1. Create the visionhdl.PixelStreamAligner object and set its properties.

  2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects?
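
For illustration, this minimal sketch shows those two steps in a per-pixel simulation loop. It assumes pixIn and ctrlIn hold the earlier, unprocessed stream, procPix and procCtrl hold the delayed output of another Vision HDL Toolbox object, and numPixelsPerFrame is the serialized frame length; those variable names are assumptions, not part of the object interface.

aligner = visionhdl.PixelStreamAligner;
for p = 1:numPixelsPerFrame
    % Delayed branch drives the reference ports; the earlier branch is delayed to match
    [origOut(p),procOut(p),ctrlOut(p)] = ...
        aligner(pixIn(p),ctrlIn(p),procPix(p),procCtrl(p));
end

The returned pixel streams, origOut and procOut, are aligned and share the control signals in ctrlOut, so you can combine them sample by sample.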

Creation

Description

aligner = visionhdl.PixelStreamAligner(Name,Value) creates a System object that synchronizes a pixel stream with a reference pixel stream. Set properties using one or more name-value pairs. Enclose each property name in single quotes. For example, 'MaxNumberOfLines',16 sets the buffer depth that accommodates the timing offset between the two input streams.
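
For example, assuming the defaults are acceptable for the remaining properties, this call creates an aligner whose buffer can hold up to 16 lines of the earlier stream:

aligner = visionhdl.PixelStreamAligner('MaxNumberOfLines',16);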


Properties


Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.

If a property is tunable, you can change its value at any time.

For more information on changing property values, see System Design in MATLAB Using System Objects.

LineBufferSize

Size of the line memory buffer, specified as a power of two that accommodates the number of active pixels in a horizontal line. If you specify a value that is not a power of two, the object uses the next largest power of two. The object implements a circular buffer of 2^M locations, where M is MaxNumberOfLines + log2(LineBufferSize).

MaxNumberOfLines

Buffer depth that accommodates the timing offset between input streams, specified as a positive integer. The object implements a circular buffer of 2^M locations, where M is MaxNumberOfLines + log2(LineBufferSize), and a line address buffer of MaxNumberOfLines locations. The circular memory stores the earlier input lines until the reference control signals arrive. The line address buffer stores the address of the start of each line. When the reference control signals arrive, the object uses the stored address to read and send the delayed line. This property must accommodate the difference in timing between the two input streams, including the internal latency before the object reads the first line. During simulation, the object warns when an overflow occurs. To avoid the overflow condition, increase MaxNumberOfLines. The delay between streams cannot exceed an entire frame.
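
As a sizing sketch, suppose the processing branch arrives about four lines later than the original stream, as in the waveform described earlier. The headroom chosen here is an assumption for illustration, not a requirement:

% Processing branch arrives about 4 lines late; allow headroom for internal latency
aligner = visionhdl.PixelStreamAligner( ...
      'MaxNumberOfLines',8, ...
      'LineBufferSize',2048); % next power of two >= active pixels per line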

Usage

Description

[pixelout,refout,ctrlout] = aligner(pixelin,ctrlin,refpixel,refctrl) synchronizes a pixel stream to a reference stream, refpixel and refctrl, by delaying the first input, pixelin, to align with the reference input. The resulting aligned pixel streams, pixelout and refout, share the control signals, ctrlout. You can use this object to align streams for overlay or comparison.


This object uses a streaming pixel interface with a structure for frame control signals. This interface enables the object to operate independently of image size and format and to connect with other Vision HDL Toolbox objects. The object accepts and returns a scalar pixel value and control signals as a structure containing five signals. The control signals indicate the validity of each pixel and its location in the frame. To convert a pixel matrix into a pixel stream and control signals, use the visionhdl.FrameToPixels object. For a description of the interface, see Streaming Pixel Interface.
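
As a reminder, each control input and output is a pixelcontrol structure whose five logical fields are described in Pixel Control Structure. This sketch builds one with the pixelcontrolstruct utility function used later on this page; the literal values are placeholders marking the first valid pixel of a frame.

ctrl = pixelcontrolstruct;   % structure with fields hStart, hEnd, vStart, vEnd, valid
ctrl.hStart = true;          % first pixel of a line
ctrl.hEnd   = false;         % not the last pixel of a line
ctrl.vStart = true;          % first pixel of a frame
ctrl.vEnd   = false;         % not the last pixel of a frame
ctrl.valid  = true;          % this pixel carries valid data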

Input Arguments


pixelin

Input pixel stream, specified as a scalar intensity value or a row vector of three values representing one pixel in R'G'B' or Y'CbCr color space. Because the object delays this pixel stream to match the control signals of the reference stream, pixelin must be the earlier of the two streams.

You can simulate System objects with a multipixel streaming interface, but you cannot generate HDL code for System objects that use multipixel streams. To generate HDL code for multipixel algorithms, use the equivalent Simulink® blocks.

The software supports double and single data types for simulation, but not for HDL code generation.

Data Types: fi | uint | int | logical | double | single

ctrlin

Control signals accompanying the input pixel stream, specified as a pixelcontrol structure containing five logical data type signals. The signals describe the validity of the pixel and its location in the frame. For more details, see Pixel Control Structure.

Data Types: struct

refpixel

Reference pixel stream, specified as a scalar intensity value or a row vector of three values representing one pixel in R'G'B' or Y'CbCr color space. Because the object delays the pixelin input stream to match the reference control signals, refpixel must be the later of the two streams. The reference data and its control signals pass through the object with a small delay.

You can simulate System objects with a multipixel streaming interface, but you cannot generate HDL code for System objects that use multipixel streams. To generate HDL code for multipixel algorithms, use the equivalent Simulink blocks.

The software supports double and single data types for simulation, but not for HDL code generation.

Data Types: fi | uint | int | logical | double | single

refctrl

Reference pixel stream control signals, specified as a pixelcontrol structure containing five logical signals. The object uses these control signals for the aligned output stream. For more details, see Pixel Control Structure.

Data Types: struct

Output Arguments


pixelout

Output pixel, returned as a scalar intensity value or a row vector of three values representing one pixel in R'G'B' or Y'CbCr color space.

The data type is the same as the data type of pixelin.

Data Types: fi | uint | int | logical | double | single

refout

Output reference pixel, returned as a scalar intensity value or a row vector of three values representing one pixel in R'G'B' or Y'CbCr color space.

The data type is the same as the data type of refpixel.

Data Types: fi | uint | int | logical | double | single

ctrlout

Pixel stream control signals for both output streams, returned as a pixelcontrol structure containing five logical signals. These signals are a delayed version of the refctrl input. For more details, see Pixel Control Structure.

Data Types: struct

Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:

release(obj)


step - Run System object algorithm
release - Release resources and allow changes to System object property values and input characteristics
reset - Reset internal states of System object
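
For example, when you reuse one aligner object across unrelated simulations, you might clear its internal line buffers with reset and unlock it with release before changing a nontunable property. The object name here follows the earlier sketches and is only an assumption.

reset(aligner)     % clear buffered lines before an unrelated frame sequence
release(aligner)   % unlock the object so nontunable properties can be changed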

Examples


Overlay a processed video stream on the input stream.

Prepare a test image by selecting a portion of an image file.

frmActivePixels = 64;
frmActiveLines = 48;
frmOrig = imread('rice.png');
frmInput = frmOrig(1:frmActiveLines,1:frmActivePixels);
figure
imshow(frmInput,'InitialMagnification',300)
title 'Input Image'

Create a serializer and specify the size of the inactive pixel regions.

frm2pix = visionhdl.FrameToPixels( ...
      'NumComponents',1, ...
      'VideoFormat','custom', ...
      'ActivePixelsPerLine',frmActivePixels, ...
      'ActiveVideoLines',frmActiveLines, ...
      'TotalPixelsPerLine',frmActivePixels+10, ...
      'TotalVideoLines',frmActiveLines+10, ...
      'StartingActiveLine',6, ...
      'FrontPorch',5);

Serialize the test image. pixIn is a vector of intensity values. ctrlIn is a vector of control signal structures. Preallocate vectors for the output signals.

[pixIn,ctrlIn] = frm2pix(frmInput);

[~,~,numPixelsPerFrame] = getparamfromfrm2pix(frm2pix);
ctrlOut = repmat(pixelcontrolstruct,numPixelsPerFrame,1);
overlayOut = zeros(numPixelsPerFrame,1,'uint8');

Write a function that creates and calls the System objects to detect edges and align the edge data with the original pixel data. The edge results are delayed by the latency of the visionhdl.EdgeDetector object. The associated control signals become the reference for the aligned stream. You can generate HDL from this function.

function [pixelOut,ctrlOut] = EdgeDetectandOverlay(pixelIn,ctrlIn)
% EdgeDetectandOverlay 
% Detect edges in an input stream, and overlay the edge data onto the 
% original stream. 
% pixelIn and ctrlIn are a scalar pixel and its associated pixelcontrol
% structure, respectively.
% You can generate HDL code from this function.

  persistent align
  if isempty(align)
    align = visionhdl.PixelStreamAligner;
  end    
  
  persistent find_edges
  if isempty(find_edges)
    find_edges = visionhdl.EdgeDetector;
  end
  
  [edgeOut,edgeCtrl] = find_edges(pixelIn,ctrlIn);
  [origOut,alignedEdgeOut,ctrlOut] = align(pixelIn,ctrlIn,edgeOut,edgeCtrl);
  if (alignedEdgeOut)
      pixelOut = uint8(0); % Set edge pixels to black
  else
      pixelOut = origOut;
  end
end

For each pixel in the frame, call the function to search for edges and align the edge data with the input stream.

for p = 1:numPixelsPerFrame
    [overlayOut(p),ctrlOut(p)] = EdgeDetectandOverlay(pixIn(p),ctrlIn(p));
end

Create a deserializer object with a format matching that of the serializer. Convert the pixel stream to an image frame by calling the deserializer object. Display the resulting image.

pix2frm = visionhdl.PixelsToFrame( ...
      'NumComponents',1, ...
      'VideoFormat','custom', ...
      'ActivePixelsPerLine',frmActivePixels, ...
      'ActiveVideoLines',frmActiveLines, ...
      'TotalPixelsPerLine',frmActivePixels+10);
[frmOutput,frmValid] = pix2frm(overlayOut,ctrlOut);
if frmValid
    figure
    imshow(frmOutput,'InitialMagnification',300)
    title 'Output Image'
end


Version History

Introduced in R2017a
