Main Content

vision.BlockMatcher

Estimate motion between images or video frames

Description

To estimate motion between images or video frames:

  1. Create the vision.BlockMatcher object and set its properties.

  2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects?

Creation

Description


blkMatcher = vision.BlockMatcher returns an object, blkMatcher, that estimates motion between two images or two video frames. The object performs this estimation using a block matching method by moving a block of pixels over a search region.

blkMatcher = vision.BlockMatcher(Name,Value) sets properties using one or more name-value pairs. Enclose each property name in quotes. For example, blkMatcher = vision.BlockMatcher('ReferenceFrameSource','Input port')
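For example, this sketch (with illustrative property values) combines name-value pairs at construction with dot assignment before the first call:

blkMatcher = vision.BlockMatcher('ReferenceFrameSource','Input port', ...
    'BlockSize',[35 35]);
blkMatcher.OutputValue = 'Horizontal and vertical components in complex form';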

Properties


Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.

If a property is tunable, you can change its value at any time.

For more information on changing property values, see System Design in MATLAB Using System Objects.
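For example, a minimal sketch of the locking behavior (the input image is arbitrary):

bm = vision.BlockMatcher;       % nontunable properties can be changed here
V = bm(rand(64));               % the first call locks the object
release(bm)                     % unlock it to change nontunable properties
bm.BlockSize = [9 9];           % allowed again after release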

ReferenceFrameSource: Reference frame source, specified as 'Input port' or 'Property'. When you set this property to 'Input port', you must supply a reference frame input when you call the block matcher object.

ReferenceFrameDelay: Number of frames between reference and current frames, specified as a scalar integer greater than or equal to zero. This property applies when you set the ReferenceFrameSource property to 'Property'.
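For instance, with ReferenceFrameDelay set to 1 (an illustrative value), each call estimates motion relative to the frame supplied one call earlier. The sketch below assumes the visiontraffic.avi sample video is available:

bm = vision.BlockMatcher('ReferenceFrameSource','Property', ...
    'ReferenceFrameDelay',1);
reader = VideoReader('visiontraffic.avi');
while hasFrame(reader)
    frame = im2double(im2gray(readFrame(reader)));
    V = bm(frame);               % motion relative to the frame one call earlier
end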

SearchMethod: Best match search method, specified as 'Exhaustive' or 'Three-step'. This property specifies how the object locates the block of pixels in frame k+1 that best matches the block of pixels in frame k. If you set this property to 'Exhaustive', the block matcher selects the location by moving the block over the search region one pixel at a time, which is computationally expensive.

If you set this property to 'Three-step', the block matcher searches for the block of pixels in frame k+1 that best matches the block of pixels in frame k using a steadily decreasing step size. The object begins with a step size approximately equal to half the maximum search range. In each step, the object compares the central point of the search region with eight search points located on the boundaries of the region and moves the central point to the search point whose value is closest to that of the central point. The object then halves the step size and repeats the process. This option is less computationally expensive, though it does not always find the optimal solution.
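The following standalone sketch illustrates the three-step idea for a single block, using the mean absolute difference as the match criterion. The block location, search range, and images are chosen for illustration only; this is not the object's internal implementation:

ref  = im2double(im2gray(imread('onion.png')));   % frame k
cur  = imtranslate(ref,[5,5]);                    % frame k+1 with a known shift
blk  = ref(40:74,40:74);                          % 35-by-35 block from frame k
maxRange = 7;                                     % hypothetical search range
step = ceil(maxRange/2);                          % start near half the range
cy = 57; cx = 57;                                 % center of the block in frame k
best = [cy cx];
while step >= 1
    % center point plus eight search points at distance "step"
    candidates = best + step*[0 0; -1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];
    bestErr = inf;
    for i = 1:size(candidates,1)
        r = candidates(i,1); c = candidates(i,2);
        patch = cur(r-17:r+17, c-17:c+17);        % candidate block in frame k+1
        err = mean(abs(patch - blk),'all');       % mean absolute difference (MAD)
        if err < bestErr
            bestErr = err;
            bestIdx = i;
        end
    end
    best = candidates(bestIdx,:);                 % move the center to the best point
    step = floor(step/2);                         % halve the step size
end
motionEstimate = best - [cy cx]                   % estimated [row col] displacement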

BlockSize: Size of block, specified in pixels as a two-element vector.

MaximumDisplacement: Maximum displacement search, specified as a two-element vector. Specify the maximum number of pixels that any center pixel in a block of pixels can move from image to image or from frame to frame. The block matcher uses this property to determine the size of the search region.
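For example (illustrative values), a 17-by-17 block whose center pixel may move up to 7 pixels in each direction:

bm = vision.BlockMatcher('BlockSize',[17 17],'MaximumDisplacement',[7 7]);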

MatchCriteria: Match criteria between blocks, specified as 'Mean square error (MSE)' or 'Mean absolute difference (MAD)'.
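For two equally sized blocks A and B, the criteria reduce to the following (a standalone sketch with illustrative blocks, not the object's internal code):

I = im2double(im2gray(imread('onion.png')));
A = I(1:35,1:35);                 % example 35-by-35 block
B = I(6:40,6:40);                 % second block, shifted for illustration
mseVal = mean((A - B).^2,'all');  % 'Mean square error (MSE)'
madVal = mean(abs(A - B),'all');  % 'Mean absolute difference (MAD)'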

OutputValue: Motion output form, specified as 'Magnitude-squared' or 'Horizontal and vertical components in complex form'.

Overlap: Input image subdivision overlap, specified in pixels as a two-element vector.

Fixed-Point Properties

RoundingMethod: Rounding method for fixed-point operations, specified as 'Floor', 'Ceiling', 'Convergent', 'Nearest', 'Round', 'Simplest', or 'Zero'.

OverflowAction: Action to take when integer input is out of range, specified as 'Wrap' or 'Saturate'.

ProductDataType: Product data type, specified as 'Same as input' or 'Custom'.

CustomProductDataType: Product word and fraction lengths, specified as a scaled numerictype (Fixed-Point Designer) object. This property applies only when you set the ProductDataType property to 'Custom'.

AccumulatorDataType: Data type of accumulator, specified as 'Same as product', 'Same as input', or 'Custom'.

CustomAccumulatorDataType: Accumulator word and fraction lengths, specified as a scaled numerictype (Fixed-Point Designer) object. This property applies only when you set the AccumulatorDataType property to 'Custom'.
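For the 'Custom' settings, supply the types as scaled numerictype objects. A minimal sketch, assuming Fixed-Point Designer is available and using illustrative word and fraction lengths:

bm = vision.BlockMatcher('ReferenceFrameSource','Input port');
bm.ProductDataType = 'Custom';
bm.CustomProductDataType = numerictype([],32,30);       % 32-bit word, 30-bit fraction length
bm.AccumulatorDataType = 'Custom';
bm.CustomAccumulatorDataType = numerictype([],32,30);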

Usage

Description


V = blkMatcher(I) computes the motion of input image I from one video frame to another, and returns V as a matrix of velocity magnitudes.

C = blkMatcher(I) computes the motion of input image I from one video frame to another, and returns C as a complex matrix of horizontal and vertical components when you set the OutputValue property to 'Horizontal and vertical components in complex form'.

Y = blkMatcher(I,iref) computes the motion between input image I and reference image iref when you set the ReferenceFrameSource property to 'Input port'.
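A minimal sketch of the three call forms, using an illustrative image pair:

I    = im2double(im2gray(imread('onion.png')));
iref = imtranslate(I,[5,5]);

bmV = vision.BlockMatcher;                              % default 'Magnitude-squared' output
V = bmV(I);                                             % V = blkMatcher(I)

bmC = vision.BlockMatcher('OutputValue', ...
    'Horizontal and vertical components in complex form');
C = bmC(I);                                             % C = blkMatcher(I); real part horizontal, imaginary part vertical

bmY = vision.BlockMatcher('ReferenceFrameSource','Input port');
Y = bmY(I,iref);                                        % Y = blkMatcher(I,iref)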

Input Arguments


Input data, specified as a scalar, vector, or matrix of intensity values.

Input reference data, specified as a scalar, vector, or matrix of intensity values.

Output Arguments


Velocity magnitudes, returned as a matrix.

Horizontal and vertical components, returned as a complex matrix.

Motion between image and reference image, returned as a matrix.

Object Functions

To use an object function, specify the System object™ as the first input argument. For example, to release system resources of a System object named obj, use this syntax:

release(obj)


step: Run System object algorithm
release: Release resources and allow changes to System object property values and input characteristics
reset: Reset internal states of System object

Examples


Read an RGB image and convert it to grayscale.

img1 = im2double(im2gray(imread('onion.png')));

Create a block matcher object and an alpha blender object.

hbm = vision.BlockMatcher('ReferenceFrameSource',...
        'Input port','BlockSize',[35 35]);
hbm.OutputValue = 'Horizontal and vertical components in complex form';
halphablend = vision.AlphaBlender;

Offset the first image by [5 5] pixels to create a second image.

img2 = imtranslate(img1,[5,5]);

Compute motion for the two images.

motion = hbm(img1,img2);

Blend the two images.

img12 = halphablend(img2,img1);

Use a quiver plot to show the direction of motion on the images.

[X,Y] = meshgrid(1:35:size(img1,2),1:35:size(img1,1));         
imshow(img12)
hold on
quiver(X(:),Y(:),real(motion(:)),imag(motion(:)),0)
hold off

Version History

Introduced in R2012a