triangulateMultiview

3-D locations of undistorted points matched across multiple images

Description

xyzPoints = triangulateMultiview(pointTracks,cameraPoses,intrinsics) returns locations of 3-D world points that correspond to points matched across multiple images taken with a calibrated camera.

[xyzPoints,reprojectionErrors] = triangulateMultiview(___) additionally returns an N-element vector, reprojectionErrors, that contains the mean reprojection error for each 3-D world point.

Examples

Load images.

imageDir = fullfile(toolboxdir('vision'),'visiondata',...
    'structureFromMotion');
images = imageSet(imageDir);

Load precomputed camera parameters.

data = load(fullfile(imageDir,'cameraParams.mat'));

Get camera intrinsic parameters.

intrinsics = data.cameraParams.Intrinsics;

Compute features for the first image.

I = rgb2gray(read(images,1));
I = undistortImage(I,intrinsics);
pointsPrev = detectSURFFeatures(I);
[featuresPrev,pointsPrev] = extractFeatures(I,pointsPrev);

Load camera locations and orientations.

load(fullfile(imageDir,'cameraPoses.mat'));

Create a viewSet object.

vSet = viewSet;
vSet = addView(vSet, 1,'Points',pointsPrev,'Orientation',...
    orientations(:,:,1),'Location',locations(1,:));

Compute features and matches for the rest of the images.

for i = 2:images.Count
  I = rgb2gray(read(images, i));
  I = undistortImage(I,intrinsics);
  points = detectSURFFeatures(I);
  [features,points] = extractFeatures(I,points);
  vSet = addView(vSet,i,'Points',points,'Orientation',...
      orientations(:,:,i),'Location',locations(i,:));
  pairsIdx = matchFeatures(featuresPrev,features,'MatchThreshold',5);
  vSet = addConnection(vSet,i-1,i,'Matches',pairsIdx);
  featuresPrev = features;
end

Find point tracks.

tracks = findTracks(vSet);

Get camera poses.

cameraPoses = poses(vSet);

Find 3-D world points.

[xyzPoints,errors] = triangulateMultiview(tracks,cameraPoses,intrinsics);
z = xyzPoints(:,3);
idx = errors < 5 & z > 0 & z < 20;
pcshow(xyzPoints(idx, :),'VerticalAxis','y','VerticalAxisDir','down','MarkerSize',30);
hold on
plotCamera(cameraPoses, 'Size', 0.2);
hold off

Input Arguments

pointTracks

Matching points across multiple images, specified as an N-element array of pointTrack objects. Each element contains two or more points that match across multiple images.
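
For illustration, this is a minimal sketch of constructing a single track by hand. The view IDs and image coordinates below are hypothetical:

% One scene point observed in views 1, 2, and 3 (hypothetical coordinates)
viewIds = [1 2 3];
points  = [100.5 210.2; 101.1 209.8; 102.0 209.3];
track = pointTrack(viewIds,points);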

cameraPoses

Camera pose information, specified as a three-column table with columns ViewId, Orientation, and Location. The view IDs correspond to the view IDs in the pointTracks elements. Specify each orientation as a 3-by-3 rotation matrix and each location as a three-element vector. You can obtain cameraPoses from a viewSet object by using its poses method.
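
If you are not working with a viewSet object, you can build the table directly. This is a minimal sketch with two hypothetical views, the first at the origin and the second translated along the x-axis:

% Hypothetical poses for two views
ViewId = uint32([1; 2]);
Orientation = {eye(3); eye(3)};
Location = {[0 0 0]; [1 0 0]};
cameraPoses = table(ViewId,Orientation,Location);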

intrinsics

Camera intrinsics, specified as a cameraIntrinsics object or an M-element array of cameraIntrinsics objects, where M is the number of camera poses. Use a single object when all images are captured by the same camera. Use an array when the images are captured by different cameras.
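
For example, if you do not have saved calibration results, you can construct a cameraIntrinsics object directly. The focal length, principal point, and image size below are hypothetical values:

% Hypothetical intrinsics for a 640-by-480 camera
focalLength = [800 800];      % [fx fy] in pixels
principalPoint = [320 240];   % [cx cy] in pixels
imageSize = [480 640];        % [mrows ncols]
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);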

Output Arguments

xyzPoints

3-D world points, returned as an N-by-3 array of [x,y,z] coordinates.

Data Types: single | double

reprojectionErrors

Reprojection errors, returned as an N-by-1 vector. The function projects each world point back into each image in which the point appears. Then, in each of those images, the function calculates the reprojection error as the distance between the detected and the reprojected point. The reprojectionErrors vector contains the mean reprojection error for each world point.
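
As a sketch of this computation, you can reproject a single track by hand and compare the result with the returned error. This assumes the tracks, cameraPoses, intrinsics, xyzPoints, and errors variables from the example above:

% Reproject the first world point into every view of its track
track = tracks(1);
nViews = numel(track.ViewIds);
d = zeros(nViews,1);
for k = 1:nViews
    row = cameraPoses.ViewId == track.ViewIds(k);
    [R,t] = cameraPoseToExtrinsics(cameraPoses.Orientation{row}, ...
        cameraPoses.Location{row});
    reprojected = worldToImage(intrinsics,R,t,xyzPoints(1,:));
    d(k) = norm(reprojected - track.Points(k,:));
end
meanError = mean(d);   % expected to be close to errors(1)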

Tips

The triangulateMultiview function does not account for lens distortion. You can undistort the images before detecting the points by using undistortImage. Alternatively, you can undistort the points directly by using undistortPoints.
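
For example, this is a minimal sketch of the second approach, undistorting the detected point locations instead of the whole image. It assumes the images and intrinsics variables from the example above:

% Detect points in the original (distorted) image, then undistort them
I = rgb2gray(read(images,1));
points = detectSURFFeatures(I);
undistortedLocations = undistortPoints(points.Location,intrinsics);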

References

[1] Hartley, R., and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003, p. 312.

Introduced in R2016a