Hi there!
I am working with LiDAR data gathered using an Ouster sensor and MATLAB's "Computer Vision Toolbox".
I would like to plot a recorded point cloud as a "movie", i.e. showing the different frames one after the other.
Since the "VelodyneReader" does not work for Ouster data, I tried to edit the point cloud manually, i.e. I tried to merge different frames into one single point cloud. To do this, I tried to access the 'Location' of the point cloud using 'ptCloud.Location'. However, 'Location' is a read-only property.
Does anyone know how to change the 'Location' property of a 'pointCloud' from read-only to something that I can edit?
Thanks a lot for your help!
Cheers,
Raffi

 Accepted Answer

Harsha Priya Daggubati
Harsha Priya Daggubati on 6 Apr 2020

0 votes

I guess it would be better if you access the Location property and store it in a MATLAB variable, make the desired modifications, and create a new point cloud object from the modified points.
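For example (a minimal sketch; `ptCloud` stands for any existing point cloud, and the z-shift is just an illustrative modification):

```matlab
% Sketch: copy out the read-only data, edit it, and rebuild the object.
xyz = ptCloud.Location;        % Location is read-only, but copying it is fine
xyz(:,3) = xyz(:,3) + 1.5;     % illustrative edit: raise all points by 1.5 m
                               % (assumes an unorganized M-by-3 cloud)
ptCloudNew = pointCloud(xyz);  % new pointCloud object with the edited points
```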

7 comments

Thanks for your answer.
I don't fully understand what you mean.
I am not trying to create multiple "Point Cloud" objects, I am just trying to put different frames of an Ouster LiDAR scan into one and the same "Point Cloud" object, but as different "layers" in the 'ptCloud.Location' place.
I think that is what is done by the VelodyneReader automatically, but since I am not using Velodyne but Ouster, I would like to do this manually.
I would like to load the different frames into single point clouds, then take each of these point clouds and put them as a different "layer" in the .Location place of a "Master" point cloud.
Something like:
% loading frames into individual Point Clouds
ptCloud1 = pointCloud(Frame1);
ptCloud2 = pointCloud(Frame2);
ptCloud3 = pointCloud(Frame3);
% ...
% Putting all the individual point clouds into different .Locations of Master point cloud
ptCloud.Location(:,:,1)=ptCloud1.Location; %ptCloud = Master Point Cloud
ptCloud.Location(:,:,2)=ptCloud2.Location;
ptCloud.Location(:,:,3)=ptCloud3.Location;
% ...
% the last three lines do not work:
% ERROR: You cannot set the read-only property 'Location' of pointCloud.
If I get the point cloud "ptCloud" with all the different frames in the .Location places, I could take the "Master" point cloud and "play the movie" using pcplayer and view.
I hope this helps. Thank you so much for your help!
Hi, Can you try doing this
% loading frames into individual Point Clouds
ptCloud1 = pointCloud(Frame1);
ptCloud2 = pointCloud(Frame2);
ptCloud3 = pointCloud(Frame3);
% ...
% Putting all the individual point clouds into different .Locations of Master point cloud
Location(:,:,1)=ptCloud1.Location; %ptCloud = Master Point Cloud
Location(:,:,2)=ptCloud2.Location;
Location(:,:,3)=ptCloud3.Location;
% ...
ptCloud = pointCloud(Location);
Raffaele Spielmann
Raffaele Spielmann on 6 Apr 2020
Hi,
Thanks a lot for your help. Unfortunately, it still does not work, and multiple error messages show up:
Error using pointclouds.internal.pc.pointCloudImpl>validateAndParseInputs (line 1420)
The value of 'xyzPoints' is invalid. Expected input 'xyzPoints' to be of size M-by-3 or M-by-N-by-3.
Error in pointclouds.internal.pc.pointCloudImpl (line 69)
[xyzPoints, C, nv, I] = validateAndParseInputs(varargin{:});
Error in pointCloud (line 109)
this = this@pointclouds.internal.pc.pointCloudImpl(varargin{:});
Error in LiDAR_Processing_Merging_ptClouds (line 15)
ptCloud = pointCloud(Location);
Do you know what is the problem?
Thanks again for your help and patience!
Cheers
Hi,
It is clear that there is a limitation on the dimensions of 'xyzPoints', in your case Location. It expects the size to be M-by-3 or M-by-N-by-3. I guess you have more than 3 frames. Can you try reshaping your Location matrix to a valid size, if that serves your purpose?
Also, my suggestion would be to append the location data from the various point clouds into an M-by-3 matrix, instead of increasing the number of dimensions:
Location = [ptCloud1.Location ; ptCloud2.Location ; ptCloud3.Location]
Hope this helps!
Hi,
Thanks for your help.
I think things are getting better now. However, maybe I misunderstood something.
The .Location of the point clouds expects an M-by-N-by-3 array. Is M basically the number of frames, N the number of data points per frame, and 3 the 'xyz' of each frame, is this right?
Let's say I have 32 frames, so
ptCloud1 = pointCloud(Frame1);
% ...
ptCloud32 = pointCloud(Frame32);
so I would also have 32 .Locations:
pointCloud1.Location
%...
pointCloud32.Location
What exactly would be the easiest way to put all these different Locations into one single point cloud containing all the frames as different "pages" in the ptCloud.Location place?
If I had that, I could just take the ptCloud (with all 32 frames) and "play the movie"...(pcplayer, view)
Thanks a lot!
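For what it's worth, a common way to "play the movie" without packing all frames into one pointCloud object is to keep the frames in a cell array and stream them through pcplayer (a sketch; `Frame1`–`Frame3` and the axis limits are placeholders to adjust to your data):

```matlab
% Sketch: play a sequence of point-cloud frames with pcplayer.
% frames is a 1-by-K cell array; each cell holds an M-by-3 matrix of xyz points.
frames = {Frame1, Frame2, Frame3};              % illustrative frame list
player = pcplayer([-50 50], [-50 50], [-5 5]);  % x/y/z axis limits
for k = 1:numel(frames)
    if ~isOpen(player)
        break;                                  % stop if the window was closed
    end
    view(player, pointCloud(frames{k}));        % display one frame
    pause(0.1);                                 % ~10 fps playback
end
```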
aws hawary
aws hawary on 13 Apr 2020
Hello
I have a question if anyone could help me
I have LiDAR data files (pcap and csv).
I want to use this data, but the files need to be compatible with MATLAB. How can I read all the frames of a ply file? I can read the whole pcap, but I need the frames separately for each timestamp.
Hi,
Just try loading the .csv files as different frames into different point clouds:
Frame62 = readtable('Trial3_Gravel_Frame62.csv');
Frame68 = readtable('Trial3_Gravel_Frame68.csv');
Frame62n = Frame62(:, [1:3]); %creates table with columns 1:3 = xyz-coord.
Frame68n = Frame68(:, [1:3]);
Frame62n = table2array(Frame62n); %converts table to array
Frame68n = table2array(Frame68n);
%Creating point clouds (intensity could be passed via the 'Intensity' name-value pair):
ptCloud62 = pointCloud(Frame62n(:, 1:3));
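The same steps can be generalized with a loop (a sketch; the file-name pattern and frame indices are assumptions based on the example above):

```matlab
% Sketch: read several CSV frames into a cell array of pointCloud objects.
frameIdx = [62 68];                        % illustrative frame numbers
clouds = cell(1, numel(frameIdx));
for k = 1:numel(frameIdx)
    fname = sprintf('Trial3_Gravel_Frame%d.csv', frameIdx(k));  % assumed naming scheme
    T = readtable(fname);                  % first three columns assumed to be x, y, z
    xyz = table2array(T(:, 1:3));
    clouds{k} = pointCloud(xyz);           % one pointCloud per frame
end
```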

Sign in to comment.

More Answers (2)

Sudarsono Sianipar
Sudarsono Sianipar on 14 May 2020

0 votes

Hello, I am having trouble moving a point cloud manually by editing the .Location of the pointCloud. It is not possible, because "You cannot set the read-only property 'Location' of pointCloud."
How do I change that?
Thank You

1 comment

Walter Roberson
Walter Roberson on 14 May 2020
Unfortunately, it is set as read-only in the class definition toolbox/shared/pointclouds/+pointclouds/+internal/+pc/pointCloudImpl.m
In some modes, a kdtree is generated to search the point cloud; modifying the points would invalidate the kdtree.

Sign in to comment.

kevin harianto
kevin harianto on 4 Apr 2022
I am also trying to edit the point cloud location, but I am getting an issue with the left and right sides having different numbers of elements. Do you have any clue what it may be?
classdef LidarSemanticSegmentation < lidar.labeler.AutomationAlgorithm
% LidarSemanticSegmentation Automation algorithm performs semantic
% segmentation in the point cloud.
% LidarSemanticSegmentation is an automation algorithm for segmenting
% a point cloud using SqueezeSegV2 semantic segmentation network
% which is trained on Pandaset data set.
%
% See also lidarLabeler, groundTruthLabeler
% lidar.labeler.AutomationAlgorithm.
% Copyright 2021 The MathWorks, Inc.
% ----------------------------------------------------------------------
% Step 1: Define the required properties describing the algorithm. This
% includes Name, Description, and UserDirections.
properties(Constant)
% Name Algorithm Name
% Character vector specifying the name of the algorithm.
Name = 'Lidar Semantic Segmentation';
% Description Algorithm Description
% Character vector specifying the short description of the algorithm.
Description = 'Segment the point cloud using SqueezeSegV2 network.';
% UserDirections Algorithm Usage Directions
% Cell array of character vectors specifying directions for
% algorithm users to follow to use the algorithm.
UserDirections = {['ROI Label Definition Selection: select one of ' ...
'the ROI definitions to be labeled'], ...
'Run: Press RUN to run the automation algorithm. ', ...
['Review and Modify: Review automated labels over the interval ', ...
'using playback controls. Modify/delete/add ROIs that were not ' ...
'satisfactorily automated at this stage. If the results are ' ...
'satisfactory, click Accept to accept the automated labels.'], ...
['Accept/Cancel: If the results of automation are satisfactory, ' ...
'click Accept to accept all automated labels and return to ' ...
'manual labeling. If the results of automation are not ' ...
'satisfactory, click Cancel to return to manual labeling ' ...
'without saving the automated labels.']};
end
% ---------------------------------------------------------------------
% Step 2: Define properties you want to use during the algorithm
% execution.
properties
% AllCategories
% AllCategories holds the default 'unlabelled', 'Vegetation',
% 'Ground', 'Road', 'RoadMarkings', 'SideWalk', 'Car', 'Truck',
% 'OtherVehicle', 'Pedestrian', 'RoadBarriers', 'Signs',
% 'Buildings' categorical types.
AllCategories = {'unlabelled'};
% PretrainedNetwork
% PretrainedNetwork saves the pretrained SqueezeSegV2 network.
PretrainedNetwork
end
%----------------------------------------------------------------------
% Note: this method needs to be included for lidarLabeler app to
% recognize it as using pointcloud
methods (Static)
% This method is static to allow the apps to call it and check the
% signal type before instantiation. When users refresh the
% algorithm list, we can quickly check and discard algorithms for
% any signal that is not supported in a given app.
function isValid = checkSignalType(signalType)
isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
end
end
%----------------------------------------------------------------------
% Step 3: Define methods used for setting up the algorithm.
methods
function isValid = checkLabelDefinition(algObj, labelDef)
% Only Voxel ROI label definitions are valid for the Lidar
% semantic segmentation algorithm.
isValid = labelDef.Type == lidarLabelType.Voxel;
if isValid
algObj.AllCategories{end+1} = labelDef.Name;
end
end
function isReady = checkSetup(algObj)
% Is there one selected ROI Label definition to automate.
isReady = ~isempty(algObj.SelectedLabelDefinitions);
end
end
%----------------------------------------------------------------------
% Step 4: Specify algorithm execution. This controls what happens when
% the user presses RUN. Algorithm execution proceeds by first
% executing initialize on the first frame, followed by run on
% every frame, and terminate on the last frame.
methods
function initialize(algObj,~)
% Load the pretrained SqueezeSegV2 semantic segmentation network.
outputFolder = fullfile(tempdir, 'Pandaset');
pretrainedSqueezeSeg = load(fullfile(outputFolder,'trainedSqueezeSegV2PandasetNet.mat'));
% Store the network in the 'PretrainedNetwork' property of this object.
algObj.PretrainedNetwork = pretrainedSqueezeSeg.net;
end
function autoLabels = run(algObj, pointCloud)
% Setup categorical matrix with categories including
% 'Vegetation', 'Ground', 'Road', 'RoadMarkings', 'SideWalk',
% 'Car', 'Truck', 'OtherVehicle', 'Pedestrian', 'RoadBarriers',
% and 'Signs'.
autoLabels = categorical(zeros(size(pointCloud.Location,1), size(pointCloud.Location,2)), ...
0:12,algObj.AllCategories);
%A = zeros(10000,10000);
%filling in the minimum required resolution
% to meet the neural network's specification.
%(first iteration failed) pointCloud.Location = zeros(65,1856,5);
%Due to an error we must append the various point cloud data
%first.
Location = zeros(64,1856,5);
%next we can add in the ptCloud locations
% Location(:,:,1) = pointCloud.Location;
% Location = zeros(65,1856,5);
Location(:) = [pointCloud.Location]
%
ptCloud=pointCloud(Location);
%This will also be applied to the pointCloud Intensity levels
% as these are also analyzed by the machine learning algorithm.
%(Pushed aside for later modifications) pointCloud.Intensity = zeros(64,1865);
% Convert the input point cloud to five channel image.
I = helperPointCloudToImage(pointCloud);
% Predict the segmentation result.
predictedResult = semanticseg(I, algObj.PretrainedNetwork);
autoLabels(:) = predictedResult;
%using this area we would be able to continuously update the latest file on
% sending the output towards the CAN network, or at least ensure that the
% item is obtainable
% This area would work the best.
%first we must
end
end
end
function helperDisplayLabelOverlaidPointCloud(I,predictedResult)
% helperDisplayLabelOverlaidPointCloud Overlay labels over point cloud object.
% helperDisplayLabelOverlaidPointCloud(I,predictedResult)
% displays the overlaid pointCloud object. I is the 5 channels organized
% input image. predictedResult contains pixel labels.
ptCloud = pointCloud(I(:,:,1:3),Intensity = I(:,:,4));
cmap = helperPandasetColorMap;
B = ...
labeloverlay(uint8(ptCloud.Intensity),predictedResult,Colormap = cmap,Transparency = 0.4);
pc = pointCloud(ptCloud.Location,Color = B);
ax = pcshow(pc);
set(ax,XLim = [-70 70],YLim = [-70 70])
zoom(ax,3.5)
end
function cmap = helperPandasetColorMap
cmap = [[30 30 30]; % Unlabeled
[0 255 0]; % Vegetation
[255 150 255]; % Ground
[237 117 32]; % Road
[255 0 0]; % Road Markings
[90 30 150]; % Sidewalk
[255 255 30]; % Car
[245 150 100]; % Truck
[150 60 30]; % Other Vehicle
[255 255 0]; % Pedestrian
[0 200 255]; % Road Barriers
[170 100 150]; % Signs
[255 0 255]]; % Building
cmap = cmap./255;
end
function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts the point cloud to 5 channel image
image = ptcloud.Location;
image(:,:,5) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,:,1),image(:,:,2),image(:,:,3));
image(:,:,4) = rangeData;
index = isnan(image);
image(index) = 0;
end
function rangeData = iComputeRangeData(xChannel,yChannel,zChannel)
rangeData = sqrt(xChannel.*xChannel+yChannel.*yChannel+zChannel.*zChannel);
end

7 commentaires

%Due to an error we must append the various point cloud data
%first.
Location = zeros(64,1856,5);
%next we can add in the ptCloud locations
% Location(:,:,1) = pointCloud.Location;
% Location = zeros(65,1856,5);
Location(:) = [pointCloud.Location]
Why are you initializing with zeros and then assigning to (:) ??
M-by-3 list of points | M-by-N-by-3 array for organized point cloud
Your Location array would be M x N x 5, which is not valid input for pointCloud()
Note that pointCloud() input cannot be specified as an M-by-N-by-5 array. If you have additional data such as color or intensity, it must be specified through additional parameters.
kevin harianto
kevin harianto on 5 Apr 2022
I was trying to expand the array and then take in the pcd location array in order to make the resolution of the point cloud at least [64x1856x5]. This is to meet the image-resolution requirements of the semantic segmentation, which expects a 64-channel lidar input that was originally [64x1856x5], with the intensity level being [1856x5].
Walter Roberson
Walter Roberson on 5 Apr 2022
pointCloud() is not designed to handle multiple layers in a single pointCloud() object.
LidarSemanticSegmentationUsingSqueezeSegV2Example creates 64 x 1856 x 5 data. It calls helperTransformOrganizedPointCloudToTrainingData to do the work. That in turn calls helperPointCloudToImage which creates a 5 channel array by extracting the pointCloud location and Intensity and using the Location information to also calculate range information. Hop back a couple of levels and the 5 channel array gets saved as a .mat
kevin harianto
kevin harianto on 6 Apr 2022
Got it. So instead of trying to change the pointCloud() object by adding in multiple layers, I will try to use helperPointCloudToImage(pointCloud); after adding in the additional array points.
kevin harianto
kevin harianto on 8 Apr 2022
Edited: kevin harianto on 8 Apr 2022
So I have been trying out several ways of modifying the location, but for some reason all I am getting is a consistency error in cat (previously it did not work):
tempPtCloud = cat(2, tempPtCloud.Location, zeros(10000, size(tempPtCloud.Location,2)));
my code:
% Load the pretrained network.
outputFolder = fullfile(tempdir,"Pandaset");
load(fullfile(outputFolder,"trainedSqueezeSegV2PandasetNet.mat"),"net");
% Read the point cloud.
%----------------------------Note if we have access to hardware resources
% we would run the function to start loading the files in.
%F = parfeval(backgroundPool, @hardwareManipulation);
%to pause if a must we can do
%cancel(f) however this action is planned to be in continuous cycle while
%the automation class properties is doing the activities
%-----------------------------
%Next we shall change the output folder to point towards the data_2 saved/created by
%the hardware manipulation function
outputFolderNew=fullfile("data_2");
ptCloud = pcread(fullfile(outputFolderNew,"0000000000.pcd"));
%Due to the ptCloud being a different data type
tempPtCloud = ptCloud;
%ptCloud being read only we can not add the third dimension
%-------------from suggestion
tempPtCloud = cat(2, tempPtCloud.Location, zeros(10000, size(tempPtCloud.Location,2)));
If you are sure that the new size is larger than the old size, then skip that cat(2) and instead do
tempPtCloud(DesiredNumberOfRows, DesiredNumberOfColumns) = 0;
This will extend the tempPtCloud array to have that many rows and columns, filling all of the holes with 0 (the 0 used as fill is automatic and independent of the 0 being assigned in that statement.)
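As an illustration of that indexed-assignment trick (a sketch with made-up sizes):

```matlab
% Sketch: grow a numeric array by assigning to an out-of-bounds index.
A = rand(3, 3);    % original 3-by-3 data
A(64, 1856) = 0;   % MATLAB grows A to 64-by-1856, padding new elements with 0
% The original 3-by-3 block is preserved in A(1:3, 1:3).
% Note this works on a plain numeric array, not on a pointCloud object:
% extract ptCloud.Location into a variable first, grow it, then rebuild
% the cloud with pointCloud(...).
```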
kevin harianto
kevin harianto on 11 Apr 2022
Edited: kevin harianto on 11 Apr 2022
I am only trying to extend the tempPtCloud.Location array, which is why I was trying to add in the tempPtCloud.Location matrices to expand it (currently 123398x3, aiming for at least 64x1856x3). When I try your method, it says "Unable to perform assignment because type double is not convertible to pointCloud." The error is at tempPtCloud(64, 1856) = 0; and also when I tried changing it to tempPtCloud(64, 1856, 3) = 0;
%These additional values should allow the raw pcd file matrix to meet the
%resolution demands
% Since we are only trying to influence the resolution
% (pandaSet Ideal resolution being 64x1856x3)
% and not the actual
% value's representation we will only be adding in the values
% However because the raw pcd file is single matrix, we shall be adding
% in additional dimensions.
tempPtCloud = ptCloud;
%ptCloud being read only we can not add the third dimension
%-------------from suggestion(issue)
tempPtCloud(64, 1856) = 0;
tempPtCloud = cat(2, tempPtCloud.Location, zeros(10000, size(tempPtCloud.Location,2)));
%----------------
B = tempPtCloud.Location;
B(:,3) = tempPtCloud.Location(:,2); %copy second column to move to third
for n = 2: size(ptCloud.Location)
B(:,n)= tempPtCloud.Location(:,n-1);
end
%We can do the above twice to move the 2nd column to the third, and first
%to the second then add in the zeros to the first and replace.
B(:,2) = tempPtCloud.Location(:,1); %copy first column to move to second
for n = 2: size(ptCloud.Location)
B(:,n)= tempPtCloud.Location(:,n-1);
end
%for the first vertices we will be substituting with the zeros
B(:,1) = zeros(64);
%next replace the pointCloud with the desire parts:
% (In terms of replacing the pointCloud with the desired parts, this shall be implemented in
% the semantic segmentation script as well)
%using the location variable B which should contain the 3 Dimensions in its
%rightful format we shall now modify the temporary pointCloud for
%manipulation
tempPtCloud = pointCloud(B(:, 1:3));

Connectez-vous pour commenter.
