Array size mismatch error in "Instance Segmentation Using Mask R-CNN Deep Learning" example
I have followed the instructions at https://uk.mathworks.com/help/deeplearning/ug/instance-segmentation-using-mask-rcnn.html, but I am not sure how to resolve the array size mismatch error. The complete code is attached below. Please guide me.
% %Create directories to store the COCO training images and annotation data.
% imageFolder = fullfile('dataFolder',"images");
% captionsFolder = fullfile('dataFolder',"annotations");
% if ~exist(imageFolder,'dir')
%     mkdir(imageFolder)
%     mkdir(captionsFolder)
% end
%
% annotationFile = fullfile(captionsFolder,"instances_train2014.json");
% str = fileread(annotationFile);
%
% cocoAPIDir = fullfile('dataFolder',"cocoapi-master","MatlabAPI");
% addpath(cocoAPIDir);
%
% unpackAnnotationDir = fullfile('dataFolder',"annotations_unpacked","matFiles");
% if ~exist(unpackAnnotationDir,'dir')
% mkdir(unpackAnnotationDir)
% end
%
trainClassNames = {'person','car'};
% helper.unpackAnnotations(trainClassNames,annotationFile,imageFolder,unpackAnnotationDir);
ds = fileDatastore(unpackAnnotationDir, ...
    'ReadFcn',@(x)helper.cocoAnnotationMATReader(x,imageFolder));
imageSize = [800 800 3];
dsTrain = transform(ds,@(x)helper.preprocessData(x,imageSize));
data = preview(dsTrain);
numClasses = length(trainClassNames)-1;
params = createMaskRCNNConfig(imageSize,numClasses,trainClassNames);
netFasterRCNN = fasterRCNNLayers(params.ImageSize,numClasses,params.AnchorBoxes,'resnet101');
netMaskRCNN = createMaskRCNN(netFasterRCNN,1,params);
dlnet = dlnetwork(netMaskRCNN);
%deepNetworkDesigner(netMaskRCNN)
initialLearnRate = 0.01;
momentum = 0.9;
decay = 0.0001;
velocity = [];
maxEpochs = 30;
miniBatchSize = 2;
miniBatchFcn = @(img,boxes,labels,masks) deal(cat(4,img{:}),boxes,labels,masks);
mbqTrain = minibatchqueue(dsTrain,4, ...
    "MiniBatchFormat",["SSCB","","",""], ...
    "MiniBatchSize",miniBatchSize, ...
    "OutputCast",["single","","",""], ...
    "OutputAsDlArray",[true,false,false,false], ...
    "MiniBatchFcn",miniBatchFcn, ...
    "OutputEnvironment",["auto","cpu","cpu","cpu"]);
doTraining = true;
if doTraining
    iteration = 1;
    start = tic;
    % Create subplots for the learning rate and mini-batch loss
    fig = figure;
    [lossPlotter] = helper.configureTrainingProgressPlotter(fig);
    % Initialize verbose output
    helper.initializeVerboseOutput([]);
    % Custom training loop
    for epoch = 1:maxEpochs
        reset(mbqTrain)
        shuffle(mbqTrain)
        while hasdata(mbqTrain)
            % Get next batch from minibatchqueue
            [X,gtBox,gtClass,gtMask] = next(mbqTrain);
            % Evaluate the model gradients and loss
            [gradients,loss,state] = dlfeval(@networkGradients,X,gtBox,gtClass,gtMask,dlnet,params);
            dlnet.State = state;
            % Compute the learning rate for the current iteration
            learnRate = initialLearnRate/(1 + decay*iteration);
            if(~isempty(gradients) && ~isempty(loss))
                [dlnet.Learnables,velocity] = sgdmupdate(dlnet.Learnables,gradients,velocity,learnRate,momentum);
            else
                continue;
            end
            helper.displayVerboseOutputEveryEpoch(start,learnRate,epoch,iteration,loss);
            % Plot loss/accuracy metric
            D = duration(0,0,toc(start),'Format','hh:mm:ss');
            addpoints(lossPlotter,iteration,double(gather(extractdata(loss))))
            subplot(2,1,2)
            title(strcat("Epoch: ",num2str(epoch),", Elapsed: "+string(D)))
            drawnow
            iteration = iteration + 1;
        end
    end
    net = dlnet;
    % Save the trained network
    modelDateTime = string(datetime('now','Format',"yyyy-MM-dd-HH-mm-ss"));
    save(strcat("trainedMaskRCNN-",modelDateTime,"-Epoch-",num2str(maxEpochs),".mat"),'net');
end
Answers (2)
Anshika Chaurasia
on 14 Sep 2021
Hi Sardar Ali,
You can resolve the error by replacing the following line:
numClasses = length(trainClassNames)-1;
with
numClasses = length(trainClassNames);
fasterRCNNLayers expects numClasses to be the number of object classes to detect (the background class is handled internally), so with trainClassNames = {'person','car'} it should be 2. Subtracting 1 leaves the network output sizes inconsistent with the ground-truth class data, which produces the array size mismatch.
Hope it helps!
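For reference, here is a minimal sketch of the corrected class setup, assuming the example's supporting files (createMaskRCNNConfig, createMaskRCNN, and the helper functions) are on the path. It also assumes that the second input of createMaskRCNN is the class count, as the call in your code suggests; if so, it should be numClasses rather than the hard-coded 1, but please verify this against the supporting file.
% Sketch of the corrected configuration (assumes the example's supporting
% files createMaskRCNNConfig and createMaskRCNN are on the path).
trainClassNames = {'person','car'};
numClasses = numel(trainClassNames);   % 2 foreground classes; background is added internally
params = createMaskRCNNConfig(imageSize,numClasses,trainClassNames);
netFasterRCNN = fasterRCNNLayers(params.ImageSize,numClasses, ...
    params.AnchorBoxes,'resnet101');
% Assumption: the second input of createMaskRCNN is the class count, so it
% should match numClasses instead of the hard-coded 1.
netMaskRCNN = createMaskRCNN(netFasterRCNN,numClasses,params);
dlnet = dlnetwork(netMaskRCNN);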