An error when running a U-Net deep learning model

mohd akmal masud on 23 Jun 2022
Answered: Ben on 23 Jun 2022
Dear all,
I built my network in the Deep Network Designer app and then generated the code below, but I get an error when training.
clc
clear all
close all
%testDataimages
DATASetDir = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128');
IMAGEDir = fullfile(DATASetDir,'ImagesTr');
volReader = @(x) matRead(x);
volds = imageDatastore(IMAGEDir, ...
'FileExtensions','.mat','ReadFcn',volReader);
% labelReader = @(x) matread(x);
matFileDir = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128\LabelsTr');
classNames = ["background", "tumor"];
pixelLabelID = [0 1];
% pxds = (LabelDirr,classNames,pixelLabelID, ...
% 'FileExtensions','.mat','ReadFcn',labelReader);
pxds = pixelLabelDatastore(matFileDir,classNames,pixelLabelID, ...
'FileExtensions','.mat','ReadFcn',@matRead);
ds = pixelLabelImageDatastore(volds,pxds);
volume = preview(volds);
label = preview(pxds);
patchSize = [128 128 64];
patchPerImage = 16;
miniBatchSize = 8;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;
dsTrain = transform(patchds,@augment3dPatch);
volLocVal = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128\imagesVal');
voldsVal = imageDatastore(volLocVal, ...
'FileExtensions','.mat','ReadFcn',volReader);
lblLocVal = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128\labelsVal');
pxdsVal = pixelLabelDatastore(lblLocVal,classNames,pixelLabelID, ...
'FileExtensions','.mat','ReadFcn',volReader);
dsVal = randomPatchExtractionDatastore(voldsVal,pxdsVal,patchSize, ...
'PatchesPerImage',patchPerImage);
dsVal.MiniBatchSize = miniBatchSize;
lgraph = layerGraph();
tempLayers = [
image3dInputLayer([128 128 64 1],"Name","image3dinput")
convolution3dLayer([3 3 3],64,"Name","Encoder-Stage-1-Conv-1","Padding","same")
reluLayer("Name","Encoder-Stage-1-ReLU-1_1")
%126x126
convolution3dLayer([3 3 3],64,"Name","Encoder-Stage-1-Conv-2","Padding","same")
reluLayer("Name","Encoder-Stage-1-ReLU-1_2")];
%124x124
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-1-MaxPool_1","Padding","same")
%62x62
convolution3dLayer([3 3 3],128,"Name","Encoder-Stage-1-MaxPool_2","Padding","same")
reluLayer("Name","Encoder-Stage-2-ReLU-1")
%60x60
convolution3dLayer([3 3 3],128,"Name","Encoder-Stage-2-Conv-2","Padding","same")
reluLayer("Name","Encoder-Stage-2-ReLU-2")];
%58x58
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-2-MaxPool","Padding","same")
%29x29
convolution3dLayer([3 3 3],256,"Name","Encoder-Stage-3-Conv-1","Padding","same")
reluLayer("Name","Encoder-Stage-3-ReLU-1")
%27x27
convolution3dLayer([3 3 3],256,"Name","conv3d","Padding","same")
reluLayer("Name","Encoder-Stage-3-ReLU-2")];
%25x25
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
dropoutLayer(0.5,"Name","Encoder-Stage-3-DropOut")
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-3-MaxPool_1","Padding","same")
%12x12
convolution3dLayer([3 3 3],512,"Name","Bridge-Conv-1","Padding","same")
reluLayer("Name","Bridge-ReLU-1")
%10x10
convolution3dLayer([3 3 3],512,"Name","Bridge-Conv-2","Padding","same")
reluLayer("Name","Bridge-ReLU-2")
%8x8
dropoutLayer(0.5,"Name","Bridge-DropOut")
transposedConv3dLayer([2 2 2],256,"Name","Decoder-Stage-1-UpConv","Cropping","same")
reluLayer("Name","Decoder-Stage-1-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
depthConcatenationLayer(2,"Name","Decoder-Stage-1-DepthConcatenation")
convolution3dLayer([3 3 3],256,"Name","Decoder-Stage-1-Conv-1","Padding","same")
reluLayer("Name","Decoder-Stage-1-ReLU-1")
convolution3dLayer([3 3 3],256,"Name","Decoder-Stage-1-Conv-2","Padding","same")
reluLayer("Name","Decoder-Stage-1-ReLU-2")
transposedConv3dLayer([2 2 2],128,"Name","Decoder-Stage-2-UpConv","Cropping","same")
reluLayer("Name","Decoder-Stage-2-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
depthConcatenationLayer(2,"Name","Decoder-Stage-2-DepthConcatenation")
convolution3dLayer([3 3 3],128,"Name","Decoder-Stage-2-Conv-1","Padding","same")
reluLayer("Name","Decoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],128,"Name","Decoder-Stage-2-Conv-2","Padding","same")
reluLayer("Name","Decoder-Stage-2-ReLU-2")
transposedConv3dLayer([2 2 2],64,"Name","Decoder-Stage-3-UpConv","Cropping","same")
reluLayer("Name","Decoder-Stage-3-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
depthConcatenationLayer(2,"Name","Decoder-Stage-3-DepthConcatenation")
convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-3-Conv-1","Padding","same")
reluLayer("Name","Decoder-Stage-3-ReLU-1")
convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-3-Conv-2","Padding","same")
reluLayer("Name","Decoder-Stage-3-ReLU-2")
convolution3dLayer([2 2 2],32,"Name","Final-ConvolutionLayer","Padding","same")
softmaxLayer("Name","softmax")
pixelClassificationLayer("Name","pixel-class")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-1_2","Encoder-Stage-1-MaxPool_1");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-1_2","Decoder-Stage-3-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Encoder-Stage-2-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Decoder-Stage-2-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-3-ReLU-2","Encoder-Stage-3-DropOut");
lgraph = connectLayers(lgraph,"Encoder-Stage-3-ReLU-2","Decoder-Stage-1-DepthConcatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-1-UpReLU","Decoder-Stage-1-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Decoder-Stage-2-UpReLU","Decoder-Stage-2-DepthConcatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-3-UpReLU","Decoder-Stage-3-DepthConcatenation/in1");
plot(lgraph);
maxEpochs = 10;
options = trainingOptions('adam', ...
'MaxEpochs',maxEpochs, ...
'InitialLearnRate',1e-3, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropPeriod',5, ...
'LearnRateDropFactor',0.97, ...
'ValidationData',dsVal, ...
'ValidationFrequency',50, ...
'Plots','training-progress', ...
'Verbose',false, ...
'MiniBatchSize',miniBatchSize);
doTraining = true;
if doTraining
modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
[net,info] = trainNetwork(dsTrain,lgraph,options);
save(['trained3DUNet-' modelDateTime '-Epoch-' num2str(maxEpochs) '.mat'],'net');
else
load('trained3DVNet-07-Jun-2022-13-45-30-Epoch-250.mat');
end
ERROR
Error using trainNetwork
Invalid training data. The output size (32) of the last layer does not match the number of
classes of the responses (2).
Error in basicunetmodel3D (line 170)
[net,info] = trainNetwork(dsTrain,lgraph,options);
Can anyone help me?
2 comments
KSSV on 23 Jun 2022
It is a problem with the input and target. What are the dimensions of the input and the target, and what do they look like?
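A quick way to check is to pull one example from each datastore and inspect it (a sketch, assuming the volds and pxds datastores built in the question are in the workspace):
vol = preview(volds);   % one image volume
lbl = preview(pxds);    % the corresponding categorical label volume
size(vol)               % spatial size of the input, e.g. height x width x depth
size(lbl)               % should match the spatial size of vol
categories(lbl)         % should list "background" and "tumor"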
mohd akmal masud on 23 Jun 2022
My input size is 130x130x64


Accepted Answer

Ben on 23 Jun 2022
You should set the number of filters in the final convolution layer to match the number of pixel classes in your problem. It looks like you have 2 classes, so I would swap convolution3dLayer([2 2 2],32,"Name","Final-ConvolutionLayer","Padding","same") for convolution3dLayer([2 2 2],2,"Name","Final-ConvolutionLayer","Padding","same").
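For example, a minimal sketch of that change applied to the layer graph above (assuming the two classes background and tumor, so 2 filters):
lgraph = replaceLayer(lgraph,"Final-ConvolutionLayer", ...
    convolution3dLayer([2 2 2],2,"Name","Final-ConvolutionLayer","Padding","same"));
[net,info] = trainNetwork(dsTrain,lgraph,options);   % retrain with the corrected final layer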
You might also be able to use unetLayers to construct your network, which lets you specify the number of classes (numClasses) as an input.
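Since the network here is 3D, the 3D counterpart unet3dLayers (Computer Vision Toolbox) may be the more direct fit. A sketch under those assumptions; the 'EncoderDepth' value is chosen only to mirror the three encoder stages above:
imageSize  = [128 128 64];
numClasses = 2;
lgraph = unet3dLayers(imageSize,numClasses,'EncoderDepth',3);
[net,info] = trainNetwork(dsTrain,lgraph,options);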
