Transfer Learning on U-Net: Image Input Size Not Matching for Layers
Hello,
I am trying to retrain a couple of layers of the U-Net architecture with new data. However, some of the layers have a different input size and are therefore giving me an error. How do I change the input to those layers without breaking the U-Net architecture?
load SimNet1.mat
Net2 = SimNet1; %renaming U-Net
analyzeNetwork(Net2)
plot(Net2)
layers = Net2.Layers;
lgraph = layerGraph(layers)
lgraph = connectLayers(lgraph,'Encoder-Stage-1-ReLU-2','Decoder-Stage-4-DepthConcatenation/in2')
lgraph = connectLayers(lgraph,'Encoder-Stage-2-ReLU-2','Decoder-Stage-3-DepthConcatenation/in2')
lgraph = connectLayers(lgraph,'Encoder-Stage-3-ReLU-2','Decoder-Stage-2-DepthConcatenation/in2')
lgraph = connectLayers(lgraph, 'Encoder-Stage-4-ReLU-2','Decoder-Stage-1-DepthConcatenation/in2')
figure;
plot(lgraph)
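Side note: when the original network is a DAGNetwork, layerGraph can be called on the network object itself, which preserves all connections (including the encoder-to-decoder skip connections) and makes the manual connectLayers calls above unnecessary. A minimal sketch, assuming Net2 is a DAGNetwork:
% layerGraph on the network object keeps both the layers and their connections,
% so the skip connections do not have to be re-created by hand.
lgraph = layerGraph(Net2);
figure;
plot(lgraph)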
%Real US Directories - Transfer Learning Images (260)
segDir = fullfile('...'); % segmentation (label) images; file location elided
USDir = fullfile('...'); % ultrasound (input) images; file location elided
imds = imageDatastore(USDir); %DataStore of input training images - ultrasound images
classNames = ["bone","background"]; %labels
labelIDs = [1 0];
pxds = pixelLabelDatastore(segDir,classNames,labelIDs);
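As an optional sanity check before training, countEachLabel can be used to confirm the classNames/labelIDs mapping and to see the class balance of the segmentation labels:
% Optional check: pixel counts per class in the label datastore.
tbl = countEachLabel(pxds)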
larray = convolution2dLayer([1 1],2,'NumChannels',64,'Name','NewFinalConvLayer'); % 2 filters = 2 classes
lgraph = replaceLayer(lgraph,'Final-ConvolutionLayer',larray);
larray2 = pixelClassificationLayer('Name','NewPixelClassificationLayer','Classes',["bone" "background"]);
lgraph = replaceLayer(lgraph,'Segmentation-Layer',larray2);
larray3 = softmaxLayer('Name','NewSoftMaxLayer');
lgraph = replaceLayer(lgraph,'Softmax-Layer',larray3);
%Error is occurring for the next two layers
larray4 = convolution2dLayer([3 3],64,'NumChannels',128,'Name','NewDecoderStage41layer');
lgraph = replaceLayer(lgraph,'Decoder-Stage-4-Conv-1',larray4);
larray5 = convolution2dLayer([3 3],64,'NumChannels',64,'Name','NewDecoderStage42Layer');
lgraph = replaceLayer(lgraph,'Decoder-Stage-4-Conv-2',larray5);
figure;
plot(lgraph)
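Since only the replaced layers are meant to be retrained, they can also be given larger learn rate factors than the pretrained layers; a minimal sketch using the standard 'WeightLearnRateFactor' and 'BiasLearnRateFactor' options of convolution2dLayer (the factor of 10 is illustrative only):
% Sketch: make the new final convolution learn faster than the pretrained layers.
larray = convolution2dLayer([1 1],2,'NumChannels',64,'Name','NewFinalConvLayer', ...
    'WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'NewFinalConvLayer',larray);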
options = trainingOptions('adam','InitialLearnRate', 3e-4, ...
'MaxEpochs',100,'MiniBatchSize',15, ...
'Plots','training-progress','Shuffle','every-epoch');
ds = pixelLabelImageDatastore(imds,pxds) %returns a datastore based on the input image data (imds - US images)
%and pxds (required network output - segmentations)
TLNet7 = trainNetwork(ds,lgraph,options)
save TLNet7
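Once training finishes, the retrained network can be sanity-checked on a held-out image with semanticseg and labeloverlay; a minimal sketch, where testImage is a hypothetical ultrasound image that was not used for training:
% Sketch: segment one test image with the retrained network and overlay the result.
C = semanticseg(testImage,TLNet7); % testImage is a placeholder, not defined above
B = labeloverlay(testImage,C);
figure, imshow(B)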
2 comments
Srivardhan Gadila
16 Mar 2020
@Hridayi Can you provide the imageSize, numClasses & 'EncoderDepth' values of your U-Net network above, and can you also attach the SimNet1.mat file?
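(For context, imageSize, numClasses and 'EncoderDepth' are the arguments of unetLayers, which is presumably how SimNet1 was originally created; a hypothetical example for a 256-by-256 grayscale input with 2 classes:)
% Hypothetical: how imageSize, numClasses and 'EncoderDepth' map onto unetLayers.
lgraph = unetLayers([256 256 1],2,'EncoderDepth',4);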
Accepted Answer
Srivardhan Gadila
17 Mar 2020
Based on the imageSize, numClasses & 'EncoderDepth' information, add 'Padding','same' as a name-value pair argument to the NewFinalConvLayer, NewDecoderStage41layer & NewDecoderStage42Layer layers of Net2, as follows:
larray = convolution2dLayer([1 1],2,'NumChannels',64,'Name','NewFinalConvLayer','Padding','same'); % 2 filters = 2 classes
lgraph = replaceLayer(lgraph,'Final-ConvolutionLayer',larray);
larray2 = pixelClassificationLayer('Name','NewPixelClassificationLayer','Classes',["bone" "background"]);
lgraph = replaceLayer(lgraph,'Segmentation-Layer',larray2);
larray3 = softmaxLayer('Name','NewSoftMaxLayer');
lgraph = replaceLayer(lgraph,'Softmax-Layer',larray3);
%Error is occurring for the next two layers
larray4 = convolution2dLayer([3 3],64,'NumChannels',128,'Name','NewDecoderStage41layer','Padding','same');
lgraph = replaceLayer(lgraph,'Decoder-Stage-4-Conv-1',larray4);
larray5 = convolution2dLayer([3 3],64,'NumChannels',64,'Name','NewDecoderStage42Layer','Padding','same');
lgraph = replaceLayer(lgraph,'Decoder-Stage-4-Conv-2',larray5);
analyzeNetwork(lgraph)
This maintains the same U-Net structure with the replaced layers and should no longer cause the size-mismatch error.
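For reference, the reason the padding matters: with the default padding of 0, a [3 3] convolution shrinks each spatial dimension by 2, so the decoder activations stop matching the encoder activations at the depth-concatenation layers, while 'Padding','same' keeps the spatial size unchanged. A minimal illustration with a hypothetical 128-by-128 activation (analyzeNetwork will flag the missing input/output layers but still reports the activation sizes):
% Without padding: 128 - 3 + 1 = 126, so the convolution output is 126x126x64.
layersNoPad = [imageInputLayer([128 128 128],'Name','in')
               convolution2dLayer([3 3],64,'Name','conv')];
analyzeNetwork(layersNoPad)
% With 'Padding','same': the output stays 128x128x64, so the sizes at the
% depth-concatenation inputs continue to match.
layersSame = [imageInputLayer([128 128 128],'Name','in')
              convolution2dLayer([3 3],64,'Padding','same','Name','conv')];
analyzeNetwork(layersSame)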
0 comments
More Answers (1)
Samia OUKIL
1 Jun 2020
How do I segment color images (skin lesions) with U-Net and transfer learning?
Hello, I am a beginner in deep learning. I have a medical image database (skin lesions) to segment with U-Net and transfer learning. I downloaded "u-net-release-2015-10-02.tar.gz" (https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/), but I don't know how to segment my database.
Please, can you help and guide me on segmenting with U-Net and using transfer learning?
3 comments