Prediction during training differs from the final result

Christian Huggler on 16 May 2022
I am training twelve weighted classes with large augmented training and validation pixelLabelImageDatastores.
Created with:
% Create DeepLab v3+ (ResNet-18 backbone), then swap in a weighted
% pixel classification layer and an input layer without normalization
lgraph = deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');
lgraph = replaceLayer(lgraph, "classification", pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights));
lgraph = replaceLayer(lgraph, "data", imageInputLayer(imageSize,"Name","data","Normalization","none"));
The training accuracy converges nicely to about 99.3% (98.5% to 99.7%) and the loss to about 0.05 (for both training and validation).
When I test the resulting DAGNetwork with "jaccard", only the first ten classes have a high IoU; the last two are zero! I also tested different normalizations, such as zscore, with always the same result. When I use the "predict" or "semanticseg" functions to check individual images, classes 11 and 12 indeed appear poorly learned.
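The IoU check is roughly the following (again a sketch; net, imdsTest, pxdsTest, and the single image I with ground truth Cgt are placeholder names):
% Per-class IoU over a test set
pxdsResults = semanticseg(imdsTest, net, 'WriteLocation', tempdir);
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTest);
metrics.ClassMetrics            % per-class accuracy and IoU; rows 11 and 12 come out as zero

% Per-class IoU for a single image
C = semanticseg(I, net);
iou = jaccard(C, Cgt);          % one Jaccard value per class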
But if I set a breakpoint in the "forwardLoss" function in "SpatialCrossEntropy.m" during training and examine, e.g., class 11 with "imshow(Y(:,:,11))", the class looks well learned!
What happens in "trainNetwork()" when training finishes? Under what circumstances would the class scores seen in forwardLoss() differ from the final network's predictions?
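To make the comparison concrete, one way to put the inference-time scores next to what forwardLoss sees is something like this (a sketch; I is a placeholder image and 'softmax-out' is an assumed layer name, so check net.Layers for the actual softmax layer):
% Inference-time class scores from the trained DAGNetwork
[C, ~, allScores] = semanticseg(I, net);
imshow(allScores(:,:,11))                 % score map for class 11 at inference

% Equivalent check via the softmax activations
% (layer name is assumed; look it up in net.Layers if it differs)
Y = activations(net, I, 'softmax-out');
imshow(Y(:,:,11))                         % compare with Y(:,:,11) seen in forwardLoss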
  4 comments
Christian Huggler on 19 May 2022
Does that mean that "trainNetwork()" is useless and that a separate training procedure has to be written?
Abhijit Bhattacharjee on 19 May 2022
There might be specifics in your code that need to be addressed one-on-one. I'd suggest submitting a technical support request.


Answers (0)

Version: R2022a
