Predictions during training differ from the trained network's results
I have twelve weighted classes that I train on a large augmented training and validation pixelLabelImageDatastore.
Created with:
% DeepLab v3+ with a ResNet-18 backbone
lgraph = deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');
% Replace the output layer with a weighted pixel classification layer
lgraph = replaceLayer(lgraph, "classification", pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights));
% Replace the input layer to disable input normalization
lgraph = replaceLayer(lgraph, "data", imageInputLayer(imageSize,"Name","data","Normalization","none"));
The training accuracy converges nicely to about 99.3% (98.5%-99.7%) and the loss to about 0.05 (for both training and validation).
When I test the generated DAGNetwork with "jaccard", only the first ten classes have a high IoU; the last two are zero! I also tested different normalizations such as zscore, always with the same result. When I use the "predict" or "semanticseg" functions to check individual images, classes 11 and 12 do indeed seem to be poorly learned.
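For context, a hedged sketch of the kind of evaluation described above; the variable names imdsTest, pxdsTest, I and groundTruthLabels are assumptions:
% Run the trained network over the test images and collect label predictions
pxdsResults = semanticseg(imdsTest, net, 'WriteLocation', tempdir, 'Verbose', false);
% Per-class metrics, including IoU (Jaccard); classes 11 and 12 come out near zero here
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTest, 'Verbose', false);
disp(metrics.ClassMetrics)
% Alternatively, per-class Jaccard index for a single image against its ground-truth labels
iou = jaccard(semanticseg(I, net), groundTruthLabels);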
But if I set a breakpoint in the "forwardLoss" function in "SpatialCrossEntropy.m" during training and examine, e.g., class 11 with "imshow(Y(:,:,11))", the class looks well learned!
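For comparison, a sketch of how the same per-class score map can be inspected from the finished DAGNetwork returned by trainNetwork, outside of the forwardLoss breakpoint (I is one of the individual images checked above):
% Per-class softmax scores from the trained network
[C, ~, allScores] = semanticseg(I, net);
% Class 11 score map, analogous to imshow(Y(:,:,11)) at the forwardLoss breakpoint
figure, imshow(allScores(:,:,11))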
What happens in "trainNetwork()" when the training is finished? Under what circumstances do forwardLoss() scores differ?
4 comments
Abhijit Bhattacharjee
on 19 May 2022
There might be more specifics in your code that need to be addressed one-on-one. I'd suggest submitting a technical support request.
Answers (0)