Why can't I get correct results when performing classification with a GoogLeNet model?

2 views (last 30 days)
I have trained my model using GoogLeNet and it reached 93% accuracy for disease detection, but when I then perform classification, the classifier predicts the wrong labels with very low accuracy (37.3%). I have followed https://www.mathworks.com/help/deeplearning/ug/classify-image-using-googlenet.html and https://www.mathworks.com/help/vision/ug/image-category-classification-using-deep-learning.html to classify my image dataset, but neither has worked for me. Can you help me figure out where I am going wrong?
% Load the pretrained GoogLeNet (trained on the 1000 ImageNet object classes).
net = googlenet;
inputSize = net.Layers(1).InputSize
classNames = net.Layers(end).ClassNames;
numClasses = numel(classNames);
disp(classNames(randperm(numClasses,10)))

% Read a test image and resize it to the network's input size.
im = imread("D:\dataset\processed\Alternaria fliph\ (9).jpeg");
figure
imshow(im)
size(im)
im = imresize(im,inputSize(1:2));
figure
imshow(im)

% Classify the image and show the predicted label with its score.
[label,scores] = classify(net,im);
label
figure
imshow(im)
title(string(label) + ", " + num2str(100*scores(classNames == label),3) + "%");

% Display the top five predictions as a horizontal bar chart.
[~,idx] = sort(scores,'descend');
idx = idx(5:-1:1);
classNamesTop = net.Layers(end).ClassNames(idx);
scoresTop = scores(idx);
figure
barh(scoresTop)
xlim([0 1])
title('Top 5 Predictions')
xlabel('Probability')
yticklabels(classNamesTop)

Accepted Answer

Walter Roberson on 25 Dec 2022
You are taking a network trained to recognize objects and trying to use it to detect a concept ("diseased"). It should not be surprising that the resulting network focuses on figuring out which object the input image most closely resembles.
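For example, a minimal transfer-learning sketch along those lines might look like the following (assumptions: the dataset root D:\dataset\processed contains one subfolder per disease class, the training options are illustrative, and 'loss3-classifier' / 'output' are GoogLeNet's default layer names -- check analyzeNetwork(net) if yours differ):

% Minimal transfer-learning sketch (assumptions noted above).
imds = imageDatastore('D:\dataset\processed', ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
[imdsTrain,imdsVal] = splitEachLabel(imds,0.8,'randomized');

net = googlenet;
inputSize = net.Layers(1).InputSize;
numClasses = numel(categories(imdsTrain.Labels));

% Swap the ImageNet-specific layers for layers sized to the disease classes.
lgraph = layerGraph(net);
newFC = fullyConnectedLayer(numClasses,'Name','new_fc', ...
    'WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'loss3-classifier',newFC);
lgraph = replaceLayer(lgraph,'output',classificationLayer('Name','new_output'));

% Resize images on the fly to the network input size.
augTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain);
augVal   = augmentedImageDatastore(inputSize(1:2),imdsVal);

options = trainingOptions('sgdm', ...
    'MiniBatchSize',32, ...
    'MaxEpochs',6, ...
    'InitialLearnRate',1e-4, ...
    'ValidationData',augVal, ...
    'Verbose',false);

trainedNet = trainNetwork(augTrain,lgraph,options);

% Classify with the retrained network; its class names are the disease labels.
im = imresize(imread("D:\dataset\processed\Alternaria fliph\ (9).jpeg"),inputSize(1:2));
[label,scores] = classify(trainedNet,im)

Passing trainedNet (rather than the stock googlenet) to classify is what makes the predicted labels come from your disease classes instead of the 1000 ImageNet categories.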
Generally speaking, high training accuracy combined with low testing accuracy often indicates that you overfit: not enough different inputs, and not enough variation in the inputs.
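One common way to get more variation out of a limited dataset is random augmentation of the training images, for example (a sketch reusing imdsTrain and inputSize from the code above; the specific reflection, rotation, and translation ranges are illustrative assumptions):

% Randomly reflect, rotate, and translate training images during training.
imageAugmenter = imageDataAugmenter( ...
    'RandXReflection',true, ...
    'RandRotation',[-20 20], ...
    'RandXTranslation',[-10 10], ...
    'RandYTranslation',[-10 10]);
augTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain, ...
    'DataAugmentation',imageAugmenter);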
There is an old story about how a branch of the US military achieved 100% training success in teaching a network to distinguish US fighter aircraft from Russian fighter aircraft, yet when the network was deployed it was a complete failure. It turned out that all of the training images of the US aircraft pointed to the right and all of the training images of the Russian aircraft pointed to the left, so what the network had effectively learned was the orientation of the aircraft rather than any detail of the aircraft. It had overfit to one kind of input image.

More Answers (0)

Version

R2019b
