Difference between the best validation point and the final point in the training progress plot
I trained a network with the trainNetwork function as shown below.
[net, info] = trainNetwork(Xtrain, Ytrain, lgraph, options);
The training options were as follows.
options = trainingOptions('adam', ...
    'InitialLearnRate', 5e-06, ...
    'MaxEpochs', 30, ...
    'MiniBatchSize', 128, ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'ValidationData', {Xvalid, Yvalid}, ...
    'ValidationFrequency', 10, ...
    'ValidationPatience', inf, ...
    'Shuffle', 'every-epoch', ...
    'OutputNetwork', 'best-validation', ...
    'Plots', 'training-progress');
After training completes, a 'Final' point is displayed on the training progress plot. However, this final point differs considerably from the validation accuracy and validation loss curves on the plot. See the figure below.
I don't understand this. The validation curve around the point marked 'Final' shows an accuracy between 85% and 90%, yet the final point itself is below 80%. In the upper right corner of the figure, the validation accuracy is reported as 76.8%.
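For reference, this is how I compared the numbers outside the plot, using the info struct returned by trainNetwork (field names taken from the documentation; I am assuming ValidationAccuracy and FinalValidationAccuracy are populated when validation data is supplied):
% Compare the best validation accuracy logged during training with the
% value computed for the returned network after training finishes.
% (Field names assumed from the trainNetwork documentation.)
bestValAcc  = max(info.ValidationAccuracy, [], 'omitnan');  % best point on the plotted curve
finalValAcc = info.FinalValidationAccuracy;                 % value reported as "Final" (76.8% here)
fprintf('Best validation accuracy during training: %.1f%%\n', bestValAcc);
fprintf('Final validation accuracy:                %.1f%%\n', finalValAcc);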
Is this happening because of some option setting?
I would appreciate help from someone who knows why this happens. Please help.
2 Comments
James
4 Sep 2023
It is common for test accuracy to be slightly lower than validation accuracy, because the best-performing model is chosen from among multiple validation results, and that choice does not guarantee the same performance on the test dataset. That said, even allowing for this, the results above seem a bit out of the norm.
There can be a few explanations for this:
The model may be overfitting.
Batch normalization may be used; in that case the statistics applied in the final validation pass can differ from those used during training (see the sketch after this comment).
It would also help if you could share how the train/validation/test datasets are prepared and how the model ('lgraph') is designed. For example, if K-fold cross-validation is used for validation, the disparity with the test accuracy can be quite large.
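If batch normalization turns out to be the cause, one setting worth checking is the 'BatchNormalizationStatistics' training option. This is only a hedged sketch (other options omitted); whether it changes anything depends on the actual network:
% Choose how batch normalization statistics are finalized after training.
% 'population' recomputes the statistics over the training data once training
% ends; 'moving' keeps the running estimates used during training, which
% usually keeps the "Final" metrics closer to the plotted validation curve.
options = trainingOptions('adam', ...
    'BatchNormalizationStatistics', 'moving', ...
    'ValidationData', {Xvalid, Yvalid}, ...
    'Plots', 'training-progress');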
Answers (1)
Gagan Agarwal
5 Sep 2023
Hi Yongwon Jang,
The plot shows a decline in validation accuracy at the final iteration of training, and because the 'OutputNetwork' training option falls back to its default of 'last-iteration', the 'Validation Accuracy' field is reported as 76.8%.
The 'OutputNetwork' training option is not assigned correctly in the 'options' variable.
To have the reported 'Validation Accuracy' correspond to the network with the best validation loss, set 'OutputNetwork' to 'best-validation-loss' rather than 'best-validation'.
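A minimal sketch of the corrected call, keeping all other settings from the question and changing only the 'OutputNetwork' value (a suggestion based on the documented option values, not a tested configuration):
% Corrected options: 'best-validation-loss' tells trainNetwork to return
% (and report) the network from the iteration with the lowest validation loss.
options = trainingOptions('adam', ...
    'InitialLearnRate', 5e-06, ...
    'MaxEpochs', 30, ...
    'MiniBatchSize', 128, ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'ValidationData', {Xvalid, Yvalid}, ...
    'ValidationFrequency', 10, ...
    'ValidationPatience', inf, ...
    'Shuffle', 'every-epoch', ...
    'OutputNetwork', 'best-validation-loss', ...  % was 'best-validation'
    'Plots', 'training-progress');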
For a more comprehensive description of the available training options, refer to the documentation: https://www.mathworks.com/help/deeplearning/ref/trainingoptions.html