What happens if the validation performance is greater than the test performance?
I have trained several networks with different numbers of hidden neurons. Now my question is: what happens if I select a network whose validation performance is higher than its test performance? I have read somewhere that in this situation the distribution of data between the training set and the test set is not correct. Thanks for any suggestion.
Accepted Answer
Greg Heath
28 Feb 2017
There is no rule governing the order of the val and tst performances. That is why it is worthwhile to design a number of nets differing only in the random initial weights and/or, if the data are not a time series, in the order of the input/target pairs.
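For instance, a minimal MATLAB sketch along these lines (simplefit_dataset, H = 10, and the number of trials are placeholder assumptions for illustration, not part of the original question): train several nets that differ only in their random initialization and tabulate the resulting train/val/test errors.

[x, t] = simplefit_dataset;            % stand-in data (assumption)
H = 10;                                % number of hidden nodes (assumption)
numNets = 10;                          % number of random initializations

results = zeros(numNets, 3);           % columns: train, val, test MSE
for i = 1:numNets
    net = fitnet(H);                   % fresh net => new random initial weights
    net.trainParam.showWindow = false; % suppress the training GUI
    [net, tr] = train(net, x, t);      % tr records the train/val/test indices
    y = net(x);
    results(i, :) = [ mean((t(tr.trainInd) - y(tr.trainInd)).^2), ...
                      mean((t(tr.valInd)   - y(tr.valInd)).^2),  ...
                      mean((t(tr.testInd)  - y(tr.testInd)).^2) ];
end
disp(array2table(results, 'VariableNames', {'MSEtrn', 'MSEval', 'MSEtst'}))

Scanning the table, some rows will typically have MSEval above MSEtst and others below it, which illustrates the point: the ordering is not fixed.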
Thank you for formally accepting my answer
Greg
3 comments
Greg Heath
1 Mar 2017
Sometimes it is dangerous to rely on the results of just a few designs. That is why, for challenging cases, I typically design at least 10 or 15 nets for each candidate value of H, the number of hidden nodes.
Then, by tabulating the three NMSE values (training, validation, test) with the rows ordered so that the NMSEval column is monotonically decreasing, you can see how well trends in NMSEval indicate the acceptability of NMSEtst; a rough sketch of such a tabulation is given below.
Of course, there are other tabulations and/or plots that reveal design trends (e.g., NMSE vs. H). The important thing is to design enough nets that you are convinced your choice is reliable.
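A sketch of such a tabulation (simplefit_dataset, the candidate H values, Ntrials = 10, and NMSE computed as MSE divided by the mean target variance are illustrative assumptions, not the exact script referred to above):

[x, t]  = simplefit_dataset;                 % stand-in data (assumption)
Hcand   = [2 5 10 15];                       % candidate numbers of hidden nodes
Ntrials = 10;                                % designs per candidate H
MSE00   = mean(var(t', 1));                  % reference: variance of the targets

results = zeros(numel(Hcand) * Ntrials, 4);  % columns: H, NMSEtrn, NMSEval, NMSEtst
k = 0;
for H = Hcand
    for i = 1:Ntrials
        net = fitnet(H);
        net.trainParam.showWindow = false;
        [net, tr] = train(net, x, t);
        y = net(x);
        nmse = @(ind) mean((t(ind) - y(ind)).^2) / MSE00;  % normalized MSE on a split
        k = k + 1;
        results(k, :) = [H, nmse(tr.trainInd), nmse(tr.valInd), nmse(tr.testInd)];
    end
end
T = array2table(results, 'VariableNames', {'H', 'NMSEtrn', 'NMSEval', 'NMSEtst'});
T = sortrows(T, 'NMSEval', 'descend');       % NMSEval column monotonically decreasing
disp(T)

Scanning the sorted table (or plotting NMSE vs. H) then shows whether rows with low NMSEval also tend to have acceptable NMSEtst, which is the trend referred to above.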