In evaluating a neural net, should NMSE be based only on the test subset of the data?

KAE on 15 May 2019
Commented: Greg Heath on 27 May 2019
In answers like this one, Greg Heath suggests using the normalized mean squared error (NMSE) to compare the performance of different neural networks and pick the best one.
I have been calculating the NMSE from all samples, using the target data t and the network prediction y:
[net,tr,y,e] = train(net,x,t); % Train network; y = outputs, e = t - y
vart1 = var(t',1); % Biased variance of each target row
% MSE of a naive constant-output model
% that always outputs the average of the target data
MSE00 = mean(vart1);
NMSE = mse(t-y)/MSE00; % Normalize by the naive-model MSE
That includes the training samples, and so may favor models that fit the training data well but generalize poorly to new data. To choose the most robust model, should I calculate the NMSE from the test samples only?
iTest = tr.testInd; % Indices of the samples that were set aside for testing
NMSE_test_only = mse(t(:,iTest)-y(:,iTest))/MSE00; % Use only the test samples

Accepted Answer

Greg Heath on 19 May 2019
For serious work I calculate FOUR values of NMSE:
1. 70% Training
2. 15% Validation
3. 15% Test
4. 100% All
for 10 (typically) random data divisions and initial weights, and try to use as few hidden nodes as possible (a MATLAB sketch of this bookkeeping follows this answer).
Hope this helps
Greg
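A minimal MATLAB sketch of the bookkeeping Greg describes, assuming the standard fitnet workflow with its default 70/15/15 dividerand split; the hidden layer size H and the use of fitnet here are illustrative assumptions, not Greg's verbatim code:
Ntrials = 10;                      % number of random splits/initializations
H = 10;                            % assumed hidden layer size (illustrative)
MSE00 = mean(var(t',1));           % MSE of the naive constant-output model
NMSE = zeros(Ntrials,4);           % columns: train, val, test, all
for k = 1:Ntrials
    net = fitnet(H);               % fresh random initial weights each trial
    [net,tr,y,e] = train(net,x,t); % default dividerand gives a new 70/15/15 split
    NMSE(k,1) = mse(e(:,tr.trainInd))/MSE00; % 70% training subset
    NMSE(k,2) = mse(e(:,tr.valInd))/MSE00;   % 15% validation subset
    NMSE(k,3) = mse(e(:,tr.testInd))/MSE00;  % 15% test subset
    NMSE(k,4) = mse(e)/MSE00;                % 100% of the data
end
The spread of each column across the trials then shows how sensitive a given architecture is to the random split and initialization.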
2 comments
KAE on 20 May 2019
Edited: KAE on 21 May 2019
Once you have those four values of NMSE, do you pick the 'best' number of neurons (or whatever network feature you are optimizing) based on the net with the lowest test NMSE, averaged over the 10 trials?
Greg Heath on 27 May 2019
Typically, I try to minimize the number of hidden nodes subject to the constraint NMSEtrn <= 0.01. I then rank those nets according to NMSEval and NMSEtst (a sketch of this search follows below).
Details can be found in my NEWSGROUP and ANSWERS posts.
Greg
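A rough sketch of that selection loop, assuming an illustrative search range Hmax and trial count; these settings, and the use of fitnet, are assumptions rather than Greg's exact procedure:
Hmax = 20; Ntrials = 10;                  % assumed search settings
MSE00 = mean(var(t',1));                  % naive constant-model MSE
candidates = [];                          % rows: [H, trial, NMSEval, NMSEtst]
for H = 1:Hmax                            % try the smallest hidden layers first
    for k = 1:Ntrials                     % several random trials per H
        net = fitnet(H);
        [net,tr,y,e] = train(net,x,t);
        NMSEtrn = mse(e(:,tr.trainInd))/MSE00;
        if NMSEtrn <= 0.01                % keep nets meeting the training constraint
            NMSEval = mse(e(:,tr.valInd))/MSE00;
            NMSEtst = mse(e(:,tr.testInd))/MSE00;
            candidates = [candidates; H, k, NMSEval, NMSEtst];
        end
    end
end
candidates = sortrows(candidates,[3 4]);  % rank by NMSEval, then NMSEtst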


More Answers (0)
