Inconsistent test-results with neural network
Hello,
I have just used the GUI tool of the Neural Network Toolbox for fitting. This app automatically divides the input data into training, validation, and test sets. After training, the reported performance for all three is fine (MSE ~50). In particular, the built-in test set shows good results, so the network appears to generalise.
Unfortunately, these good results vanish when I test the neural network manually. When I feed unused test data into the trained network, I get really bad performance values (~1000), so the network does not generalise at all. This confuses me, because the two test options, built into the tool and manual, should give almost identical performance values!
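For reference, a manual check along these lines might look like the following sketch. The names `net` (the trained network exported from the app), `xNew`, and `tNew` are assumptions, not taken from the original post:

```matlab
% Hypothetical names: net is the trained fitting network from the app,
% xNew are unseen inputs (features x samples), tNew the matching targets.
yNew = net(xNew);                     % forward pass through the trained net
mseManual = perform(net, tNew, yNew)  % same MSE metric the app reports
```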
Can anyone please tell me why there is such a huge difference in the results? The tool tells me that the neural network is good, but when I use it, it performs terribly. Why?
I am grateful for every answer!
Kind regards, Detlef
Answers (2)
Walter Roberson
on 13 Jul 2015
How are you initializing the weights? By default, the network initializes its weights randomly.
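If random initialization is the concern, fixing the random seed before creating and training the network makes runs repeatable. A minimal sketch, assuming a `fitnet` with the default 10 hidden neurons and input/target variables `x` and `t`:

```matlab
rng(0);                        % fix the seed so initialization is repeatable
net = fitnet(10);              % hypothetical: 10 hidden neurons (fitnet default)
[net, tr] = train(net, x, t);  % x = inputs, t = targets (assumed names)
```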
1 comment
Detlef Preis
on 13 Jul 2015
Greg Heath
on 13 Jul 2015
The only time that should happen is when the two sets do not appear to come from the same probability distribution.
You don't give enough information. Assuming the data are 1-dimensional, post the following for each subset (1. training, 2. validation, 3. test1, 4. test2):
a. size(subset) =
b. mean(subset) =
c. var(subset) =
d. (mean(test1)-mean(test2))/sqrt(var(test1) + var(test2))
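The checks above can be sketched as follows; the subset variable names (`trainData`, `valData`, `test1`, `test2`) are assumptions for illustration:

```matlab
% Hypothetical 1-D subset vectors:
subsets = {trainData, valData, test1, test2};
for k = 1:numel(subsets)
    s = subsets{k};
    fprintf('subset %d: size=%d  mean=%.4g  var=%.4g\n', ...
            k, numel(s), mean(s), var(s));
end
% d. standardized difference between the two test sets:
d = (mean(test1) - mean(test2)) / sqrt(var(test1) + var(test2))
```

A large value of `d` would suggest the two test sets come from different distributions, which would explain the discrepancy in performance.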
Hope this helps
Greg
2 comments
Detlef Preis
on 13 Jul 2015
Greg Heath
on 13 Jul 2015
You didn't answer my questions.