Neural network training fails when target values are small. Mapminmax issue?
When I try to train a network with very small target values, training stops at epoch 0 (i.e., it does not begin at all) because the gradient is already below the minimum. I understand that very small targets could imply a very small gradient, but the mapminmax preprocessing function is active, and it should map the targets into [-1,1], avoiding exactly this kind of problem. So what's going on?
Here's some code:
First I define a really small sine wave:
in = 0:0.1:10;        % inputs
out = sin(in)/1e10;   % targets on the order of 1e-10
Then I create and configure a network:
net = fitnet(15);               % one hidden layer with 15 neurons
net = configure(net, in, out);  % sets up input/output processing
The mapminmax function seems to be active and properly configured:
net.outputs{1,2}.processSettings{1,2}
ans =
name: 'mapminmax'
xrows: 1
xmax: 9.9957e-11
xmin: -9.9992e-11
xrange: 1.9995e-10
yrows: 1
ymax: 1
ymin: -1
yrange: 2
no_change: 0
gain: 1.0003e+10
xoffset: -9.9992e-11
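As a quick sanity check (a minimal sketch, reusing the settings struct shown above), you can apply the stored settings by hand and confirm the raw targets really do land in [-1,1]:
ps = net.outputs{1,2}.processSettings{1,2};  % stored mapminmax settings
t_norm = mapminmax('apply', out, ps);        % normalize the raw targets
[min(t_norm) max(t_norm)]                    % expect roughly [-1 1]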
But the training fails anyway (it stops at epoch 0):
[net,tr] = train(net,in,out);
tr.stop
ans =
Minimum gradient reached.
tr.num_epochs
ans =
0
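One way to see why training stops immediately (a sketch using only standard trainParam fields) is to compare the default stopping thresholds against the scale of the raw targets:
net.trainParam.goal      % default MSE goal (0 for trainlm, fitnet's default)
net.trainParam.min_grad  % default minimum gradient (1e-7 for trainlm)
% The raw targets are on the order of 1e-10, so the very first gradient
% is already below min_grad and training stops at epoch 0.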
The learning completely failed, as the plot of the net's output showed. [Plot omitted.]
But if I apply mapminmax manually, everything works well:
net = configure(net,in,mapminmax(out,-1,1));
[net,tr] = train(net,in,mapminmax(out,-1,1));
tr.stop
ans =
Minimum gradient reached.
tr.num_epochs
ans =
377
And the network actually learned the sine function. [Plot omitted.]
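For completeness (a sketch; the only change from the workaround above is keeping the settings struct), a net trained on normalized targets also predicts on the normalized scale, so its outputs have to be mapped back:
[t_norm, ps] = mapminmax(out, -1, 1);   % keep ps for the reverse mapping
net = configure(net, in, t_norm);
[net, tr] = train(net, in, t_norm);
y = mapminmax('reverse', net(in), ps);  % predictions back on the 1e-10 scale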
Any ideas?
Accepted Answer
Greg Heath
1 Sep 2016
You have to change the defaults for BOTH the MSE goal AND the minimum gradient: they are on the scale of the UNNORMALIZED data. For simple problems I tend to use the average BIASED target variance estimate to get
MSEgoal = mean(var(target',1))/100
MinGrad = MSEgoal/100
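Applied to the example above (a sketch; target is transposed so var works per column when there are multiple output rows), this amounts to:
target  = out;                        % 1-by-N target matrix
MSEgoal = mean(var(target',1))/100;   % biased variance estimate / 100
MinGrad = MSEgoal/100;
net.trainParam.goal     = MSEgoal;    % override the raw-scale defaults
net.trainParam.min_grad = MinGrad;
[net, tr] = train(net, in, out);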
On more serious problems I consider BOTH the UNBIASED mean target variance estimate for O-dimensional targets AND the loss of degrees of freedom, because the same data is used BOTH to estimate the Nw unknown weights AND to estimate the performance:
MSEgoal = 0.01*max(0,Ndof)*mean(var(target',0))/Ntrneq
where
Ntrneq = Ntrn*O % No of training equations
Ndof = Ntrneq - Nw % No of degrees of freedom
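For a single-hidden-layer fitnet with I inputs, H hidden nodes, and O outputs, Nw = (I+1)*H + (H+1)*O, so a worked sketch (assuming all points are used for training) looks like:
[I, Ntrn] = size(in);       % input dimension, no. of points
[O, ~]    = size(out);      % output dimension
H  = 15;                    % hidden layer size
Nw = (I+1)*H + (H+1)*O;     % no. of unknown weights
Ntrneq  = Ntrn*O;           % no. of training equations
Ndof    = Ntrneq - Nw;      % no. of degrees of freedom
MSEgoal = 0.01*max(0,Ndof)*mean(var(out',0))/Ntrneq;
MinGrad = MSEgoal/100;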
For details search both the NEWSGROUP and ANSWERS using
greg MSEgoal MinGrad
Hope this helps
Thank you for formally accepting my answer
Greg
4 comments
Greg Heath
2 Sep 2016
You are right.
THIS IS A BUG.
I alerted MATLAB before, and whatever fixed value they had for MSEgoal was changed to 0. I don't recall whether they changed MinGrad or not.
Regardless, the use of my own MSEgoal and MinGrad was prompted by dissatisfaction with the MATLAB defaults.
By the way, if you are using NARNET or NARXNET, the values should probably be scaled by 0.005 or 0.001 instead of 0.01, because closing the loop requires that open-loop performance be exceptionally good.
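In other words (a sketch; the only change from the formula above is the tighter scale factor):
% Tighter goal for NARNET/NARXNET so the closed loop still performs well
MSEgoal = 0.005*max(0,Ndof)*mean(var(target',0))/Ntrneq;  % or 0.001*...
MinGrad = MSEgoal/100;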
Greg