Why does the gradient increase in each epoch with gradient descent?
Hi everyone,
I am training a neural network with the gradient descent with momentum algorithm. I have tried different combinations of learning rate and momentum, but the gradient keeps increasing in every epoch. Why?
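To illustrate the symptom, here is a toy sketch (not my narnet code) of how a momentum update can diverge when the step is too large for the curvature of the error surface; for f(w) = 0.5*a*w^2 this heavy-ball iteration is stable only when lr*a < 2*(1 + mc). traingdm's exact update differs in detail, but the qualitative behavior is the same:

% Toy divergence demo: gradient descent with momentum on f(w) = 0.5*a*w^2.
% With a = 50, lr = 0.1, mc = 0.9 we get lr*a = 5 > 2*(1 + mc) = 3.8,
% so the iteration is unstable and the gradient grows every epoch.
a = 50;  lr = 0.1;  mc = 0.9;
w = 1;   dw = 0;
for epoch = 1:8
    g  = a * w;           % gradient of 0.5*a*w^2
    dw = mc*dw - lr*g;    % heavy-ball momentum step
    w  = w + dw;
    fprintf('epoch %d: |gradient| = %.3g\n', epoch, abs(g));
end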
This is the structure of my program (a minimal run sketch follows the listing):
inputDelays = 1:2;
hiddenSizes = [3 2 2];                     % three hidden layers
net = narnet(inputDelays, hiddenSizes);    % layers 1-3 hidden, layer 4 output
net.layers{1}.transferFcn = 'mytransfer';  % custom transfer function
net.layers{2}.transferFcn = 'mytransfer';
net.layers{3}.transferFcn = 'mytransfer';
net.layers{4}.transferFcn = 'purelin';
net.layers{4}.size = 1;                    % was layers{6}; the network only has 4 layers
net.trainFcn = 'traingdm';                 % gradient descent with momentum
net.divideParam.trainRatio = 0.8;
net.divideParam.valRatio = 0.1;
net.divideParam.testRatio = 0.1;
net.trainParam.epochs = 60000;
net.trainParam.max_fail = 60;
net.trainParam.lr = 0.1;                   % learning rate
net.trainParam.mc = 0.9;                   % momentum constant
net.trainParam.goal = 1e-4;
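A minimal sketch of how the configured network is then prepared and trained, with the toolbox's simplenar_dataset standing in for my real series:

T = simplenar_dataset;                        % example series; replace with the real data
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T); % shift the series for the 1:2 feedback delays
[net, tr] = train(net, Xs, Ts, Xi, Ai);
semilogy(tr.gradient)                         % tr.gradient records the gradient per epoch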
First I used the logsig activation function and the gradient decreased, but when I use a custom activation function that approximates logsig, the gradient increases.
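Since the only change is the transfer function, one sanity check is whether the coded derivative of 'mytransfer' actually matches the function, because a mismatched derivative makes backpropagation follow a wrong gradient. A finite-difference sketch, where mytransfer_deriv is a placeholder for however 'mytransfer' exposes its derivative (e.g. the da_dn file of the package-folder template):

n  = linspace(-6, 6, 241);                            % sample the input range
h  = 1e-6;
fd = (mytransfer(n + h) - mytransfer(n - h)) / (2*h); % central difference
d  = mytransfer_deriv(n);                             % placeholder: the coded derivative
fprintf('max derivative mismatch: %g\n', max(abs(fd - d)));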
Can someone help me, please?