Why do I get different results using 100 epochs × 1 (NN toolbox only, no loop) versus 1 epoch × 100 (NN toolbox plus a for loop) when training a neural network in MATLAB?
Hi, I am working on a neural network for pattern recognition. I am new to this area and started with the MNIST dataset. Using the NN toolbox provided by MATLAB, I made a script (A) that gives good results; training takes more than 100 epochs. I then tried to split this process into 100 one-epoch cycles (B), using a for loop to do the cycling. However, the results from A and B are quite different: B cannot produce a good network. I searched this site and tried several fixes:
1. I set rng('default') before the training in A, and both outside and inside the loop in B. This makes every trial of B produce the same result, but that result still differs from A.
2. I compared the weights recorded in A and B (see the sketch after this list). They are identical after the first epoch but diverge from the second epoch onward.
3. I set the same net.divideFcn for both A and B, but the problem persists.
4. I called net = init(net); in both A and B, but the problem persists.
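For reference, a sketch of how the per-epoch weights in B can be snapshotted (getwb returns all of a network's weights and biases as a single column vector; WA below stands for hypothetical snapshots collected the same way in A):

% Snapshot the weight vector after every one-epoch call in B
W = cell(1,100);
for j = 1:100
    [net,tr] = train(net,x,t);
    W{j} = getwb(net);   % all weights and biases after epoch j
end
% compare against A's snapshots, e.g. max(abs(W{2} - WA{2}))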
My code is below; could you please help me reproduce the same result in A and B?
Thanks in advance.
I use two separate scripts, but merge them here, with the differing parts labelled, for ease of comparison.
% code starts
clear all; close all;

% Load the MNIST training images and labels
images = loadMNISTImages('train-images.idx3-ubyte');
labels = loadMNISTLabels('train-labels.idx1-ubyte');
labels = labels';

% Display the first 36 digits
figure
colormap(gray)
for i = 1:36
    subplot(6,6,i)
    digit = reshape(images(:, i), [28,28]);
    imagesc(digit)
    title(num2str(labels(i)))
end

% One-hot encode the labels; digit 0 becomes class 10
labels(labels==0) = 10;
labels = dummyvar(labels);

x = images;
t = labels';
% The difference between A and B starts here:

% A
hiddenSize = 100;
trainFcn = 'trainscg';
performFcn = 'crossentropy';
net = patternnet(hiddenSize, trainFcn, performFcn);
net.divideFcn = 'divideint';
net.trainParam.epochs = 100;
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
rng('default');
net = init(net);
[net,tr] = train(net,x,t);
% A's code ends here
% and the corresponding part in B:
hiddenSize = 100;
trainFcn = 'trainscg';
performFcn = 'crossentropy';
net = patternnet(hiddenSize, trainFcn, performFcn);
net.divideFcn = 'divideint';
net.trainParam.epochs = 1;
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
rng('default');
net = init(net);
for j = 1:100
    [net,tr] = train(net,x,t);
end
% B's code ends here
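(For completeness: the gap between the two runs can be quantified on the full data set. netA and netB below are hypothetical copies of the networks trained by A and B.)

yA = netA(x);            % evaluate A's network
yB = netB(x);            % evaluate B's network
perform(netA, t, yA)     % cross-entropy of A
perform(netB, t, yB)     % cross-entropy of B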
Accepted Answer
Greg Heath
22 Dec 2017
Every time you call train, certain parameters are reinitialized. In particular, the training algorithm's internal state is rebuilt on each call: with trainscg, quantities such as the conjugate search direction and the validation-stop counter start over, so the second one-epoch call does not pick up where the first left off.
Therefore 100 one-epoch calls cannot be the same as one 100-epoch call.
This is easily confirmed by reducing the 100 to 3 and printing out the results of all commands by removing the trailing semicolons, as in the sketch below.
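A minimal sketch of that check, assuming net, x and t from script B above; the training record tr is rebuilt from scratch on every call, so its epoch counter always restarts at 0:

rng('default'); net = init(net);
for j = 1:3
    [net,tr] = train(net,x,t)   % no semicolon: print net and tr
    tr.epoch                    % always [0 1]: the record restarts each call
    tr.perf                     % performance trace covers this call only
end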
Hope this helps.
Thank you for formally accepting my answer.
Greg
More Answers (0)