Convolutional neural network toolbox
Hi, I am using the convolutional neural network toolbox. This is my code:
network1WB(1).Weights = randn([5 5 1 1]) * 0.01;
network1WB(1).Bias = randn([1 1 1]) * 0.01;
network1WB(2).Weights = randn([5 5 1 20]) * 0.01;
network1WB(2).Bias = randn([1 1 20]) * 0.01;
network1WB(3).Weights = randn([40 320]) * 0.01;
network1WB(3).Bias = randn([40 1]) * 0.01;
network1WB(4).Weights = randn([150 40]) * 0.01;
network1WB(4).Bias = randn([150 1]) * 0.01;
network1WB(5).Weights = randn([10 150]) * 0.01;
network1WB(5).Bias = randn([10 1]) * 0.01;
layers = [imageInputLayer([28 28 1])
convolution2dLayer(5,1,'Stride',1)
reluLayer
maxPooling2dLayer(2,'Stride',2)
convolution2dLayer(5,20,'Stride',1)
reluLayer
maxPooling2dLayer(2,'Stride',2)
fullyConnectedLayer(40)
fullyConnectedLayer(150)
fullyConnectedLayer(10)
softmaxLayer
classificationLayer()];
layers(2).Bias=network1WB(1).Bias;
layers(2).Weights=network1WB(1).Weights;
layers(5).Bias=network1WB(2).Bias;
layers(5).Weights=network1WB(2).Weights;
layers(8).Bias=network1WB(3).Bias;
layers(8).Weights=network1WB(3).Weights;
layers(9).Bias=network1WB(4).Bias;
layers(9).Weights=network1WB(4).Weights;
layers(10).Bias=network1WB(5).Bias;
layers(10).Weights=network1WB(5).Weights;
options = trainingOptions('sgdm','ExecutionEnvironment','gpu',...
'Shuffle','never',...
'CheckpointPath','.\Model1',...
'L2Regularization',reg,...
'InitialLearnRate',0.01,...
'LearnRateSchedule','piecewise',...
'LearnRateDropFactor',0.9993,...
'LearnRateDropPeriod',1,...
'MaxEpochs',epoch, ...
'Momentum',momentum,...
'MiniBatchSize',minibatch);
[convnet,traininfo] = trainNetwork(imtr,categorical(labelstra),layers,options);
where imtr is the training set composed of images and labelstra contains the labels. If I run the code twice with the same weights and the same training set, the convolutional neural network obtains different results. Is this possible? Or is something wrong?
Answers (3)
Javier Pinzón
on 16 Nov 2017
Hello Luca,
As far as I know, and based on some tests I have performed before, two trainings with the same initial weights may not behave identically; however, they should converge in a similar way.
On the other hand, when I tested the two trained networks with a validation dataset, one gave epoch 120 as the best and the other gave epoch 210, yet the training accuracy behaved very similarly in both.
This may occur because the network can, at any point, start to learn slightly different small features.
I hope this small explanation helps.
Regards,
Javier
Greg Heath
on 16 Nov 2017 (edited 20 Nov 2017)
As alluded to above:
You will only get duplicate results if the RNG is initialized to the same initial state!
In particular, to repeat the result
You have to RESET the RANDOM NUMBER GENERATOR to THE SAME initial STATE
For details, read
From browser:
help rng
doc rng
From website:
https://www.mathworks.com/help/matlab/ref/rng.html
Hope this helps.
Thank you for formally accepting my answer
Greg
2 comments
Salma Hassan
on 20 Nov 2017
Please, sir, I have the same problem. Can you explain this in simpler detail? Thanks.
Steven Lord
on 20 Nov 2017
Call rng before calling rand, randn, randi, or another random number function to initialize the weights.
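The advice above can be sketched as follows; the seed value 0 is an arbitrary choice, any fixed seed works:

```matlab
% Seed the random number generator BEFORE drawing the initial
% weights, so every run starts from exactly the same values.
rng(0);
network1WB(1).Weights = randn([5 5 1 1]) * 0.01;
network1WB(1).Bias = randn([1 1 1]) * 0.01;
% ... initialize the remaining layers the same way ...

% Resetting to the same seed reproduces the same draws:
rng(0);
w1again = randn([5 5 1 1]) * 0.01;
isequal(w1again, network1WB(1).Weights)   % logical 1 (true)
```

Note that seeding makes the weight initialization reproducible; whether the full training run is bit-for-bit identical can also depend on the execution environment (some GPU operations may not be deterministic).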