Why do I get the same neural network results with different weight and bias initialization each time?
Hello, can anyone help?
I am using a neural network for prediction. The network has 1 input layer and 1 output layer, and I vary the hidden layer over different numbers of hidden neurons (Hmax = 20). For each H, I set Ntrials to 10, and in each trial the network is re-initialized.
However, I am getting the same results (the same weights and biases) for Ntrials = 1:10.
Thanks! -Cathy
Here is some code:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Hmax=20;
Ntrials=10; % was "Ntrial=10", but the inner loop bound is Ntrials
for i=1:Hmax
for j=1:Ntrials
% Create a Fitting Network
hiddenLayerSize = i; % number of hidden neurons in current NN structure
net = fitnet(hiddenLayerSize);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows'};
net.outputs{2}.processFcns = {'removeconstantrows'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'divideind'; % Divide data by index
net.divideParam.trainInd = trnind;
net.divideParam.testInd = tstind;
% set the transfer function (activation function) for input and
% output layers
net.layers{1}.transferFcn = 'tansig'; % layer 1 corresponds to the hidden layer
net.layers{2}.transferFcn = 'purelin'; % layer 2 corresponds to the output layer
net.layers{1}.initFcn='initwb';
net.trainFcn = 'trainbr'; % Bayesian regularization backpropagation (Levenberg-Marquardt based)
net.trainParam.goal=0.01.*var(targets); % usually set this to be 1% of the var(target)
%net.trainParam.goal=0;
net.trainParam.epochs=1000;
net.trainParam.mu_dec=0.8;
net.trainParam.mu_inc=1.5;
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'msereg'; % Mean squared error with regularization
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotregression', 'plotfit'};
% Train the Network
net=configure(net,inputs,targets);
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs);
% Recalculate Training and Test Performance
trainTargets = targets .* tr.trainMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs);
testPerformance = perform(net,testTargets,outputs);
% Calculate the R2 Training and Test
H=hiddenLayerSize;
[I,N] = size(inputs); % [8 2519]
[O,N] = size(targets); % [1 2519]
Ntrn = N*0.70;
Ntst = N*0.30;
Ntrneq = Ntrn*1;
%denormalize the output
outputs_real=outputs.*std(sal)+mean(sal);
w1=net.IW{1,1};
w2=net.LW{2,1};
b1=net.b{1};
b2=net.b{2};
end
end
Answers (1)
Greg Heath
on 4 June 2017
net.divideParam.trainInd = trnind;
net.divideParam.testInd = tstind;
GEH1: TRNIND & TSTIND ARE NEVER DEFINED. EVEN SO, IF THEY WERE DEFINED, THEY WOULD BE THE SAME FOR EVERY LOOP
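To make the point concrete, here is a minimal sketch (not the asker's actual data split) of one way to define the missing index vectors before the loops and force fresh random weights in every trial. The variable names `trnind`, `tstind`, `inputs`, `targets`, and `Ntrials` are taken from the question's code; the 70/30 contiguous split is an assumption for illustration only.

```matlab
% Sketch only: define the division indices once, outside the loops,
% so that 'divideind' has something to work with.
[~, N]  = size(inputs);
Ntrn    = round(0.70*N);     % assumed 70/30 split by index
trnind  = 1:Ntrn;
tstind  = Ntrn+1:N;

rng('shuffle')               % vary the random seed between runs
for j = 1:Ntrials
    net = fitnet(hiddenLayerSize);
    net.divideFcn            = 'divideind';
    net.divideParam.trainInd = trnind;
    net.divideParam.testInd  = tstind;
    net = configure(net, inputs, targets);
    net = init(net);         % re-initialize weights/biases each trial
    [net, tr] = train(net, inputs, targets);
end
```

Note that even with trial-by-trial re-initialization, the train/test indices above are deliberately the same for every loop iteration, which is the second half of GEH1's point.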
Hope this helps,
Thank you for formally accepting my answer
Greg