need help "backpropagation algorithm with more than one hidden layer"
20 vues (au cours des 30 derniers jours)
Afficher commentaires plus anciens
i have implemented backpropagation network for one hidden layer, but now my guide told me to use more than one hidden layer. can any one tell me: # what is the method for distribution of neurons in hidden layers(>1)? # following initialization weight and bias matrices are right for hidden layers=2? v=randn(n,x); w=randn(x,m); wo=randn(1,m); vo=randn(1,x); v1=randn(x,(p-x)); %2nd hidden layer vo1=randn(1,(p-x)); %2nd hidden layer vin=v; win=w; woin=wo; voin=vo; v1in=v1; %2nd hidden layer vo1in=vo1; %2nd hidden layer 3) whether the following code is correct for 2 hidden layers? p-hidden layer neurons x=round(p/2)%2 is no. of hidden layers %%%First hidden layer zin=zeros(trn_dat,x); z = zeros(trn_dat,x); din = zeros(trn_dat,x); d = zeros(trn_dat,x); delv = zeros(n,x); delvo = zeros(1,x); %%%Second hidden layer zin1=zeros(x,p-x); z1=zeros(x,p-x); din1 = zeros(x,p-x); d1 = zeros(x,p-x); delv1 = zeros(x,p-x); delvo1 = zeros(1,p-x); %%%Output Layer yin = zeros(trn_dat,m); y = zeros(trn_dat,m); dk = zeros(trn_dat,m); delw = zeros(p-x,m); delwo = zeros(1,m); er=0; itr=1; max_itr=1000; while er==0 disp('epoch no is'); disp(itr); for T=1:trn_dat %total no. of training records %%feed forward for j=1:x zin(T,j)=0; for i=1:n zin(T,j)=zin(T,j)+(dt(T,i)*v(i,j)); end zin(T,j)=zin(T,j)+vo(j); z(T,j)=(2/(1+exp(-zin(T,j))))-1; end for j=(p-x) zin1(j)=0; for i=1:x zin1(j)=zin1(j)+(z(T,i)*v1(i,j)); end zin1(j)=zin1(j)+vo1(j); z1(j)=(2/(1+exp(-zin1(j))))-1; end for k=1:m yin(T,k)=0; for j=1:(p-x) yin(T,k)=yin(T,k)+(z1(j)*w(j,k)); end yin(T,k)=yin(T,k)+w(k); y(T,k)=(2/(1+exp(-yin(T,k))))-1; end %%back propagation of error for k=1:m dk(T,k)=0; dk(T,k)=((tar(T,k)-y(T,k))*((1/2)*(1+y(T,k))*(1-y(T,k)))); for j=1:(p-x) delw(j,k)=(alpha*dk(T,k)*z1(j)); end delwo(k)=(alpha*dk(T,k)); end for j=1:(p-x) din1(j)=0; for k=1:m din1(j)=din1(j)+((dk(T,k)*w(j,k))); end d1(j)=0; d1(j)=(din1(j)*((1/2)*(1+z1(j))*(1-z1(j))));
for i=1:x
delv1(i,j)=(alpha*d1(j)*z(T,j));
end
delvo1(j)=(alpha*d1(j));
for i=1:n
delv(i,j)=alpha*d(T,j)*dt(T,i);
end
delvo(j)=alpha*d(T,j);
end
%%updation of weights and biases
for k=1:m
for j=1:(p-x)
w(j,k)=w(j,k)+delw(j,k);
end
wo(k)=wo(k)+delwo(k);
end
for j=1:(p-x)
for i=1:x
v1(i,j)=v1(i,j)+delv1(i,j);
end
vo1(j)=vo1(j)+delvo1(j);
end
for j=1:x
for i=1:n
v(i,j)=v(i,j)+delv(i,j);
end
vo(j)=vo(j)+delvo(j);
end
end
disp('value of y at this iteration:');
disp(y);
% error=0;
e=tt-y;
error=sqrt(e.^2);
if (max(max(error))<0.05)
er=1;
else
er=0;
end
itr=itr+1;
if(itr>max_itr)
break;
end
end
thank you
Punam
1 commentaire
Oleg Komarov
le 17 Mar 2012
Please format your question properly and retain only the essential code: http://www.mathworks.com/matlabcentral/answers/13205-tutorial-how-to-format-your-question-with-markup
Réponses (0)
Voir également
Catégories
En savoir plus sur Define Shallow Neural Network Architectures dans Help Center et File Exchange
Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting!