Classification problem in neural network code from scratch

3 views (last 30 days)
Odrisso
Odrisso on 8 Jul 2016
Commented: Odrisso on 8 Jul 2016
I have developed ANN backpropagation code to classify snore segments. I have 10 input features, 1 hidden layer with 10 neurons, and one output neuron. I denote no-snore segments as 1 and snore segments as 0. I have 3000 segments; 2500 of them are no-snore segments (marked 1) and 500 are snore segments (marked 0). I have already divided the data set into three sets (70% training, 15% validation, and 15% testing), and I also use bias weights.
While training the network, I first shuffled the training set so that the snore and no-snore segments were mixed together. After training, when I validated the network (feed-forward only), I found that it can classify only one of the two classes. To be specific: suppose the last element in the training set is no-snore (1). The network is then trained toward that last output, and in the validation phase it always gives an output close to 1, even for snore segments (0). The same thing happens if the last element is snore (0): it then gives an output close to 0 all the time during validation. The problem is that the network memorizes the weights for one label (say 0), and when the other label (say 1) arrives, it forgets the weights learned for the previous elements. I tried several hidden-layer sizes and two output neurons, but the problem remains the same.
How can I solve this problem? Why does my network not retain what it learned for previous segments, keeping only the last one? What should I change in the network to fix this?
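For reference, a minimal sketch of the shuffle-and-split described above, with hypothetical variable names (x = 3000-by-10 feature matrix, y = 3000-by-1 label vector); xtest/ytest match the names used in the code further down:
N   = size(x,1);
idx = randperm(N);                      % shuffle the segments
nTr = round(0.70*N);                    % 70% training
nVa = round(0.15*N);                    % 15% validation
xtrain = x(idx(1:nTr),:);           ytrain = y(idx(1:nTr));
xvalid = x(idx(nTr+1:nTr+nVa),:);   yvalid = y(idx(nTr+1:nTr+nVa));
xtest  = x(idx(nTr+nVa+1:end),:);   ytest  = y(idx(nTr+nVa+1:end));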

Answers (1)

Greg Heath
Greg Heath on 8 Jul 2016
Edited: Greg Heath on 8 Jul 2016
1. If you are using PATTERNNET, the targets should be either [ 1; 0 ] or [ 0; 1 ] (see the sketch below).
2. 2500/500 is too imbalanced. There are several remedies. The simplest solution is to make 4 copies of the smallest class so the ratio is 2500/2500 (also sketched below).
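A minimal sketch of both remedies, assuming x and y hold the features and 0/1 labels described in the question; patternnet and ind2vec are Neural Network Toolbox functions:
% Remedy 2: append 4 copies of the minority class (label 0 = snore)
isSnore = (y == 0);
xbal = [x; repmat(x(isSnore,:), 4, 1)];   % snore count: 500 + 4*500 = 2500
ybal = [y; repmat(y(isSnore),   4, 1)];   % now balanced 2500/2500
% Remedy 1: two-element targets, [1;0] = snore, [0;1] = no snore
t   = full(ind2vec(ybal' + 1));           % labels 0/1 -> class indices 1/2
net = patternnet(10);                     % 10 hidden neurons, as in the question
net = train(net, xbal', t);               % inputs are features-by-samples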
Thank you for formally accepting my answer
Greg
PS For debugging, make life easy and use a 500/500 random sample of the 2500/2500 (one way to draw it is sketched below).
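One hedged way to draw that 500/500 debugging subsample, reusing xbal/ybal from the sketch above:
idx0 = find(ybal == 0);  idx0 = idx0(randperm(numel(idx0), 500));
idx1 = find(ybal == 1);  idx1 = idx1(randperm(numel(idx1), 500));
sel  = [idx0; idx1];                      % 500 snore + 500 no-snore
xdbg = xbal(sel,:);
ydbg = ybal(sel);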
  1 comment
Odrisso
Odrisso on 8 Jul 2016
Hi, the problem is not with the classes; it is with the weights of the neural network. As I said, if the last element of the output is 0, then the network can classify all the 0s and gives errors for all the 1s; and if the last element is 1, it can classify all the 1s and cannot classify the zeros. The problem is in the weight updating. Here is my code for your understanding.
x = xshuffled;
y = yshuffled;
m = length(y);                       % number of training samples
numFeatures = size(x,2);
alpha = 1;                           % learning rate
g = @(z) 1.0 ./ (1.0 + exp(-z));     % sigmoid activation (anonymous function; inline() is deprecated)
Maxitr = 50;                         % number of iterations per sample
numofhiddenneuroninhl1 = 10;         % hidden-layer size
numofoutputneuron = 1;
% Initialize the weights (single hidden layer)
V  = rand(numFeatures, numofhiddenneuroninhl1);        % input -> hidden
VB = rand(numofhiddenneuroninhl1, 1);                  % hidden biases
W  = rand(numofhiddenneuroninhl1, numofoutputneuron);  % hidden -> output
WB = rand(numofoutputneuron, 1);                       % output bias
for ii = 1:m
    [pp,qq] = size(V);
    d1 = 0;
    for iii = 1:Maxitr
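        % NOTE: the inner loop applies Maxitr consecutive updates to this
        % single sample before the next sample is seen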
        % Forward propagation
        % Layer 1 (hidden)
        for jk = 1:numofhiddenneuroninhl1
            z(jk) = x(ii,:)*V(:,jk) + VB(jk);
            a(jk) = g(z(jk));
        end
        % Layer 2 (output)
        for ik = 1:numofoutputneuron
            zout(ik) = a*W(:,ik) + WB(ik);
            aout(ik) = g(zout(ik));
        end
        % Backpropagation
        errorValue(ii,iii) = (1/2)*((y(ii) - aout).^2);
        d1 = (aout - y(ii))*aout*(1 - aout);   % output-layer delta
        W1 = W';
        % Update of layer 1 (input -> hidden) weights
        for jj = 1:pp
            for jjj = 1:qq
                V(jj,jjj) = V(jj,jjj) - alpha*d1*W1(1,jjj)*a(jjj)*(1-a(jjj))*x(ii,jj);
            end
        end
        % Update of layer 1 bias weights
        for ij = 1:numel(VB)                   % numel, not size(VB)
            VB(ij) = VB(ij) - alpha*d1*W1(1,ij)*a(ij)*(1-a(ij));
        end
        % Update of layer 2 (hidden -> output) weights
        for j = 1:numel(W)
            W(j) = W(j) - alpha*d1*a(j);
        end
        % Update of layer 2 bias weight
        WB = WB - alpha*d1;
    end
end
%% Validation (forward pass only)
xval = xtest;
yval = ytest;
m1 = length(yval);
Vval  = V;
VBval = VB;
Wval  = W;
WBval = WB;
for ii = 1:m1
    % Layer 1 (hidden)
    for jk = 1:numofhiddenneuroninhl1
        zval(jk) = xval(ii,:)*Vval(:,jk) + VBval(jk);
        aval(jk) = g(zval(jk));
    end
    % Layer 2 (output): single output neuron, so column 1
    % (the original indexed Wval(:,ik) with a stale loop variable)
    zoutval = aval*Wval(:,1) + WBval(1);
    aoutval(ii) = g(zoutval);
end
Yscore = aoutval';
Ypred = round(Yscore);                 % threshold the sigmoid output at 0.5
%% Model accuracy
corrc = sum(Ypred == ytest)/length(ytest) * 100;
fprintf('Prediction Accuracy: %0.2f%%\n', corrc);
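For reference, a minimal sketch of the conventional loop nesting for per-sample (stochastic) gradient descent: epochs on the outside and samples on the inside, so every segment is revisited on every pass. It reuses the variables defined above (x, y, m, V, VB, W, WB, g, alpha, Maxitr) and vectorizes the same per-sample updates:
for epoch = 1:Maxitr
    for ii = randperm(m)                      % one update per sample per epoch
        % forward pass
        a    = g(x(ii,:)*V + VB');            % 1-by-10 hidden activations
        aout = g(a*W + WB);                   % scalar output
        % backward pass (same updates as the element-wise loops above)
        d1 = (aout - y(ii))*aout*(1 - aout);
        dh = d1*(W') .* a .* (1 - a);         % 1-by-10 hidden deltas
        V  = V  - alpha * (x(ii,:)' * dh);
        VB = VB - alpha * dh';
        W  = W  - alpha * d1 * a';
        WB = WB - alpha * d1;
    end
end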
