How is a neural network's output calculated?

For example
function Example_3()
net = feedforwardnet(1);
p = [[2;2] [1;-2] [-2;2] [-1;1]]
t = [1 2 1 2]
net.layers{1}.transferFcn = 'logsig';
net.layers{2}.transferFcn = 'logsig';
net = train(net,p,t);
wb = formwb(net,net.b,net.IW,net.LW); % note: the weight properties are net.IW and net.LW (capitalized)
[b,iw,lw] = separatewb(net,wb);
end
Suppose the trained values are:
iw = [-1.60579942154570, 5.53933429980683]
lw = -24.9335783159999
bias = -1.16445538542225 for the hidden neuron
bias = 7.83599936414935 for the output neuron
I was able to calculate the correct values when using a perceptron, but not with this network, for some reason.
I calculate the output using logsig(logsig((IW1*Input1)+(IW2*Input2)+bias1)*LW1+bias2)
If I input -1 and 1, how is the output calculated as 1.9994?
Shouldn't the output be between 0 and 1 because of the logsig?

1 comment

Greg Heath on 6 Feb 2018
When you choose examples, it is worthwhile to first consider continuous functions like the ones in the help and doc examples, AND to include plots of the inputs, the targets, and the targets vs. the inputs.


Accepted Answer

Greg Heath on 6 Feb 2018


When you calculate the output of a net, you have to take into account that, by default, the inputs and targets are scaled (with mapminmax, to the range [-1, 1]), and the net's raw output is then mapped back to the original target range.
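As a sketch of that default scaling, here is the full calculation re-implemented in Python rather than MATLAB. The mapminmax ranges are read off the question's p and t; which of the two input weights pairs with which input is an assumption, since the question does not say:

```python
import math

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def mapminmax_apply(v, vmin, vmax):
    # forward scaling, as in MATLAB's default mapminmax: [vmin, vmax] -> [-1, 1]
    return 2.0 * (v - vmin) / (vmax - vmin) - 1.0

def mapminmax_reverse(y, tmin, tmax):
    # undo the target scaling: [-1, 1] -> [tmin, tmax]
    return (y + 1.0) * (tmax - tmin) / 2.0 + tmin

# values from the question (1 hidden neuron, logsig hidden and output layers)
iw = [-1.60579942154570, 5.53933429980683]   # pairing with inputs assumed
b1 = -1.16445538542225
lw = -24.9335783159999
b2 = 7.83599936414935

def net_output(p1, p2):
    # both input rows of p = [[2;2] [1;-2] [-2;2] [-1;1]] span [-2, 2]
    x1 = mapminmax_apply(p1, -2.0, 2.0)
    x2 = mapminmax_apply(p2, -2.0, 2.0)
    h = logsig(iw[0]*x1 + iw[1]*x2 + b1)   # hidden layer, in (0, 1)
    y = logsig(lw*h + b2)                  # output layer, in (0, 1)
    # targets t = [1 2 1 2] span [1, 2]; the raw output is mapped back there,
    # which is why the final value is NOT confined to [0, 1]
    return mapminmax_reverse(y, 1.0, 2.0)
```

With these numbers the final value always lands inside the target range [1, 2]; exactly where it lands for input (-1, 1) depends on the assumed weight-to-input pairing, but the reverse target mapping is what explains an output like 1.9994.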
Hope this helps
Thank you for formally accepting my answer
Greg

1 comment

AlAwadhi on 11 Feb 2018 (edited 12 Feb 2018)
Edit: ignore this; typing it out made me notice my error. Thank you so much. Hello Greg,
First of all thank you for answering
I already did that. I can get the correct answers if the output is between -1 and 1, but if I change the outputs to any other range, my results differ from MATLAB's. For example, I get 9.5 where MATLAB gets 8.3. The range is correct but the answers are different.
I have a neural network with one hidden layer that uses tansig as its activation function. The output layer also uses tansig.
This is what I did in my Excel sheet:
1. First I scaled the inputs with
   Scaled input = (tanhMax-tanhMin)*(input-inputMin)/(inputMax-inputMin) + tanhMin
2. Each hidden neuron is
   HiddenNeuron = tanh(sum(inputs*inputWeights))
3. Each output neuron is
   OutputNeuron = tanh(sum(hidden*hiddenWeights))
4. I scaled the outputs back to the same range as my target range:
   Final output = (TargetMax-TargetMin)*(output-tanhMin)/(tanhMax-tanhMin) + TargetMin
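As a sketch, the four steps above in Python, for one hidden layer of tansig (tanh) neurons. Note that steps 2 and 3 as written omit the bias terms; the code below includes them, since the trained net has biases, and their omission may well be the error mentioned in the edit. The function names and argument layout here are illustrative, not MATLAB's:

```python
import math

def scale(v, vmin, vmax):
    # step 1: mapminmax-style forward scaling, [vmin, vmax] -> [-1, 1]
    return 2.0 * (v - vmin) / (vmax - vmin) - 1.0

def unscale(v, tmin, tmax):
    # step 4: reverse scaling, [-1, 1] -> [tmin, tmax]
    return (v + 1.0) * (tmax - tmin) / 2.0 + tmin

def forward(inputs, IW, b1, LW, b2, in_ranges, t_range):
    # step 1: scale each input with its own min/max
    x = [scale(v, lo, hi) for v, (lo, hi) in zip(inputs, in_ranges)]
    # step 2: hidden neurons, tanh of weighted sum PLUS the hidden bias
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(IW, b1)]
    # step 3: output neuron, tanh of weighted sum PLUS the output bias
    y = math.tanh(sum(w * hi for w, hi in zip(LW, h)) + b2)
    # step 4: map back to the original target range
    return unscale(y, *t_range)
```

With zero weights and biases the network output is tanh(0) = 0, which unscales to the midpoint of the target range, a quick sanity check that the reverse mapping is wired up correctly.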
I'm not sure what I did wrong, but everything works only when the target range is supposed to be between -1 and 1.
Thank you


More Answers (1)

Greg Heath on 12 Feb 2018


If your outputs are constrained to [0, 1], use SOFTMAX.
If your outputs are constrained to [-1, 1], use TANH.
Otherwise, use LINEAR.
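A minimal Python sketch of the three choices (MATLAB's softmax, tansig, and purelin), showing the output range each one guarantees; the helper names here are hypothetical, not library functions:

```python
import math

def softmax(z):
    # outputs lie in (0, 1) and sum to 1, so they fit [0, 1]-constrained targets
    m = max(z)
    e = [math.exp(v - m) for v in z]  # subtract max for numerical stability
    s = sum(e)
    return [v / s for v in e]

def tanh_out(z):
    # each output lies in (-1, 1)
    return [math.tanh(v) for v in z]

def linear_out(z):
    # unbounded: use when targets are not confined to a fixed interval
    return list(z)
```

The point of the advice is simply to match the output activation's range to the target range, so that no post-hoc rescaling fight (like the [1, 2] targets vs. logsig above) is needed.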
Hope this helps.
Thank you for formally accepting my answer
Greg
