Neural network: how can I get the correct output without using the function "sim"? The "sim" output vs. my own calculation with the trained network's weights and biases

I want to calculate the neural network output myself, using the weights produced by the Neural Network Toolbox, but my calculated output is different from sim(net,X).
1. I made input data and target data:
M = [1:1:10];
M = [M,M,M,M,M].*rand();
M = [M,M].*rand();
M = [M,M,M,M,M].*10;
M = [M,M].*10;
a=M.*rand().*2^rand()+5*rand()-5*rand();
b=M.*rand().*2^rand()+5*rand()-5*rand();
c=M.*rand().*2^rand()+5*rand()-5*rand();
n=rand(1,1000)*0.05;
y = 5*a + b.*c + 7*c + n;
x=[a; b; c];
t=y;
and set up the feedforward neural network:
hiddenLayerSize = 4;
net = feedforwardnet(hiddenLayerSize);
net.divideFcn = 'dividerand'; % divide the data randomly
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net = train(net,x,t);
The output of the trained network for the input data X = [22,25,21]' is
X = [22,25,21]'
y_sim=sim(net,X)
That is the result using the function "sim".
Next, I calculate the output myself with the trained network's weight parameters, extracted as follows:
b1 = net.b{1};
b2 = net.b{2};
IW = net.IW{1,1};
LW = net.LW{2,1};
and calculate the output for the input X = [22,25,21]':
X = [22,25,21]'
y_my = b2 + LW * tanh(b1 + (IW * X))
I really don't know why these two outputs, y_my and y_sim, are different.
Here is the full code:
clc
clear all
rng(4151945);
M = [1:1:10];
M = [M,M,M,M,M].*rand();
M = [M,M].*rand();
M = [M,M,M,M,M].*10;
M = [M,M].*10;
a=M.*rand().*2^rand()+5*rand()-5*rand();
b=M.*rand().*2^rand()+5*rand()-5*rand();
c=M.*rand().*2^rand()+5*rand()-5*rand();
n=rand(1,1000)*0.05;
y = 5*a + b.*c + 7*c + n;
x=[a; b; c];
t=y;
% Set the number of neurons in the hidden layer
hiddenLayerSize = 4;
net = feedforwardnet(hiddenLayerSize);
net.divideFcn = 'dividerand'; % divide the data randomly
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net = train(net,x,t);
% syms p q r real
% X = [p,q,r]';
X = [22,25,21]'
b1 = net.b{1};
b2 = net.b{2};
IW = net.IW{1,1};
LW = net.LW{2,1};
y_my = b2 + LW * tanh(b1 + (IW * X))
y_sim = sim(net,X)
y1compare = 5*X(1) + X(2)*X(3) + 7*X(3) % the noiseless target formula, for reference
Is there some calculation inside the function "sim" that I am not aware of? What am I missing? Please let me know.
2. This question is different from the one above. I thought the output would be more accurate if the number of neurons in the hidden layer was large, but in my case, the more neurons, the worse the performance. How do I find the appropriate number of neurons? Please give me some tips.
Thanks.

Accepted Answer

There is a scaling applied to the inputs and outputs that is not being considered in the example above: all of the inputs are first mapped to the range [-1, 1], and so are the targets.
These mappings are held in the following locations:
net.inputs{1}
net.outputs{2}
If you look at these, you will see a mapminmax processing function being applied to the data, as described in the mapminmax documentation.
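For reference, mapminmax with its default settings rescales each row of the data linearly onto [ymin, ymax] = [-1, 1]:
xnorm = (ymax - ymin) .* (x - xmin) ./ (xmax - xmin) + ymin
where xmin and xmax are the per-row extremes recorded from the training data.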
If you wish to take this into account, you need to apply the mapminmax function yourself, using the processSettings stored in the inputs/outputs:
X1 = mapminmax('apply',X,net.inputs{1}.processSettings{1})
y_my = purelin(b2 + LW * tansig(b1 + (IW * X1)))
yMY = mapminmax('reverse',y_my,net.outputs{2}.processSettings{1})
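With those two extra steps, yMY should agree with the sim output up to floating-point rounding. A quick check, reusing net, X, b1, b2, IW and LW from the question:
y_sim = sim(net,X);
max(abs(yMY - y_sim)) % should be on the order of machine precision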

5 Comments

Thanks Brendan, but I have an additional question.
1. I want to calculate the output without the mapping. Is there a way to create a neural network that does not normalize its inputs, i.e. one that does not use the network's 'mapminmax' processFcns?
I just want to calculate the output in the following way, without normalized inputs and without calling mapminmax:
X = [22,25,21]'
y_my = b2 + LW * tanh(b1 + (IW * X))
2. This question is different from the one above. I thought the output would be more accurate if the number of neurons in the hidden layer was large, but in my case, the more hidden layers, the worse the performance. How do I find the appropriate number of hidden layers? Please give me some tips. Thanks.
1. I would not advise removing this function, but if you have a really good reason for it, then you can do the following (but see part 2 as well):
net.outputs{2}.processFcns(1) = [];
net.inputs{1}.processFcns(2) = [];
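Equivalently, you can clear the processing functions before training. This is a minimal sketch, assuming the Toolbox defaults of processFcns = {'removeconstantrows','mapminmax'} for both the input and the output; the raw-weight formula then matches sim directly:
net = feedforwardnet(hiddenLayerSize);
net.inputs{1}.processFcns = {};   % remove all input preprocessing (including mapminmax)
net.outputs{2}.processFcns = {};  % remove all target preprocessing
net = train(net,x,t);
y_my = net.b{2} + net.LW{2,1} * tanh(net.b{1} + net.IW{1,1} * X) % now matches sim(net,X)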
2. It is not true that more hidden layers perform better in general; for most applications one hidden layer is perfectly acceptable. Similarly, the number of neurons is another parameter one may wish to change. In general I would advise using the average of the number of inputs and outputs of the network. Again, this is a generalization and there are no hard rules. If you are unsure, then: a) you should probably not be removing the input and output processing functions, and b) you can train multiple different models and compare their effectiveness. I would always recommend both a) and b) here.
Question 1: Thanks Brendan, the problem is completely resolved.
Question 2: I phrased that question badly and have just edited it above; I am sorry about that. My question is not about the number of hidden layers but about the number of neurons in a hidden layer.
What I mean is: is it true that the output would be more accurate if the number of neurons in the hidden layer were large?
Many thanks.
It is not generally true that having more neurons in a hidden layer will produce a more accurate model.
The most common recommendation is to use a number of neurons somewhere between the number of inputs and the number of outputs. Others may have different guidelines for selecting the number of neurons; I have also seen the suggestion of no more than twice the number of inputs.
These are of course only guidelines. You can try several models within that range and compare their predictive performance, and I would recommend doing so, since the number of neurons that works best for your specific model is unknown. Luckily the network is fast to train, so you can start at 2 neurons, ratchet it up one extra neuron at a time to 6, and choose the model with the best results; a minimal sketch of such a sweep is below.
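A sketch of such a sweep, reusing x and t from the question (the held-out test indices come from the training record tr):
bestPerf = Inf;
for h = 2:6
    net_h = feedforwardnet(h);
    [net_h, tr] = train(net_h, x, t);
    yTest = net_h(x(:, tr.testInd));                 % predictions on the held-out test samples
    perfTest = perform(net_h, t(tr.testInd), yTest); % test-set MSE
    fprintf('h = %d neurons: test MSE = %.4g\n', h, perfTest);
    if perfTest < bestPerf
        bestPerf = perfTest;
        bestNet = net_h;                             % keep the best model so far
    end
end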
I like your advice; it was helpful for me too. Thank you, Mr Brendan.
