How do I construct a neural network?
Thirunavukkarasu
on 26 Aug 2014
Commented: Greg Heath on 30 Aug 2014
How do I construct a neural network that has two layers: four weights linking the input to the hidden layer with no biases, and two weights linking the hidden layer to the output with one bias at the output neuron?
0 comments
Accepted Answer
Greg Heath
on 28 Aug 2014
What you are asking doesn't quite make sense. For a standard universal-approximation I-H-O net, the number of weights is
Nw = (I+1)*H+(H+1)*O
where the 1s correspond to biases. If either bias node is removed, the net is no longer a universal approximator.
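As a quick check (a minimal sketch, assuming the Neural Network Toolbox functions feedforwardnet, configure, and getwb; the sizes I = 2, H = 2, O = 1 are illustrative choices, not from the question), you can count the weights and biases directly:
% Sketch: verify Nw = (I+1)*H + (H+1)*O for a standard I-H-O net
I = 2; H = 2; O = 1;
net = feedforwardnet(H);
net = configure(net, rand(I,4), rand(O,4)); % fix the input/output sizes
Nw = (I+1)*H + (H+1)*O % 9
numel(getwb(net)) % also 9: all weights and biases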
It looks like you want
Nw = I*H+(H+1)*O
with
I*H = 4
H*O = 2
O = 1
Consequently, O = 1, H = 2, I = 2, and
size(input) = [ 2 N ]
size(target) = [ 1 N ]
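For reference (a minimal sketch along the same lines, assuming patternnet, configure, and getwb; the random data is only there to set the dimensions), removing the first-layer bias leaves I*H + (H+1)*O = 7 weights:
% Sketch: weight count with the first-layer bias removed
net = patternnet(2);
net.biasConnect = [ 0; 1 ]; % drop the hidden-layer bias
net = configure(net, rand(2,4), rand(1,4));
numel(getwb(net)) % 2*2 + (2+1)*1 = 7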
Since you don't specify regression or classification, let's try classification with the exclusive-or (XOR) function. Typically, I want the explained target variance, i.e., the coefficient of determination Rsquared (see Wikipedia), to be >= 0.99.
clear all, clc
x = [ 1 1 -1 -1 ; -1 1 1 -1 ]; % XOR inputs
t = [ 0 1 0 1 ]; % XOR targets
MSE00 = var(t) % 0.33333 Reference MSE
net = patternnet(2);
net.biasConnect = [ 0; 1 ]; % No input (hidden-layer) bias
net.divideFcn = 'dividetrain'; % No validation or test subsets
rng(0)
for i = 1:10
    net = configure(net, x, t); % Reinitialize weights each trial
    [net, tr, y(i,:), e] = train(net, x, t);
    R2(i,:) = 1 - mse(e)/MSE00;
end
y = y % y = 0.5*ones(10,4)
R2 = R2 % R2 = 0.25*ones(10,1), far from 0.99!
Obviously, you can get negligible error with the input bias restored.
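For comparison (a minimal sketch using the same x, t, and MSE00 as above; not run here, but with both biases connected the net is a universal approximator and should reach Rsquared >= 0.99):
% Sketch: same XOR trial with the default bias configuration
net = patternnet(2); % default: biasConnect = [1;1]
net.divideFcn = 'dividetrain';
rng(0)
net = configure(net, x, t);
[net, tr, y0, e] = train(net, x, t);
R2full = 1 - mse(e)/MSE00 % expect R2full close to 1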
Hope this helps.
Thank you for formally accepting my answer
Greg
2 comments
Greg Heath
on 30 Aug 2014
You have designated 3 node layers, which makes intuitive sense.
HOWEVER, THE CONVENTION IS TO COUNT WEIGHT LAYERS, OF WHICH YOU HAVE 2:
a hidden weight layer and an output weight layer.
Therefore, this is considered a 2-layer net!
I don't like this convention; however, it is what it is.
The nodes you have labeled as being in the input layer are considered FAN-IN UNITS, whereas the nodes in the other two layers are considered NEURONS.
Nevertheless, without a bias weight in the first weight layer (as I have illustrated with the XOR), you do not have a universal approximator.
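You can see the convention in MATLAB itself (a minimal sketch, assuming patternnet and the standard network object properties):
% Sketch: MATLAB counts weight layers, not node layers
net = patternnet(2);
net.numLayers % 2: the hidden and output weight layers
net.numInputs % 1: the fan-in "input" source, not counted as a layer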
Do you have any questions?
Greg
More Answers (0)