Custom Neural Network (manually re-implementing 'patternnet' using 'network')
Hello everyone,
I want to eventually build a custom network with a complex structure. I thought I should start with the simplest case, so I tried building a 2-layer feedforward network, not with the app but manually with the 'network' function, using the code below:
net = network(1,2,[1;1],[1;0],[0,0;1,0],[0,1]); % 1 input, 2 layers, biases on both, input -> layer 1, layer 1 -> layer 2, layer 2 is the output
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.inputWeights{1}.initFcn = 'initzero'; % set input weight init function
net.inputWeights{1}.learnFcn = 'learnp';
net.layerWeights{2,1}.initFcn = 'initzero'; % layer 2 receives its weights from layer 1, not from the input
net.layerWeights{2,1}.learnFcn = 'learnp';
net.layers{1}.size = 50; % H = 50 hidden units
net.initFcn = 'initlay';
net.trainFcn = 'trainscg';
net.performFcn = 'crossentropy';
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 75/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 5/100;
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotconfusion', 'plotroc'};
net = configure(net,Samples,Targets);
net = init(net);
[net,tr] = train(net, Samples, Targets);

The network looks exactly the same as the one created with 'patternnet', and I tried to set all the parameters the same way. However, when I train on the same dataset I get two completely different results. The results with the app and 'patternnet' make much more sense than those from my hand-built network! My network stops after 2 iterations with poor performance, while the 'patternnet' one runs beyond 200 iterations with fair performance.
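For reference, the 'patternnet' baseline I am comparing against is just this (same hidden layer size and divide ratios; 'trainscg' is already the patternnet default):
net2 = patternnet(50);
net2.divideParam.trainRatio = 75/100;
net2.divideParam.valRatio = 20/100;
net2.divideParam.testRatio = 5/100;
[net2,tr2] = train(net2, Samples, Targets);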
Is there anything I am missing here, or is there any justification for this?
Thank you so much in advance!
Answers (3)
Greg Heath
28 Apr 2017
H = 50 is excessive and probably causes overfitting/training problems.
When starting something new, ALWAYS BEGIN with as many default values as possible and with the example datasets from the MATLAB help/doc documentation.
Then, after experimenting with parameter changes there, apply the method to your own data and experiment with the settings again.
Changing one parameter at a time tends to be fairly foolproof but can be extremely tedious (especially with a large dataset), so you may want to reduce your data to a smaller subset that is still representative of the whole. A decent rule of thumb is 10 to 30 examples per input dimension.
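A minimal sketch of that advice, using a small dataset shipped with the toolbox (patternnet's defaults are 10 hidden units, 'trainscg', and a 70/15/15 random division):
[x,t] = iris_dataset; % example dataset from the documentation
net = patternnet(10); % keep every default to begin with
[net,tr] = train(net,x,t); % train with all-default settings
plotconfusion(t,net(x)) % inspect the result before changing anything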
Hope this helps,
Thank you for formally accepting my answer
Greg
Greg Heath
28 Apr 2017
Edited: Greg Heath, 28 Apr 2017
1. Initialize the RNG to the same initial state.
2. Transform inputs and targets to [-1, 1] before learning, and transform the outputs back with the inverse target transform after learning (see the sketch below).
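A minimal sketch of both points, assuming the Samples and Targets variables from your code:
rng(0); % 1. fix the RNG state so both networks get identical initial weights and data division
[xn,xs] = mapminmax(Samples); % 2. scale each input row to [-1, 1]
[tn,ts] = mapminmax(Targets); % likewise for the targets
[net,tr] = train(net, xn, tn);
y = mapminmax('reverse', net(xn), ts); % invert the target transform after training
Note that patternnet applies 'mapminmax' automatically through its processing functions, while a network built from scratch with 'network' has none, which is one concrete source of the difference you are seeing.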
Hope this helps.
Thank you for formally accepting my answer
Greg