Is it normal that one neuron in the hidden layer is sufficient for a neural network with 56 inputs and one output?
I am training a neural network with the following specifications:
Number of inputs: 56
Number of outputs: 1
Number of samples in training data: 2000
Training function: trainbr
Training goal: 1e4
From what I observe, when I start with one neuron in the hidden layer, the neural network works well even on data that was not used for training. When I increase the number of neurons to 5, 10, etc., and also the number of hidden layers, e.g. [5 5], [5 5 5], and so on, the result does not improve. In fact, the neural network starts to overfit if I do not restrict my goal to 1e4. By overfitting, I mean that the training curve and the testing curve start moving apart from each other around an MSE of 1e4. I have read in several places online that having just one neuron in the hidden layer might be risky when the network is deployed as a generalized model. Does this mean that the network may not perform well when exposed to more and more data? Any input on this would be appreciated. Thanks.
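For reference, here is a minimal sketch of the setup described above, assuming the inputs are in a 56-by-2000 matrix `x` and the targets in a 1-by-2000 vector `t` (these variable names are assumptions, not from the original post):

```matlab
% Minimal sketch of the described configuration (x: 56-by-2000, t: 1-by-2000).
net = fitnet(1, 'trainbr');   % one hidden neuron, Bayesian regularization
net.trainParam.goal = 1e4;    % stop once training MSE reaches the stated goal
[net, tr] = train(net, x, t);
yNew = net(xNew);             % evaluate on data not used for training
```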
Accepted Answer
Greg Heath
on 17 Nov 2018
If your training data is sufficient, the most stable designs occur when the training goal is satisfied with a minimum number of hidden nodes.
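One way to apply this advice is to grow the hidden layer only until the training goal is first met. A sketch, again assuming `x` and `t` hold the training data (the variable names and the search range 1:10 are assumptions):

```matlab
% Find the smallest hidden layer size H that satisfies the training goal.
goal = 1e4;
for H = 1:10
    net = fitnet(H, 'trainbr');
    net.trainParam.goal = goal;
    [net, tr] = train(net, x, t);
    if tr.best_perf <= goal
        break   % smallest H meeting the goal; stop growing the network
    end
end
```

The loop stops at the first size that reaches the goal, which matches the principle above: extra hidden nodes beyond that point add free parameters without improving generalization.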
Hope this helps
Thank you for formally accepting my answer
Greg
More Answers (0)