Why does training my network in batches give different results than training in a single step?

I would like to know why training my network in batches gives different results than training in a single step.
If I train my neural network 10 times for 10 epochs each, I get a different result than if I had trained it once for 100 epochs.

Accepted Answer

MathWorks Support Team on 27 June 2009
When training a neural network, each input vector is presented to the network at least once. One complete presentation of all input vectors is called an epoch.
Most of the training algorithms have parameters that are adjusted adaptively during training (for example, mu in trainlm). When you stop and restart training, the weights and biases carry over, but these adaptive parameters revert to their initial values. As far as those parameters are concerned, each restarted run behaves like a fresh 10-epoch training rather than a continuation, so the optimization follows a different trajectory than one uninterrupted run. This is why training a network 10 times for 10 epochs is not equivalent to training it once for 100 epochs.
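You can see the effect with a small comparison. The following is a minimal sketch, not part of the original answer; it assumes the Deep Learning Toolbox functions fitnet, train, perform and the simplefit_dataset example data, and uses 'dividetrain' so that random data division does not also change between runs.

% Minimal sketch: one continuous 100-epoch run vs. ten restarted 10-epoch runs.
[x, t] = simplefit_dataset;              % small built-in example dataset

% One continuous run: trainlm's adaptive parameter mu evolves over all 100 epochs.
rng(0);                                  % same initial weights in both cases
netA = fitnet(10, 'trainlm');
netA.divideFcn = 'dividetrain';          % no random data division, to isolate the effect
netA.trainParam.epochs = 100;
netA.trainParam.showWindow = false;
netA = train(netA, x, t);

% Ten restarted runs: weights and biases carry over between calls to train,
% but trainParam values such as mu are reset to their initial value each time.
rng(0);
netB = fitnet(10, 'trainlm');
netB.divideFcn = 'dividetrain';
netB.trainParam.epochs = 10;
netB.trainParam.showWindow = false;
for k = 1:10
    netB = train(netB, x, t);
end

fprintf('MSE after one 100-epoch run : %g\n', perform(netA, t, netA(x)));
fprintf('MSE after ten 10-epoch runs : %g\n', perform(netB, t, netB(x)));

The two final errors generally differ, even though both networks start from the same initial weights and see the same total number of epochs.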
  3 comments
Greg Heath on 8 April 2018
According to the previous comment, only mu reverts to the initial value.
Greg Heath on 9 April 2018
My answer was based on the results of a simple comparison test.
Try it.
Greg
