When to use "ones" initialization in deep learning?
Claudia Borredon
on 21 Apr 2021
Answered: Divya Gaddipati
on 11 May 2021
Hello everyone
I wanted to know when (or if) it is useful to use the "ones" weight initialization option in fullyConnectedLayer, or whether it should be avoided.
Moreover, what is the point of that kind of initialization? Is it usually used only as a reference, or is it actively used in practice?
Thank you in advance.
0 comments
Accepted Answer
Divya Gaddipati
on 11 May 2021
Intuitively, with a constant weight initialization every unit in a layer produces essentially the same output during the initial forward pass, so every weight also receives the same gradient during backpropagation. This makes it very hard for the network to work out which weights should be updated differently, so any constant initialization (zeros or ones) tends to produce poor results and is better avoided.
Initializing the weights with values sampled from a random distribution, rather than with constants such as zeros or ones, helps the network train better and faster. The randomness breaks the symmetry between neurons and prevents them from all learning the same features, and it suits gradient-based optimization because each weight receives a distinct update from the start. Hence, random weight initialization is what is actively used.
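For concreteness, here is a minimal sketch of how the two choices are selected when constructing the layer (the output size of 10 is just an example; 'glorot' is the default random initializer for fullyConnectedLayer):
% Constant initialization: every weight starts at 1, so all neurons in the
% layer compute the same output and receive the same gradient update.
layerOnes = fullyConnectedLayer(10, 'WeightsInitializer', 'ones');
% Random (Glorot) initialization, the default: neurons start out different,
% so they can learn different features.
layerGlorot = fullyConnectedLayer(10, 'WeightsInitializer', 'glorot');
% A custom initializer can also be supplied as a function handle,
% e.g. small Gaussian-distributed values.
layerCustom = fullyConnectedLayer(10, 'WeightsInitializer', @(sz) 0.01*randn(sz));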
0 comments
More Answers (0)