Trying to include learning rate and momentum in the sgdmupdate function when using multiple GPUs

I am working on modifying the example given in
"https://www.mathworks.com/help/deeplearning/ug/train-network-in-parallel-with-custom-training-loop.html?searchHighlight=sgdmupdate%20multiple%20gpu&s_tid=srchtitle_sgdmupdate%2520multiple%2520gpu_3".
The modification is that I am trying to pass the learning rate and momentum explicitly to the sgdmupdate function. That is,
[dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables,workerGradients,workerVelocity,learnRate,momentum);
instead of
[dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables,workerGradients,workerVelocity); (line 108 in the example).
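A minimal sketch of the intended change, assuming the learning rate and momentum are defined as scalar variables before the spmd block so each worker receives a copy (the names learnRate and momentum and the values below are only placeholders):
% Sketch only: hyperparameter names and values are illustrative.
learnRate = 0.01;
momentum = 0.9;
spmd
    % ... read the mini-batch and compute workerGradients via dlfeval, as in the example ...
    [dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables, ...
        workerGradients,workerVelocity,learnRate,momentum);
end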
However, that modification results in an error when using 4 GPUs (there is no error with a single GPU).
Could anyone help me with this?
Thanks very much!
3 comments
Joss Knight on 10 Mar 2022
Also make sure all your gradients are finite with something like assert(all(cellfun(@(x)all(isfinite(x(:))),workerGradients))). Sometimes when you mess with the learn rate you can get NaNs and infinities in the gradients.
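A sketch of where such a check could sit in the example's worker loop, assuming workerGradients is the Learnables-format table returned by dlfeval in the example (so the check runs over its Value column); the use of extractdata, gather, and the labindex message are illustrative additions, not part of the example:
% Sketch only: place just before the sgdmupdate call inside the worker loop.
% Assumes workerGradients.Value is a cell array of (possibly gpuArray-backed) dlarrays.
ok = cellfun(@(g) gather(all(isfinite(extractdata(g(:))))), workerGradients.Value);
assert(all(ok), "Non-finite gradients detected on worker " + labindex);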
hyeonjin kim on 1 Apr 2022
Dear Joss,
First of all, I am sorry to get back to you so late.
Thank you very much for your help. I will try to fix the problem following your comments once our workstation with GPUs becomes available again.
Thanks very much!


Answers (0)
