Unexpected loss reduction using custom training loop in Deep Learning Toolbox
I have created a custom training loop following the documentation example: https://www.mathworks.com/help/releases/R2023a/deeplearning/ug/train-network-using-custom-training-loop.html
However, since I use the same loss function for training and validation, I have altered the "modelLoss" function so that the call to "forward" happens outside of it. For example:
[Y,state] = forward(net,X);
[loss,gradients] = dlfeval(@modelLoss,net,Y,T);
function [loss,gradients] = modelLoss(net,Y,T)
% Calculate cross-entropy loss.
loss = crossentropy(Y,T);
% Calculate gradients of loss with respect to learnable parameters.
gradients = dlgradient(loss,net.Learnables);
end
Now the training loss does not decrease as expected. How can I resolve this issue?
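For comparison, the linked documentation example keeps the "forward" call inside the function that dlfeval evaluates, so that automatic differentiation can trace the forward pass before dlgradient is called. A minimal sketch of that documented pattern (variable names follow the example above):

```matlab
% Sketch of the documented pattern: both forward and dlgradient run
% inside the function evaluated by dlfeval, so the forward pass is
% traced for automatic differentiation.
[loss,gradients,state] = dlfeval(@modelLoss,net,X,T);

function [loss,gradients,state] = modelLoss(net,X,T)
    % Forward pass of the network (updates state for e.g. batch norm).
    [Y,state] = forward(net,X);
    % Cross-entropy loss between predictions and targets.
    loss = crossentropy(Y,T);
    % Gradients of the loss with respect to learnable parameters.
    gradients = dlgradient(loss,net.Learnables);
end
```

To reuse the same loss for validation without computing gradients, the loss line (crossentropy) can be called directly on predictions obtained outside dlfeval; only the gradient computation needs the traced forward pass.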