How can I freeze layers when training a network with multiple outputs, and reduce the training time?
Since I am training a model with multiple outputs, I run into issues using deepNetworkDesigner and its training option.
So I followed the custom training method from this example, freezing the network's layers with the code below:
lgraph = layerGraph(net);    % extract the layer graph from the network
target = 290;                % number of layers to freeze
for i = 1:target
    try
        % Set the learn rate factors of this layer to 0
        L = freezeWeights(lgraph.Layers(i));
        lgraph = replaceLayer(lgraph,lgraph.Layers(i).Name,L);
    catch
        % Layer has no learnable parameters, so skip it
    end
end
net = dlnetwork(lgraph);
I checked that WeightLearnRateFactor and BiasLearnRateFactor have become zero, which means the layers are frozen.
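For reference, this is the kind of check I did (the layer index is just an illustration; it assumes that layer has weights):

% Illustrative check on one of the frozen layers
net.Layers(10).WeightLearnRateFactor   % displays 0 after freezing
net.Layers(10).BiasLearnRateFactor     % displays 0 after freezing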
However, training still takes too much time.
So:
Q1: Is this the right way to freeze layers when training a multiple-output network?
Q2: How can I reduce the training time by skipping the layers that are frozen? (One idea is sketched after the base code below.)
Here is the base code from the example that I used for training the network:
% Evaluate the model loss, gradients, and state using automatic differentiation
[loss,gradients,state] = dlfeval(@modelLoss,net,X,T1,T2);

function [loss,gradients,state] = modelLoss(net,X,T1,T2)
    % Forward pass that returns both network outputs
    [Y1,Y2,state] = forward(net,X,Outputs=["softmax" "fc_2"]);
    % Classification loss for the labels and regression loss for the angles
    lossLabels = crossentropy(Y1,T1);
    lossAngles = mse(Y2,T2);
    loss = lossLabels + 0.1*lossAngles;
    % Gradients with respect to all learnable parameters
    gradients = dlgradient(loss,net.Learnables);
end
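Regarding Q2, one idea I am considering (just a sketch: it assumes dlgradient and sgdmupdate accept a subset of the net.Learnables table, that the frozen layers are the first 290 as above, and that vel, learnRate and momentum are set up as in the example) is to differentiate only with respect to the unfrozen parameters and to update only those rows:

% Sketch only: restrict gradient computation and updates to the unfrozen layers.
% frozenNames, isTrainable and modelLossTrainable are names made up for this sketch.
frozenNames = string({net.Layers(1:290).Name});
isTrainable = ~ismember(net.Learnables.Layer,frozenNames);

[loss,gradients,state] = dlfeval(@modelLossTrainable,net,X,T1,T2,isTrainable);
net.State = state;

% Update only the trainable subset of parameters, then write it back into the network
trainables = net.Learnables(isTrainable,:);
[trainables,vel] = sgdmupdate(trainables,gradients,vel,learnRate,momentum);
net.Learnables(isTrainable,:) = trainables;

function [loss,gradients,state] = modelLossTrainable(net,X,T1,T2,isTrainable)
    [Y1,Y2,state] = forward(net,X,Outputs=["softmax" "fc_2"]);
    loss = crossentropy(Y1,T1) + 0.1*mse(Y2,T2);
    % Differentiate only with respect to the unfrozen parameters
    gradients = dlgradient(loss,net.Learnables(isTrainable,:));
end

Even then, the forward pass still runs through the whole network every iteration, so I would expect this to save time only on the backward pass and the parameter updates.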