- Try freezing the weights of the original layers by setting WeightLearnRateFactor and BiasLearnRateFactor to zero for each convolution2dLayer, and do the same for each fullyConnectedLayer.
- Alternatively, retrain the complete network without freezing the weights of any layers.
Modifying a pretrained Neural Network
2 views (last 30 days)
Shai Kendler
on 1 Apr 2020
Answered: Srivardhan Gadila
on 8 Apr 2020
I plan to use a pretrained net such as alexnet with an input image of 227*227*5. I exported the net to the Network Designer app and changed the input and first convolution layers according to my requirements. I analyzed the architecture and it seems perfect. Can I trust the new network to be a good starting point, or am I being naive?
Thanks,
Shai
Accepted Answer
Srivardhan Gadila
on 8 Apr 2020
Since the convolution2dLayer and imageInputLayer have been replaced, the output of the imageInputLayer will now be different: the mean used for zero-center normalization was computed on the original training data, and the features output by the replaced convolution layer will also differ and may not be useful. If you are training the network on the new dataset with image input size 227*227*5, none of the above matters. If instead you are using the network for feature extraction and your data is very different from the original data, then the features extracted deeper in the network may be less useful for your task.
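The layer replacement described above can be sketched roughly as follows. This is an untested sketch assuming Deep Learning Toolbox and the AlexNet support package are installed; the layer names 'data' and 'conv1' are AlexNet's actual layer names, and the filter geometry (11x11, 96 filters, stride 4) mirrors AlexNet's first convolution layer:

```matlab
% Load the pretrained network and convert it to a layer graph for editing.
net = alexnet;
lgraph = layerGraph(net.Layers);

% New input layer for 5-channel images. With 'zerocenter' normalization,
% trainNetwork will recompute the channel means from the NEW training data,
% replacing the mean that was computed from the original RGB data.
newInput = imageInputLayer([227 227 5], 'Name', 'data', ...
    'Normalization', 'zerocenter');

% New first convolution layer with the same geometry as AlexNet's conv1
% but 5 input channels; its weights are freshly initialized.
newConv1 = convolution2dLayer(11, 96, 'Stride', 4, 'Name', 'conv1');

lgraph = replaceLayer(lgraph, 'data', newInput);
lgraph = replaceLayer(lgraph, 'conv1', newConv1);

% Check the modified architecture before training.
analyzeNetwork(lgraph)
```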
Here are a few suggestions for retraining:
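The freezing suggestion can be sketched roughly as follows (untested; assumes Deep Learning Toolbox). Zeroing the learn-rate factors makes trainNetwork leave those layers' weights and biases unchanged; you would keep nonzero factors on the replaced first convolution layer and on the final layers you want to adapt:

```matlab
% Sketch: freeze all layers that carry learnable weights by zeroing
% their learn-rate factors. `layers` is the layer array of the network
% you intend to retrain.
net = alexnet;
layers = net.Layers;

for i = 1:numel(layers)
    % convolution2dLayer and fullyConnectedLayer both expose these factors.
    if isprop(layers(i), 'WeightLearnRateFactor')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor   = 0;
    end
end
% Pass the modified `layers` array to trainNetwork with your new data.
```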
More Answers (0)