How Can I Modify Weights in a SeriesNetwork of a CNN Model?

66 views (last 30 days)
Reza Akbari
Reza Akbari on 20 Feb 2018
Commented: Ali Al-Saegh on 10 Mar 2021
Hi, everyone. I have run a CNN example from MATLAB R2017b that classifies images of the digits 0–9; the code of this example is shown below:
% Load the data and split it into training and test sets
digitDatasetPath = fullfile(matlabroot,'toolbox','nnet','nndemos', ...
    'nndatasets','DigitDataset');
digitData = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
trainingNumFiles = 750;
rng(1) % For reproducibility
[trainDigitData,testDigitData] = splitEachLabel(digitData, ...
    trainingNumFiles,'randomize');

% Define the layers
layers = [imageInputLayer([28 28 1]);
    convolution2dLayer(5,20);
    reluLayer();
    maxPooling2dLayer(2,'Stride',2);
    fullyConnectedLayer(10);
    softmaxLayer();
    classificationLayer()];

% Train the convnet
options = trainingOptions('sgdm','MaxEpochs',20, ...
    'InitialLearnRate',0.0001);
convnet = trainNetwork(trainDigitData,layers,options);
Now I want to modify the weights of this network. For example, I would like to multiply the weights of the first convolutional layer by 0.5, but I receive this error:
"You cannot set the read-only property 'Layers' of SeriesNetwork"
Is there any solution for this problem?
Thank you for advising me on this topic.
Reza.

Answers (4)

Peter Gadfort
Peter Gadfort on 8 Jun 2018
If you save the network to a struct, you can edit it there and then load it back into the network (this works in R2018a; I have not tested any other version yet):
tmp_net = convnet.saveobj;
tmp_net.Layers(2).Weights = 0.5 * tmp_net.Layers(2).Weights;
convnet = convnet.loadobj(tmp_net);
classify(convnet, testDigitData);
This will allow you to edit the Weights and Biases without retraining the network.
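As a quick sanity check (a sketch, assuming the `convnet` from the question above), you can confirm the edit actually halved the weights by comparing them before and after:

```matlab
% Sketch: verify the saveobj/loadobj edit halved the first conv layer's weights
W_before = convnet.Layers(2).Weights;        % original conv weights (read-only access is fine)

tmp_net = convnet.saveobj;                   % struct with writable Layers
tmp_net.Layers(2).Weights = 0.5 * tmp_net.Layers(2).Weights;
convnet = convnet.loadobj(tmp_net);

W_after = convnet.Layers(2).Weights;
disp(max(abs(W_after(:) - 0.5*W_before(:)))) % should display 0
```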
  1 comment
EREZ MANOR
EREZ MANOR on 4 Jul 2018
This "saveobj" option works and seems to be the best solution. Thank you.



Carlo Tomasi
Carlo Tomasi on 21 Feb 2018
While the Layers property of a SeriesNetwork is read-only, the weights and biases of an nnet.cnn.layer.Layer can be set at will. Thus, one way to address your problem is to copy all the weights from convnet.Layers to layers, modify the weights of layers as you wish, and then make a new convnet with trainNetwork. For instance:
for l = 1:length(layers)
    if isprop(layers(l), 'Weights')   % Does layer l have weights?
        layers(l).Weights = convnet.Layers(l).Weights;
    end
    if isprop(layers(l), 'Bias')      % Does layer l have biases?
        layers(l).Bias = convnet.Layers(l).Bias;
    end
end
layers(2).Weights = 0.5 * layers(2).Weights;           % Modify the weights in the second layer
convnet = trainNetwork(trainDigitData,layers,options); % Make a new network and keep training
Of course, this code assumes that the structures of layers and convnet.Layers match. I hope this helps.
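For newer releases (R2018b and later), assembleNetwork offers a way to rebuild the network from edited layers without calling trainNetwork at all. A sketch, assuming the `convnet` from the question:

```matlab
% Sketch (R2018b+): edit the weights and reassemble, no retraining needed
lgraph = layerGraph(convnet.Layers);       % writable copy of the trained layers

newConv = lgraph.Layers(2);                % first convolutional layer
newConv.Weights = 0.5 * newConv.Weights;   % halve its weights
lgraph = replaceLayer(lgraph, newConv.Name, newConv);

convnet2 = assembleNetwork(lgraph);        % ready for classify/predict
```

Because the layers come from an already-trained network, the classification layer's classes are populated, which assembleNetwork requires.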
  4 comments
Kirill Korotaev
Kirill Korotaev on 25 Apr 2018
Hello Jan, I am now trying to implement a "node knock-out" treatment. Should I do anything else besides simply setting the bias to 0? I see no effect on classification accuracy even when I set every bias in the first convolutional layer to 0, so I am probably doing something wrong.
Theron FARRELL
Theron FARRELL on 3 Dec 2019
I think Kirill may need to set the WeightLearnRateFactor and BiasLearnRateFactor of the corresponding layers to 0 so that the parameters will not be updated during training.
As for Jan Jaap van Assen's issue, I suppose you could try a workaround: train up to the epoch at which you wish to manipulate those parameters, save and reload your network at that point, and specify the parameters manually, for example:
% W and b can be user-specified
layer = convolution2dLayer(filterSize,numFilters, ...
    'Weights',W, ...
    'Bias',b)
then start training again...
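Putting the two suggestions above together, a sketch of a convolutional layer with user-specified, frozen parameters (here `filterSize`, `numFilters`, `W`, and `b` are assumed to be defined by you, with `W` sized filterSize-by-filterSize-by-numChannels-by-numFilters and `b` sized 1-by-1-by-numFilters):

```matlab
% Sketch: conv layer whose parameters are set manually and never updated
layer = convolution2dLayer(filterSize, numFilters, ...
    'Weights', W, ...
    'Bias', b, ...
    'WeightLearnRateFactor', 0, ...   % freeze W during training
    'BiasLearnRateFactor', 0);        % freeze b during training
```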



Jan Jaap van Assen
Jan Jaap van Assen on 23 Apr 2018
I was a bit annoyed by this very basic restriction that we can't manually set weights. Using Carlo Tomasi's answer as inspiration, I wrote a function that lets you replace the weights of layers in a DAGNetwork. Sadly, it still has to call trainNetwork, which massively slows down the process. It changes only the weights you specify; all the other weights stay unchanged in the new net object.
For example, to set the weights of layer 5 all to zero:
newNet = replaceWeights(oldNet,5,zeros(size(oldNet.Layers(5).Weights)));
See the code below; I have also attached the function file. I hope somebody can suggest a faster solution.
function newNet = replaceWeights(oldNet,layerID,newWeights)
%REPLACEWEIGHTS Replace layer weights of a DAGNetwork
%   newNet = replaceWeights(oldNet,layerID,newWeights)
%   oldNet     = the DAGNetwork in which you want to replace weights.
%   layerID    = the number of the layer whose weights you want to replace.
%   newWeights = the matrix with the replacement weights. This should have
%                the original weights' size.

% Split up layers and connections
oldLgraph   = layerGraph(oldNet);
layers      = oldLgraph.Layers;
connections = oldLgraph.Connections;

% Set the new weights
layers(layerID).Weights = newWeights;

% Freeze all learnable parameters, from the MATLAB transfer learning example
for ii = 1:size(layers,1)
    props = properties(layers(ii));
    for p = 1:numel(props)
        propName = props{p};
        if ~isempty(regexp(propName, 'LearnRateFactor$', 'once'))
            layers(ii).(propName) = 0;
        end
    end
end

% Build a new lgraph, from the MATLAB transfer learning example
newLgraph = layerGraph();
for i = 1:numel(layers)
    newLgraph = addLayers(newLgraph,layers(i));
end
for c = 1:size(connections,1)
    newLgraph = connectLayers(newLgraph,connections.Source{c},connections.Destination{c});
end

% Very basic options; one epoch with everything frozen leaves the weights unchanged
options = trainingOptions('sgdm','MaxEpochs',1);

% Note that you might need to change the label here depending on your
% network; in my case '1' is a valid label.
newNet = trainNetwork(zeros(oldNet.Layers(1).InputSize),1,newLgraph,options);
end

Hakan
Hakan on 1 Dec 2019
I would like to ask another question, which I think is related to this subject. I would like to apply int8 quantization to the weights. I read this article:
I used Carlo Tomasi's method (copy all the weights from convnet.Layers to layers, then modify the weights of layers as you wish) together with the fi function to perform the 8-bit quantization, and then tried to make the change as follows:
layers(2).Weights(2) = fi(layers(2).Weights(2),1,8);
I tried just one weight first, to see whether it changes from single to int8. The simple code above runs with no error, but the weight type is still single. Is there any way to make the weight int8?
  3 comments
David Haas
David Haas on 9 Mar 2021
Hello Hakan,
The int8 quantization technical article you linked to states the following:
However, the execution time on the discovery board shows that the single-precision variant takes an average of 14.5 milliseconds (around 69 fps) to run while the scaled version is a little slower and takes an average of 19.8 milliseconds (around 50 fps). This might be because of the overhead of the casts to single precision, as we are still doing the computations in single precision (Figure 7).
So basically, it is an 8-bit-accurate number that is cast back into single precision.
It is not clear to me why you need a true 8-bit integer for evaluating the quantization, as long as the values have equivalent 8-bit accuracy for evaluation.
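That cast-back also explains the observation in the question: assigning a fi value into an array of singles converts it back to single automatically. A sketch of the quantize-then-dequantize step (assumes Fixed-Point Designer for fi, and the `layers` array from Carlo Tomasi's answer):

```matlab
% Sketch: values become 8-bit accurate, storage stays single precision
w  = layers(2).Weights;          % class(w) is 'single'
wq = fi(w, 1, 8);                % signed, 8-bit word length, best-precision scaling
layers(2).Weights = single(wq);  % explicit cast back to single for the layer
% class(layers(2).Weights) is still 'single'; the values are now quantized
```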
Ali Al-Saegh
Ali Al-Saegh on 10 Mar 2021
Hello David,
The purpose of the int8 quantization method is to reduce the memory requirement of the neural network, by up to 75% of its original size. However, the network obtained after quantization shows the same size as the original one. If the memory size is not reduced and more time is required for inference, then what is the benefit of quantization?
I really appreciate your help in giving clarification for that.

