How to do multiple-output regression
I want to know how to write a custom training loop for regression with multiple outputs.
I would like a simple code example of custom multiple-output regression.
Every example I find uses a CNN, but I just want a DNN. Thank you for reading my question :)
0 comments
Answers (1)
Raynier Suresh
17 Feb 2021
The code below gives an example of how to create and train a custom network with multiple regression outputs.
%% Create the network with multiple outputs
layers = [imageInputLayer([28 28 1],'Normalization','none','Name','in')
          fullyConnectedLayer(1,'Name','fc1')];
lgraph = layerGraph(layers);
lgraph = addLayers(lgraph,fullyConnectedLayer(1,'Name','fc2'));
lgraph = connectLayers(lgraph,'in','fc2');
figure
plot(lgraph)
dlnet = dlnetwork(lgraph);
%% Training Data
XTrain = rand(28,28,1,50);  % Input data (50 images of size 28x28x1)
YTrain1 = randi(10,50,1);   % Regression targets for output 1
YTrain2 = randi(10,50,1);   % Regression targets for output 2
dsXTrain = arrayDatastore(XTrain,'IterationDimension',4);
dsYTrain1 = arrayDatastore(YTrain1);
dsYTrain2 = arrayDatastore(YTrain2);
dsTrain = combine(dsXTrain,dsYTrain1,dsYTrain2);
%% Train the Network
numEpochs = 3;
miniBatchSize = 128;
plots = "training-progress";
mbq = minibatchqueue(dsTrain, ...
    'MiniBatchSize',miniBatchSize, ...
    'MiniBatchFcn',@preprocessData, ...
    'MiniBatchFormat',{'SSCB','',''});
if plots == "training-progress"
    figure
    lineLossTrain = animatedline('Color',[0.85 0.325 0.098]);
    ylim([0 inf])
    xlabel("Iteration"); ylabel("Loss"); grid on
end
trailingAvg = [];
trailingAvgSq = [];
iteration = 0;
start = tic;
% Loop over epochs.
for epoch = 1:numEpochs
    % Shuffle data.
    shuffle(mbq)
    % Loop over mini-batches.
    while hasdata(mbq)
        iteration = iteration + 1;
        [dlX,dlY1,dlY2] = next(mbq);
        % Evaluate the model gradients, state, and loss using dlfeval and
        % the modelGradients function.
        [gradients,state,loss] = dlfeval(@modelGradients,dlnet,dlX,dlY1,dlY2);
        dlnet.State = state;
        % Update the network parameters using the Adam optimizer.
        [dlnet,trailingAvg,trailingAvgSq] = adamupdate(dlnet,gradients, ...
            trailingAvg,trailingAvgSq,iteration);
        % Display the training progress.
        if plots == "training-progress"
            D = duration(0,0,toc(start),'Format','hh:mm:ss');
            addpoints(lineLossTrain,iteration,double(gather(extractdata(loss))))
            title("Epoch: " + epoch + ", Elapsed: " + string(D))
            drawnow
        end
    end
end
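%% Predict with the trained network (a minimal usage sketch; the test
% images here are random stand-ins, not part of the original answer)
dlXTest = dlarray(rand(28,28,1,10),'SSCB');
[dlPred1,dlPred2] = predict(dlnet,dlXTest,'Outputs',["fc1" "fc2"]);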
%% Functions needed to train the network
function [gradients,state,loss] = modelGradients(dlnet,dlX,T1,T2)
% Forward pass through both output branches, then compute a weighted sum
% of the per-output MSE losses and the gradients of the learnables.
[dlY1,dlY2,state] = forward(dlnet,dlX,'Outputs',["fc1" "fc2"]);
lossT1 = mse(dlY1,T1);
lossT2 = mse(dlY2,T2);
loss = 0.1*lossT1 + 0.1*lossT2;
gradients = dlgradient(loss,dlnet.Learnables);
end
function [X,Y1,Y2] = preprocessData(XCell,Y1Cell,Y2Cell)
% Concatenate the images along the 4th (batch) dimension and the targets
% along the 2nd dimension.
X = cat(4,XCell{:});
Y1 = cat(2,Y1Cell{:});
Y2 = cat(2,Y2Cell{:});
end
For more information, you can refer to the links below.
3 comments
Cheng Qiu
28 Sep 2021
Edited: KSSV, 21 Sep 2022
% (Nhide and outputSize are defined elsewhere in this script.)
layers = [
imageInputLayer([32 1],'Name','input','Normalization','none')
fullyConnectedLayer(Nhide,"Name","FC1")
reluLayer("Name","Relu1")
fullyConnectedLayer(Nhide,"Name","FC2")
dropoutLayer(0.5,"Name","DO")
fullyConnectedLayer(outputSize,"Name","FC3")
reluLayer('Name','Relu2')];
lgraph = layerGraph(layers);
dlnet = dlnetwork(lgraph);
% Training options
numEpochs = 1e3;
miniBatchSize = 32;
initialLearnRate = 0.001;
decay = 0.01;
momentum = 0.9;
plots = "training-progress";
executionEnvironment = "auto";
if plots == "training-progress"
    figure
    lineLossTrain = animatedline('Color',[0.85 0.325 0.098]);
    ylim([0 inf])
    xlabel("Iteration")
    ylabel("Loss")
    grid on
end
%% Training The Network
numObservations = numel(output);
numIterationsPerEpoch = floor(numObservations./miniBatchSize);
iteration = 0;
start = tic;
% Loop over epochs.
for epoch = 1:numEpochs
    % Shuffle data.
    idx = randperm(numel(OutPower(:,1)));
    input = input(:,:,:,idx);
    output = output(:,idx);
    % Loop over mini-batches.
    for i = 1:numIterationsPerEpoch
        iteration = iteration + 1;
        % Read a mini-batch of data.
        idx = (i-1)*miniBatchSize+1:i*miniBatchSize;
        X = input(:,:,:,idx);
        Y1 = output(1,idx);
        Y2 = output(2,idx);
        Y3 = output(3,idx);
        Y4 = output(4,idx);
        % Convert the mini-batch of data to dlarray.
        dlX = dlarray(X,'SSCB');
        dlY1 = dlarray(Y1,'SB');
        dlY2 = dlarray(Y2,'SB');
        dlY3 = dlarray(Y3,'SB');
        dlY4 = dlarray(Y4,'SB');
        % If training on a GPU, then convert the data to gpuArray.
        if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
            dlX = gpuArray(dlX);
        end
        % Evaluate the model gradients, state, and loss using dlfeval and
        % the modelGradients function, and update the network state.
        [gradients,state,loss] = dlfeval(@modelGradients,dlnet,dlX,dlY1,dlY2,dlY3,dlY4);
        dlnet.State = state;
        % (The parameter-update step was not included in the original
        % comment; one possible completion is sketched after the
        % modelGradients function below.)
    end
end
function [gradients,state,loss] = modelGradients(dlnet,dlX,Y1,Y2,Y3,Y4)
[dlYPred,state] = forward(dlnet,dlX);
loss = sqrt((dlYPred(1)-Y1).^2 + (dlYPred(2)-Y2).^2 + (dlYPred(3)-Y3).^2 + (dlYPred(4)-Y4).^2)/2;
gradients = dlgradient(loss,dlnet.Learnables);
end
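The snippet defines initialLearnRate, decay, and momentum but stops before the parameter update. A possible completion of the mini-batch loop, assuming SGDM with time-based learn-rate decay as in the MathWorks custom-training-loop examples (velocity would be initialized to [] before the epoch loop):
% Inside the mini-batch loop, after dlnet.State = state:
learnRate = initialLearnRate/(1 + decay*iteration);  % time-based decay
[dlnet,velocity] = sgdmupdate(dlnet,gradients,velocity,learnRate,momentum);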
Nabil Farah
19 Sep 2024
This works for MIMO in image recognition, as in the example above. But when I try it on other data, it does not work:
XTrain = rand(20001,3);  % Input data (size 20001x3 double)
TTrain = rand(20001,2);  % Target data (size 20001x2 double)
%% Create the network with multiple outputs
layers = [featureInputLayer(3,Name="input")
          fullyConnectedLayer(32)
          reluLayer
          fullyConnectedLayer(2)];
lgraph = layerGraph(layers);
figure
plot(lgraph)
dlnet = dlnetwork(lgraph);
%% Training options
options = trainingOptions("adam", ...
    Plots="training-progress", ...
    Verbose=false);
%% Loss function
lossFcn = @(Y1,Y2,T1,T2) crossentropy(Y1,T1) + 0.1*mse(Y2,T2);
% Train the neural network.
net = trainnet(dsTrain,net,lossFcn,options);
I always get this error:
Error calling function during training.
Error in untitled4 (line 26)
net = trainnet(dsTrain,layers,lossFcn,options);
Caused by:
Not enough input arguments.
Error in untitled4>@(Y1,Y2,T1,T2)crossentropy(Y1,T1)+0.1*mse(Y2,T2) (line 24)
lossFcn = @(Y1,Y2,T1,T2) crossentropy(Y1,T1) + 0.1*mse(Y2,T2);
Error in nnet.internal.cnn.util.UserCodeException.fevalUserCode (line 11)
[varargout{1:nargout}] = feval(F, varargin{:});
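The error happens because trainnet calls the loss function with one argument per network output followed by one argument per target. The network above has a single output layer, so a four-argument lossFcn never receives all of its inputs, which raises "Not enough input arguments." Since the final fullyConnectedLayer(2) can regress both targets jointly, one possible fix is a two-argument loss; this is a sketch under that assumption (dsTrain is rebuilt here from the XTrain/TTrain arrays above, and mse replaces crossentropy, which is meant for classification):
dsX = arrayDatastore(XTrain);   % one 1x3 feature row per read
dsT = arrayDatastore(TTrain);   % one 1x2 target row per read
dsTrain = combine(dsX,dsT);
lossFcn = @(Y,T) mse(Y,T);      % one output, one target -> two arguments
net = dlnetwork(layers);        % layerGraph is not required here
net = trainnet(dsTrain,net,lossFcn,options);
Equivalently, pass the built-in "mse" loss name to trainnet instead of a function handle.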