Error using trainNetwork (line 184) Conversion to single from struct is not possible.

I am trying to train ResNet-50 on a signal dataset. I have a database with 10 folders (each folder has 12 subfolders). Each file is a .mat file with dimensions 656x875x2. When I run ResNet on this data I get the error above. Can someone help me out?
location = 'D:\data-11\sir task\New folder\';
imds = imageDatastore(location, 'FileExtensions', '.mat', 'IncludeSubfolders',0, ...
    'LabelSource','foldernames', ...
    'ReadFcn',@matReader);
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
net = lgraph_1;
inputSize = lgraph_1.Layers(1).InputSize;
[learnableLayer,classLayer] = findLayersToReplace(lgraph_1);
[learnableLayer,classLayer]
numClasses = numel(categories(imdsTrain.Labels));
if isa(learnableLayer,'nnet.cnn.layer.FullyConnectedLayer')
    newLearnableLayer = fullyConnectedLayer(numClasses, ...
        'Name','new_fc', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
elseif isa(learnableLayer,'nnet.cnn.layer.Convolution2DLayer')
    newLearnableLayer = convolution2dLayer(1,numClasses, ...
        'Name','new_conv', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
end
lgraph_1 = replaceLayer(lgraph_1,learnableLayer.Name,newLearnableLayer);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraph_1 = replaceLayer(lgraph_1,classLayer.Name,newClassLayer);
miniBatchSize = 128;
valFrequency = floor(numel(imdsTrain.Files)/miniBatchSize);
checkpointPath = pwd;
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',100, ...
    'InitialLearnRate',1e-3, ...
    'Shuffle','every-epoch', ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',valFrequency, ...
    'Verbose',false, ...
    'Plots','training-progress', ...
    'CheckpointPath',checkpointPath);
net = trainNetwork(imdsTrain,lgraph_1,options);

9 comments

What is the class of imdsTrain? We cannot check, as we do not have the data.
class(imdsTrain)
ans =
'matlab.io.datastore.ImageDatastore'
I have also followed the link below but am still facing the issue.
I suspect that it is trying to interpret your options structure as if it were some other kind of data.
What version are you using?
Please show your code for matReader()
sure
function S = matReader(filename)
S = load(filename);
end


Accepted Answer

When you load() a .mat file and assign the value to a variable, what you get back is a struct with one field for each variable in the file. You need to examine fieldnames(S) and decide which variable to extract from the struct. (The task is of course easier if all of the files contain the same variable name.)
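As a minimal sketch of that fix (the function name and single-variable assumption are mine, not from the thread): a read function that extracts the array from the struct instead of returning the struct itself.

```matlab
function data = matReader(filename)
% Load a .mat file and return the image array itself, not the struct.
% Assumes each file contains exactly one saved variable; check
% fieldnames(load(filename)) for your own files.
S = load(filename);   % S is a struct with one field per saved variable
fn = fieldnames(S);
assert(isscalar(fn), 'Expected exactly one variable in %s', filename);
data = S.(fn{1});     % extract the array stored in the file
end
```

With one variable per file, extracting the only field avoids hard-coding the variable name.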

15 comments

Yes, you are right. I have the following code, which creates the dataset, but I don't know how to avoid saving it as a struct.
clc
clear all
for i = 100:5000
    x = imread(['D:\data-11\sir task\4ASKi\snr30i4ASK' num2str(i) '.png']);
    y = imread(['D:\data-11\sir task\4ASKq\snr30q4ASK' num2str(i) '.png']);
    x = rgb2gray(x);
    y = rgb2gray(y);
    c(:,:,1) = x;
    c(:,:,2) = y;
    %c = cell2mat(struct2cell(c));
    p = 'D:\data-11\sir task\4ASK';
    cd(p)
    save(['snr304ASKiq' num2str(i) '.mat'], 'c');
    cd ..
    close
end
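As a side note on the loop above, a sketch of the save step that avoids the `cd`/`cd ..` round trip by building the full path with `fullfile` (the folder is the one from the loop; adjust to your layout). The struct on load is unavoidable either way, since `save` always stores named variables; the fix belongs in the read function, not here.

```matlab
% Write each sample directly to its destination folder, without changing
% the current directory.
outDir = 'D:\data-11\sir task\4ASK';
save(fullfile(outDir, ['snr304ASKiq' num2str(i) '.mat']), 'c');
```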
Can you please tell me, Walter?
Thanks a lot, Walter, it worked. Keep helping!
Right now I am facing another issue. Can you please help me?
location = 'D:\data-11\sir task\New folder\';
imds = imageDatastore(location, 'FileExtensions', '.mat', 'IncludeSubfolders',1, ...
    'LabelSource','foldernames', ...
    'ReadFcn',@matReader);
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
net = lgraph_1;
inputSize = lgraph_1.Layers(1).InputSize;
[learnableLayer,classLayer] = findLayersToReplace(lgraph_1);
[learnableLayer,classLayer]
numClasses = numel(categories(imdsTrain.Labels));
if isa(learnableLayer,'nnet.cnn.layer.FullyConnectedLayer')
    newLearnableLayer = fullyConnectedLayer(numClasses, ...
        'Name','new_fc', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
elseif isa(learnableLayer,'nnet.cnn.layer.Convolution2DLayer')
    newLearnableLayer = convolution2dLayer(1,numClasses, ...
        'Name','new_conv', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
end
lgraph_1 = replaceLayer(lgraph_1,learnableLayer.Name,newLearnableLayer);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraph_1 = replaceLayer(lgraph_1,classLayer.Name,newClassLayer);
miniBatchSize = 2;
valFrequency = floor(numel(imdsTrain.Files)/miniBatchSize);
checkpointPath = pwd;
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',100, ...
    'InitialLearnRate',1e-3, ...
    'Shuffle','every-epoch', ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',valFrequency, ...
    'Verbose',false, ...
    'Plots','training-progress', ...
    'CheckpointPath',checkpointPath);
net = trainNetwork(imdsTrain,lgraph_1,options);
Error:
Error using trainNetwork (line 184)
This is the error: "Maximum variable size allowed by the program is exceeded".
Start by reducing the MiniBatchSize to 1. I am not sure if that will be enough... I cannot tell how large your files are, but you do have 4901 of them. At the moment I do not know how to estimate the memory requirements.
Training starts at MiniBatchSize 2, but it will take a long time to train. I have a 1080 Ti GPU with 11 GB of memory, but I am still getting the memory error.
This is the error thrown when MATLAB is asked to create an array with more than 2^31 elements. Probably there are some unfeasibly large activations in the middle of the network. It might be interesting to get a look at the call stack. Try
getReport(MException.last.UnderlyingCause)
after the error is thrown.
This is the error thrown when MATLAB is asked to create an array with more than 2^31 elements.
A = ones(1, 2^32);
Error using ones
Requested 1x4294967296 (32.0GB) array exceeds maximum array size preference (31.0GB). This might cause MATLAB to become unresponsive.
No, the cause is not quite that.
A = ones(1, 2^48, 'uint8');
Error using ones
Requested array exceeds the maximum possible variable size.
And not quite that either. (2^48-1 is the maximum possible for x64 architecture)
If you turn off "Limit array size to a fraction of memory" in preferences then
>> A = ones(1, 2^48-1, 'uint8');
Out of memory.
but 2^48 still gives maximum variable size error.
I went back and tested in R2021a as the poster is using that, but the messages are the same.
If I recall correctly, the wording was different in some of the older versions, so the "Maximum variable size allowed by the program is exceeded" might be for an older version.
@john karli is it possible that you are using a non-English version and translated the message?
The array size limit applies to gpuArrays rather than CPU arrays.
>> X = ones(2^31,1,'uint8','gpuArray');
Error using ones
Maximum variable size allowed on the device is exceeded.
>> X = ones(2^31-1,1,'uint8','gpuArray');
>>
I don't know why the user's error message says 'program' rather than 'device' hence wanting to look at the call stack. It's possible, as you say, that the translation back into English from source was wrong.
I have 11 classes, and one class contains 5000 .mat-file samples; each .mat file has dimensions 656x875x2. How do I avoid the "Maximum variable size allowed by the program is exceeded" error?
@Joss Knight I also tried
>> getReport(MException.last.UnderlyingCause)
ans =
'Error using nnet.internal.cnngpu.convolveForward2D
Maximum variable size allowed on the device is exceeded.
Error in nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 35)
Z = nnet.internal.cnngpu.convolveForward2D( ...
Error in nnet.internal.cnn.layer.Convolution2D/doForward (line 503)
Z = this.ExecutionStrategy.forward(X, weights, bias, ...
Error in nnet.internal.cnn.layer.Convolution2D/forwardNormal (line 428)
Z = this.doForward(X,this.Weights.Value,this.Bias.Value);
Error in nnet.internal.cnn.layer.Convolution2D/forward (line 209)
Z = this.forwardNormal( X );
Error in nnet.internal.cnn.DAGNetwork>@()this.Layers{i}.forward(XForThisLayer) (line 365)
@() this.Layers{i}.forward( XForThisLayer ), ...
Error in nnet.internal.cnn.util.executeWithStagedGPUOOMRecovery (line 11)
[ varargout{1:nOutputs} ] = computeFun();
Error in nnet.internal.cnn.DAGNetwork>iExecuteWithStagedGPUOOMRecovery (line 1565)
[varargout{1:nargout}] = nnet.internal.cnn.util.executeWithStagedGPUOOMRecovery(varargin{:});
Error in nnet.internal.cnn.DAGNetwork/forwardPropagationWithMemory (line 364)
[outputActivations, memory] = iExecuteWithStagedGPUOOMRecovery( ...
Error in nnet.internal.cnn.DAGNetwork/computeGradientsForTraining (line 717)
this.forwardPropagationWithMemory( X, ...
Error in nnet.internal.cnn.Trainer/computeGradients (line 203)
[gradients, predictions, states] = net.computeGradientsForTraining(X, Y, propagateState);
Error in nnet.internal.cnn.Trainer/train (line 122)
[gradients, predictions, states] = this.computeGradients(net, X, response, propagateState);
Error in nnet.internal.cnn.trainNetwork.doTrainNetwork (line 112)
trainedNet = trainer.train(trainedNet, trainingDispatcher);
Error in trainNetwork (line 182)
[trainedNet, info] = nnet.internal.cnn.trainNetwork.doTrainNetwork(factory,varargin{:});
Error in LiveEditorEvaluationHelperE287706930 (line 41)
net = trainNetwork(imdsTrain,lgraph_1,options);'
bytes_required = (5000 * 0.7) * 656*875*2 * 8
bytes_required = 3.2144e+10
gigabytes_required = bytes_required / 2^30
gigabytes_required = 29.9364
You are almost certainly running out of memory.
Should I resize the 656x875x2 data to 256x256x2?
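Resizing inside the read function is one way to try that. A sketch, where the 256x256 target and the variable name `c` are assumptions (match the size to your network's imageInputLayer; `c` follows the dataset-creation code earlier in the thread):

```matlab
function data = matReaderResized(filename)
% Hypothetical variant of matReader that downsamples each sample on read.
% 256x256 is an assumed target size; it must match the network's input layer.
S = load(filename);
c = S.c;                         % 656x875x2, per the files saved above
data = imresize(c, [256 256]);   % imresize resizes each of the 2 channels
end
```

Note that the imageInputLayer of lgraph_1 must be replaced to match the new size, and per-sample memory drops by roughly (656*875)/(256*256), about a factor of 8.8.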

