I'm getting an error in "tall/cellfun" while working through the MathWorks example "Denoise Speech Using Deep Learning Networks".
I don't know what happened. I see that many people in the community hit errors because they did not add the 'HelperGenerateSpeechDenoisingFeatures' file, but I still get an error even though I added it.
My code comes from https://www.mathworks.com/help/deeplearning/ug/denoise-speech-using-deep-learning-networks.html. Almost no changes have been made.
[noise_chaiYouJi,Fs]=audioread("noise.mp3");
datafolder = "dataset";
%ads0 = audioDatastore(fullfile(datafolder,"clips"));
ads0 = audioDatastore(fullfile(datafolder,"clips"),IncludeSubfolders=true);
%-------------------------------
windowLength = 256;
win = hamming(windowLength,"periodic");
overlap = round(0.75*windowLength);
fftLength = windowLength;
inputFs = 48e3;
fs = 8e3;
numFeatures = fftLength/2 + 1;
numSegments = 8;
%-------------------------------
%src = dsp.SampleRateConverter("InputSampleRate",44100,"OutputSampleRate",8000, "Bandwidth",7920);
src = dsp.SampleRateConverter(InputSampleRate=inputFs,OutputSampleRate=fs,Bandwidth=7920);
%-------------------------------
reset(ads0)
numSamples = numel(ads0.Files)
trainsamples=600;
ads0=subset(ads0,1:trainsamples);
T = tall(ads0)
[targets,predictors] = cellfun(@(x)HelperGenerateSpeechDenoisingFeatures(x,noise_chaiYouJi,src),T,UniformOutput=false);
[targets,predictors] = gather(targets,predictors);
predictors = cat(3,predictors{:});
noisyMean = mean(predictors(:));
noisyStd = std(predictors(:));
predictors(:) = (predictors(:) - noisyMean)/noisyStd;
targets = cat(2,targets{:});
cleanMean = mean(targets(:));
cleanStd = std(targets(:));
targets(:) = (targets(:) - cleanMean)/cleanStd;
predictors = reshape(predictors,size(predictors,1),size(predictors,2),1,size(predictors,3));
targets = reshape(targets,1,1,size(targets,1),size(targets,2));
inds = randperm(size(predictors,4));
L = round(0.99*size(predictors,4));
trainPredictors = predictors(:,:,:,inds(1:L));
trainTargets = targets(:,:,:,inds(1:L));
validatePredictors = predictors(:,:,:,inds(L+1:end));
validateTargets = targets(:,:,:,inds(L+1:end));
layers = [
    imageInputLayer([numFeatures,numSegments])
    convolution2dLayer([9 8],18,Stride=[1 100],Padding="same")
    batchNormalizationLayer
    reluLayer
    repmat( ...
        [convolution2dLayer([5 1],30,Stride=[1 100],Padding="same")
        batchNormalizationLayer
        reluLayer
        convolution2dLayer([9 1],8,Stride=[1 100],Padding="same")
        batchNormalizationLayer
        reluLayer
        convolution2dLayer([9 1],18,Stride=[1 100],Padding="same")
        batchNormalizationLayer
        reluLayer],4,1)
    convolution2dLayer([5 1],30,Stride=[1 100],Padding="same")
    batchNormalizationLayer
    reluLayer
    convolution2dLayer([9 1],8,Stride=[1 100],Padding="same")
    batchNormalizationLayer
    reluLayer
    convolution2dLayer([129 1],1,Stride=[1 100],Padding="same")
    ];
miniBatchSize = 128; % missing from the pasted code; 128 is the value used in the MathWorks example
options = trainingOptions("adam", ...
    MaxEpochs=3, ...
    InitialLearnRate=1e-5, ...
    MiniBatchSize=miniBatchSize, ...
    Shuffle="every-epoch", ...
    Plots="training-progress", ...
    Verbose=false, ...
    ValidationFrequency=floor(size(trainPredictors,4)/miniBatchSize), ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropFactor=0.9, ...
    LearnRateDropPeriod=1, ...
    ValidationData={validatePredictors,permute(validateTargets,[3 1 2 4])});
denoiseNetFullyConvolutional = trainnet(trainPredictors,permute(trainTargets,[3 1 2 4]),layers,"mse",options);
filename = 'denoiseNet.mat';
save(filename, 'denoiseNetFullyConvolutional');
summary(denoiseNetFullyConvolutional)
%-------------------------------
Answers (1)
Govind KM
on 4 Dec 2024 at 9:32
From what I understand using Google Translate, the error message in the provided image when using tall/gather seems to be "Insufficient memory".
One of the differences between tall arrays and in-memory arrays in MATLAB is that tall arrays typically remain unevaluated until the user requests the calculations to be performed, enabling working with large data sets quickly without waiting for command execution. The gather function is used to evaluate the queued operations on a tall array, returning the result as an in-memory array. Since gather returns results as in-memory MATLAB arrays, standard memory considerations apply.
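As a minimal sketch of this deferred evaluation (using the airlinesmall.csv sample file that ships with MATLAB as a stand-in for a large dataset):

```matlab
% Sketch: tall array operations are queued, not executed immediately.
% airlinesmall.csv ships with MATLAB; any large tabular file would do.
ds = tabularTextDatastore("airlinesmall.csv","TreatAsMissing","NA");
t = tall(ds);                     % no data is read yet
m = mean(t.ArrDelay,"omitnan");   % operation is queued, not evaluated
m = gather(m);                    % evaluation happens here; the result
                                  % must fit in memory
```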
A possible reason for the mentioned error could be that the dataset being used in your code is too large, causing MATLAB to run out of memory for the output from gather. To check whether the result can fit in memory, you can use
gather(head(X))
%or
gather(tail(X))
to perform the full calculation, but bring only the first or last few rows of the result into memory.
A subset of the dataset can be used to test the same code with a smaller memory requirement. Another possible workaround, mentioned in this related MATLAB Answers post, is to bypass the use of tall:
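For example, one way to keep memory use bounded with your code is to compute and gather the features in smaller chunks rather than over all 600 files at once (a sketch; chunkSize is an assumed value, adjust it to your available memory):

```matlab
% Sketch: gather features chunk by chunk instead of in one large call.
% chunkSize is an assumed value; lower it if memory is still exhausted.
chunkSize = 100;
numFiles = numel(ads0.Files);
targets = {}; predictors = {};
for k = 1:chunkSize:numFiles
    adsChunk = subset(ads0,k:min(k+chunkSize-1,numFiles));
    Tc = tall(adsChunk);
    [tc,pc] = cellfun(@(x)HelperGenerateSpeechDenoisingFeatures(x,noise_chaiYouJi,src), ...
        Tc,UniformOutput=false);
    [tc,pc] = gather(tc,pc);       % only one chunk is in memory at a time
    targets = [targets; tc];       %#ok<AGROW>
    predictors = [predictors; pc]; %#ok<AGROW>
end
```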
Hope this is helpful!