Error occurred when using conditional GAN function
MATLAB version: R2023b
I am trying to mimic the official example https://www.mathworks.com/help/signal/ug/generate-synthetic-pump-signals-using-conditional-generative-adversarial-network.html to create my own conditional GAN.
I have prepared the data and labels as "test.mat" and built the conditional GAN structure by referring to the example. The script is listed below:
clear;
%% Load the data
% LSTM_Reform_Data_SeriesData1_20210315_data001_for_GAN;
load('test.mat')
%% Generator Network
numFilters = 4;
numLatentInputs = 100;
projectionSize = [2 1 63];
numClasses = 2;
embeddingDimension = 100;
layersGenerator = [
    imageInputLayer([1 1 numLatentInputs],'Normalization','none','Name','Input_Noise')
    projectAndReshapeLayer(projectionSize,numLatentInputs,'ProjReshape')
    concatenationLayer(3,2,'Name','Concate1')
    transposedConv2dLayer([3 2],8*numFilters,'Stride',1,'Name','TransConv1') % output 4x2x32
    batchNormalizationLayer('Name','BN1','Epsilon',5e-5)
    reluLayer('Name','Relu1')
    transposedConv2dLayer([5 3],4*numFilters,'Stride',1,'Name','TransConv2') % output 8x4x16
    batchNormalizationLayer('Name','BN2','Epsilon',5e-5)
    reluLayer('Name','Relu2')
    transposedConv2dLayer([5 3],2*numFilters,'Stride',1,'Name','TransConv3') % output 12x6x8
    batchNormalizationLayer('Name','BN3','Epsilon',5e-5)
    reluLayer('Name','Relu3')
    transposedConv2dLayer([3 3],numFilters,'Stride',1,'Name','TransConv4') % output 14x8x4
    batchNormalizationLayer('Name','BN4','Epsilon',5e-5)
    reluLayer('Name','Relu4')
    transposedConv2dLayer([1 1],1,'Stride',1,'Name','TransConv5') % output 14x8x1
    ];
lgraphGenerator = layerGraph(layersGenerator);
layers = [
imageInputLayer([1 1],'Name','Input_Label','Normalization','none')
embedAndReshapeLayer(projectionSize(1:2),embeddingDimension,numClasses,'EmbedReshape1')];
lgraphGenerator = addLayers(lgraphGenerator,layers);
lgraphGenerator = connectLayers(lgraphGenerator,'EmbedReshape1','Concate1/in2');
subplot(1,2,1);
plot(lgraphGenerator);
dlnetGenerator = dlnetwork(lgraphGenerator);
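As a sanity check (a sketch, not part of the original script; miniBatch is an arbitrary value, and the input order is assumed to follow dlnetGenerator.InputNames), a quick forward pass can confirm that the generator really produces the 14-by-8-by-1 output the size comments expect:

```matlab
% Sketch: push a small batch of noise and labels through the generator and
% inspect the output size. Per the size comments on the transposed
% convolutions above, the expected spatial size is 14-by-8 with 1 channel.
miniBatch = 8; % arbitrary small batch for the check
dlZ = dlarray(randn([1 1 numLatentInputs miniBatch],'single'),'SSCB'); % noise
dlT = dlarray(single(randi([1 numClasses],[1 1 1 miniBatch])),'SSCB'); % labels
dlXGen = predict(dlnetGenerator,dlZ,dlT); % input order per dlnetGenerator.InputNames
disp(size(dlXGen)) % 14 8 1 8 if the layer sizes line up
```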
%% Discriminator Network
scale = 0.2;
Input_Num_Feature = [14 8 1]; % The input data is [14 8 1]
layersDiscriminator = [
    imageInputLayer(Input_Num_Feature,'Normalization','none','Name','Input_Data')
    concatenationLayer(3,2,'Name','Concate2')
    convolution2dLayer([3 3],8*numFilters,'Stride',1,'Name','Conv1') % output 12x6x32
    leakyReluLayer(scale,'Name','LeakyRelu1')
    convolution2dLayer([3 3],4*numFilters,'Stride',1,'Name','Conv2') % output 10x4x16
    leakyReluLayer(scale,'Name','LeakyRelu2')
    convolution2dLayer([3 3],2*numFilters,'Stride',1,'Name','Conv3') % output 8x2x8
    leakyReluLayer(scale,'Name','LeakyRelu3')
    convolution2dLayer([3 1],numFilters/2,'Stride',1,'Name','Conv4') % output 6x2x2
    leakyReluLayer(scale,'Name','LeakyRelu4')
    convolution2dLayer([3 1],numFilters/2,'Stride',1,'Name','Conv5') % output 4x2x2
    leakyReluLayer(scale,'Name','LeakyRelu5')
    convolution2dLayer([3 2],1,'Name','Conv6') % output 2x1x1
    leakyReluLayer(scale,'Name','LeakyRelu6')
    convolution2dLayer([2 1],1,'Name','Conv7') % output 1x1x1
    ];
lgraphDiscriminator = layerGraph(layersDiscriminator);
layers = [
imageInputLayer([1 1],'Name','Input_Label','Normalization','none')
embedAndReshapeLayer(Input_Num_Feature,embeddingDimension,numClasses,'EmbedReshape2')];
lgraphDiscriminator = addLayers(lgraphDiscriminator,layers);
lgraphDiscriminator = connectLayers(lgraphDiscriminator,'EmbedReshape2','Concate2/in2');
subplot(1,2,2);
plot(lgraphDiscriminator);
dlnetDiscriminator = dlnetwork(lgraphDiscriminator);
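The discriminator can be checked the same way (again a sketch; miniBatch is arbitrary and the input order is assumed to follow dlnetDiscriminator.InputNames) to verify that a 14-by-8-by-1 observation reduces to a single scalar score:

```matlab
% Sketch: a real-sized input plus a label should reduce to a 1-by-1 output
% per observation after the chain of unpadded convolutions above.
miniBatch = 8;
dlX = dlarray(randn([Input_Num_Feature miniBatch],'single'),'SSCB'); % fake "real" data
dlT = dlarray(single(randi([1 numClasses],[1 1 1 miniBatch])),'SSCB'); % labels
dlYPred = predict(dlnetDiscriminator,dlX,dlT); % order per dlnetDiscriminator.InputNames
disp(size(dlYPred)) % 1 1 1 8 given the convolution sizes above
```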
%% Train model
params.numLatentInputs = numLatentInputs;
params.numClasses = numClasses;
params.sizeData = [Input_Num_Feature length(Series_Fused_Label)];
params.numEpochs = 1000;
params.miniBatchSize = 256;
% Specify the options for Adam optimizer
params.learnRate = 0.0002;
params.gradientDecayFactor = 0.5;
params.squaredGradientDecayFactor = 0.999;
executionEnvironment = "cpu";
params.executionEnvironment = executionEnvironment;
trainNow = true;
if trainNow
    % Train the CGAN
    [dlnetGenerator,dlnetDiscriminator] = trainGAN(dlnetGenerator, ...
        dlnetDiscriminator,Series_Fused_Expand_Norm_Input,Series_Fused_Label,params);
else
    % Load a pretrained CGAN instead
    load(fullfile(tempdir,'PumpSignalGAN','GANModel.mat'))
end
However, an error occurred when I tried to run the script. A screenshot of the error message in the command window is attached as "pic1".
I stepped through the script with the debugger and located the error in the chain of function calls shown in "pic2".
Can someone help clarify? It seems that some GAN-related functions are not included in Deep Network Designer as standard modules.
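For context: projectAndReshapeLayer, embedAndReshapeLayer, and trainGAN are, as far as I can tell, supporting files shipped with the linked example rather than built-in toolbox functions, so a path check narrows down whether the error is simply a missing definition:

```matlab
% Sketch: these names come from the example's supporting files, not from a
% toolbox, so MATLAB must be able to find them on the search path.
which projectAndReshapeLayer
which embedAndReshapeLayer
which trainGAN
% If any of these reports "not found", copy the example's supporting files
% into the current folder (or somewhere on the MATLAB path) before running.
```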
