Question about the MATLAB Wasserstein GAN example

Mohammed on 21 May 2024
Edited: Malay Agarwal on 22 May 2024
The original Wasserstein GAN paper suggests removing the activation function (sigmoid) from the critic's last dense layer, so that the output value is not limited to indicating fake or real. The posted example still uses a sigmoid layer, am I right?

Answers (1)

Malay Agarwal on 22 May 2024
Edited: Malay Agarwal on 22 May 2024
The diagram of the Discriminator model in the example (https://www.mathworks.com/help/deeplearning/ug/trainwasserstein-gan-with-gradient-penalty-wgan-gp.html) shows that the model does have a sigmoid layer at the end. This can also be confirmed by looking at how the Discriminator model is defined:
layersD = [
    imageInputLayer(inputSize,Normalization="none")
    convolution2dLayer(filterSize,numFilters,Stride=2,Padding="same")
    leakyReluLayer(scale)
    convolution2dLayer(filterSize,2*numFilters,Stride=2,Padding="same")
    layerNormalizationLayer
    leakyReluLayer(scale)
    convolution2dLayer(filterSize,4*numFilters,Stride=2,Padding="same")
    layerNormalizationLayer
    leakyReluLayer(scale)
    convolution2dLayer(filterSize,8*numFilters,Stride=2,Padding="same")
    layerNormalizationLayer
    leakyReluLayer(scale)
    convolution2dLayer(4,1)
    sigmoidLayer]; % Notice the sigmoid layer at the end
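If you would rather follow the original WGAN formulation, a minimal sketch (assuming the layersD array above; dlYPredReal and dlYPredGenerated are hypothetical names for the critic outputs, not variables from the example) is to drop the trailing sigmoidLayer so the critic outputs an unbounded score, and to compute the critic loss directly from those raw scores:

% Minimal sketch, not part of the shipped example: remove the final
% sigmoidLayer so the critic produces unbounded scores.
layersDCritic = layersD(1:end-1);   % drop the trailing sigmoidLayer
netD = dlnetwork(layersDCritic);    % critic network with a linear output

% The critic loss from the WGAN paper then uses the raw scores directly
% (dlYPredReal and dlYPredGenerated are assumed critic outputs):
% lossD = mean(dlYPredGenerated,"all") - mean(dlYPredReal,"all");

Any loss terms in the example that assume a probability output would need the matching change; the gradient penalty in WGAN-GP is defined on the critic output itself, so that part of the loss keeps the same form.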
Hope this helps!
