Train Generative Adversarial Network (GAN)

Bodo Rosenhahn on 29 Nov 2019
Answered: MEP on 23 Jan 2020
I played around with the GAN example at
and it works without any problems when just running the provided code.
I now modified layersGenerator by adding a fullyConnectedLayer just after the imageInputLayer:
imageInputLayer([1 1 numLatentInputs],'Normalization','none','Name','in')
fullyConnectedLayer(numLatentInputs,'Name','fc')   % <- this line is new, nothing special
transposedConv2dLayer(filterSize,8*numFilters,'Name','tconv1')
...
Even though analyzeNetwork seems to be fine with it (its output also shows a [1 1 100] dimensional activation), learning fails with the error
"Invalid input data. Convolution operation is supported only for 1, 2, or 3 spatial dimensions."
Do you have any ideas or workarounds for how to handle this?

Accepted Answer

Raunak Gupta on 2 Dec 2019
Hi,
The fullyConnectedLayer is generally used at the end of a network to produce a single-dimension output, which is useful in classification/regression problems. As I understand it, the purpose of the fullyConnectedLayer in the above scenario is to map the random input through an additional linear transformation. However, the fullyConnectedLayer does not return the 3-D output that transposedConv2dLayer needs as input, which is why the invalid input data error occurs.
I suggest either defining a custom layer using the custom layer template, or using a convolution2dLayer, which can return the same output as mentioned ([1,1,100]) for this purpose.
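For example, here is a minimal sketch of the convolution2dLayer variant (the tconv1 filter size and number of filters are illustrative placeholders, not the exact values from the example): a 1x1 convolution with numLatentInputs filters acts as a per-location fully connected mapping while keeping the [1 1 numLatentInputs] spatial shape that transposedConv2dLayer expects.
numLatentInputs = 100;
layersGenerator = [
    imageInputLayer([1 1 numLatentInputs],'Normalization','none','Name','in')
    % a 1x1 convolution applies the same linear mapping as a fully connected
    % layer, but the output keeps its spatial dimensions ([1 1 numLatentInputs])
    convolution2dLayer(1,numLatentInputs,'Name','conv_fc')
    transposedConv2dLayer(4,512,'Name','tconv1')   % placeholder filter size / number of filters
    % ... remaining generator layers as in the original GAN example ...
    ];
analyzeNetwork(layersGenerator)   % inspect the activation sizes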

More Answers (3)

Bodo Rosenhahn on 2 Dec 2019
Hi,
many thanks for your fast reply. I partly agree with your comments and can follow them, but unfortunately I am still confused:
"The fullyConnectedLayer is generally used at the end of Network for generating single dimension output useful in Classification/Regression problem. As I understand the need of using fullyConnectedLayer in above scenario is to get a mapping from random input to a more complex linear transformation of input. "
This is true; an FC layer is usually used at the end of a network. In autoencoders, however, it is quite common to also have some FC layers in the bottleneck ... so that is why I am interested in being able to add an FC layer in the middle of such a network as well.
"Here, the fullyConnectedLayer will not return a 3-D output needed as input for transposedConv2dLayer that is why invalid input data error is coming. "
When I run analyzeNetwork, it tells me that the FC layer produces a 1x1x100 activation, which is exactly the same shape as the input data. So I cannot follow why it produces invalid input data for the transposedConv2dLayer.
Indeed, I played around further and changed the input to 100x1x1, and again the FC layer produces a 1x1x100 activation. This seems counterintuitive to me, but I can more or less live with it.
More confusing for me is that I can add a regression layer at the end, generate a layerGraph, and training then works without any problems using the trainNetwork function. Since dlnetwork is built from a layerGraph object, I cannot follow why it causes a problem there.
Also the analyzeNetwork tool does not indicate any problems at this stage.
Using a 1x1x100 convolution layer only works partially. When I arrange the input data as 100x1x1, I can use it to generate a 1x1x100 activation and then continue with the other layers (roughly as in the sketch below). But it does not work together with the FC layer, since the FC layer already produces a 1x1x100 output.
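(A minimal sketch of this variant; the layer names, the tconv1 filter size, and the number of filters are illustrative placeholders:)
numLatentInputs = 100;
layersConv = [
    imageInputLayer([numLatentInputs 1 1],'Normalization','none','Name','in')
    % a [100 1] filter with 100 output channels maps the 100x1x1 input
    % to a 1x1x100 activation, which the transposed convolutions accept
    convolution2dLayer([numLatentInputs 1],numLatentInputs,'Name','conv1')
    transposedConv2dLayer(4,512,'Name','tconv1')   % placeholder values
    % ... remaining generator layers ...
    ];
analyzeNetwork(layersConv)   % shows the 1x1x100 activation after conv1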
Can you give me some further hints on what happens inside the dlnetwork structure?
Many thanks,
  1 Comment
Raunak Gupta on 2 Dec 2019
Hi,
The FC layer will only generate a single-dimension output; you may check this with
outputSize = lgraphGenerator.Layers(2).OutputSize
Here Layers(2) is the FC layer. analyzeNetwork reports its dimension as 1x1x100, but it is actually 100. At the end of a network this single-dimension output does not cause any error; the FC layer is designed to be used in the final part of a network. As for the bottleneck layers of an autoencoder, we currently have segnetLayers and unetLayers, which do not contain FC layers in the bottleneck part.
That is why I suggested moving to convolution2dLayer to achieve the same functionality.
As for dlnetwork, it is used so that writing custom training loops does not create problems related to dlarrays.
Hope this clarifies your queries.
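For completeness, the loss of the spatial dimensions can also be seen directly with a dlarray forward pass (a minimal sketch, assuming the R2019b dlnetwork/dlarray workflow; variable names are illustrative):
lg = layerGraph([
    imageInputLayer([1 1 100],'Normalization','none','Name','in')
    fullyConnectedLayer(100,'Name','fc')]);
net = dlnetwork(lg);
z = dlarray(randn(1,1,100,1,'single'),'SSCB');   % one latent sample, format SSCB
out = forward(net,z);
size(out)   % 100 1  -> channel-by-batch
dims(out)   % 'CB'   -> the spatial dimensions are gone after the FC layer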



Bodo Rosenhahn on 4 Dec 2019
Hi,
thanks for the suggestion about using convolution2dLayer. I implemented this version and it resolves my issue.
  1 Comment
Raunak Gupta on 5 Dec 2019
Hi,
You have accepted your own answer. This will mislead users coming to this question, as your answer does not itself provide the solution. It would be helpful if you unaccepted that answer and accepted the answer that actually worked for you.



MEP on 23 Jan 2020
Hi, I need to use the GAN, but my input data is a matrix of acceleration signals where each row contains the samples of one signal. So each row holds a different signal; how should I feed this as input? Can I pass the matrix directly, or do I need to split it into separate arrays and pass each one individually? Thanks...
