Dividerand - neural network training

I'm training a neural network, and for some simulations I used the function 'dividerand' instead of 'divideind'. Without using the command [net,tr] = train(net,...), where I could check how the data were randomly separated into training, validation and testing datasets, is it possible to verify how the random separation was executed (by analysing the indices)?

Accepted Answer

Greg Heath
Greg Heath on 26 June 2012


If you do not include tr as a training output, the only way to obtain the dividerand indices is to call dividerand before calling newff/newfit/fitnet/patternnet/feedforwardnet and to nullify its use within those net creation functions.
However, if all you are concerned about is duplicating a run, just specify the same random number seed before calling the creation function.
Hope this helps.
Greg
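A minimal sketch of what Greg describes, assuming you already have an input matrix x and a target matrix t (columns are samples); the seed value, the 10-neuron hidden layer and the 70/15/15 ratios are arbitrary illustrative choices:

```matlab
% Fix the random number generator so the division is reproducible.
rng(0)

Q = size(x,2);                            % number of samples
[trainInd,valInd,testInd] = dividerand(Q,0.70,0.15,0.15);

% Create the net, then nullify its internal random division by
% switching to index-based division with the indices obtained above.
net = feedforwardnet(10);
net.divideFcn            = 'divideind';
net.divideParam.trainInd = trainInd;
net.divideParam.valInd   = valInd;
net.divideParam.testInd  = testInd;

net = train(net,x,t);
```

Because trainInd, valInd and testInd now live in your workspace, the split can be inspected (or saved) regardless of whether tr was captured from train.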

More Answers (4)

Sean de Wolski
Sean de Wolski on 22 June 2012


I am not sure I understand. You can analyze the indices just by looking at the outputs from dividerand. Can you please clarify your question a little further and/or provide a short example?
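For instance (a toy call with 10 samples and 70/15/15 ratios; the numbers are illustrative), the three outputs are the index vectors themselves:

```matlab
% Randomly divide the indices 1..10 into train/val/test subsets.
[trainInd,valInd,testInd] = dividerand(10,0.70,0.15,0.15)
% Leaving off the semicolon prints the three index vectors directly.
```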
Greg Heath
Greg Heath on 22 June 2012


The structure tr in the double output [net,tr] = train(net,x,t); will contain the train/val/test indices.
Greg
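Concretely (assuming x and t are your input and target matrices), the indices sit in three fields of tr:

```matlab
[net,tr] = train(net,x,t);

tr.trainInd   % indices of the samples used for training
tr.valInd     % indices of the samples used for validation
tr.testInd    % indices of the samples used for testing
```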
JSousa Sousa
JSousa Sousa on 25 June 2012


Thanks for your prompt answers Sean de Wolski and Greg Heath.
In some simulations I just used the command net = train(net,...), without specifying an output tr where I would have a structure with the train/val/test indices. In that case, is there any possibility of verifying how the data were effectively split into the different subsets?
I know that to repeat a simulation under the same scenario, two things must be well known: the network weight/bias initialization (using, for example, the command revert to obtain the initial parameters) and the data separation criteria (difficult for me to recover without the tr structure).
Thanks for your time and attention
João.
JSousa Sousa
JSousa Sousa on 26 June 2012


I have a saved neural network model with good results that I would like to analyse in more detail. So, to be clear, I'm interested in duplicating this training process (another similar run). However, without a tr structure and also without the random number seed saved for this case, I don't think I will get exactly the same data division.
Do you know how to recover this seed?
Thanks again
JSousa.

1 comment

Greg Heath
Greg Heath on 26 June 2012
No.
You should always specify the random number seed/state before calling a function that uses random numbers.
The only thing you can do now is to use a loop to create a lot of designs and try to find one that has a similar performance to the original.
Greg
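A hedged sketch of the loop Greg suggests, assuming x and t exist and that targetPerf holds the saved model's known performance (a value you would supply yourself); the seed range, hidden layer size and tolerance are arbitrary:

```matlab
% Re-train many candidate designs, each with a recorded seed, and keep
% the one whose test-set performance is closest to the original model's.
bestGap = Inf;
for k = 1:100
    rng(k)                               % seed recorded via loop index k
    net      = feedforwardnet(10);
    [net,tr] = train(net,x,t);
    perf = perform(net, t(:,tr.testInd), net(x(:,tr.testInd)));
    if abs(perf - targetPerf) < bestGap
        bestGap  = abs(perf - targetPerf);
        bestNet  = net;
        bestSeed = k;                    % seed that reproduces bestNet
    end
end
```

Saving bestSeed alongside the network avoids the original problem: re-seeding with rng(bestSeed) before re-creating and re-training reproduces both the weight initialization and the data division.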
