Why is the data type not unified for custom training loops (dlarray) and internal training loops (array) in deep learning?
[XTrain,TTrain] = japaneseVowelsTrainData;   % load example sequence data
inputSize = 12;
numHead = 10;              % unused in this snippet
numHiddenUnits = 100;
numClasses = 9;
embeddingDimension = 50;   % unused in this snippet
numWords = 200;            % unused in this snippet
layers = [
    sequenceInputLayer(inputSize)
    batchNormalizationLayer
    peepholeLSTMLayer(numHiddenUnits,inputSize,OutputMode="last")   % custom layer
    % lstmLayer(numHiddenUnits,'OutputMode','last')                 % built-in alternative
    batchNormalizationLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
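The snippet above only defines the network; for completeness, here is a minimal sketch of how it might be trained (the option values are assumptions for illustration, not from the original run):

options = trainingOptions("adam", ...
    MaxEpochs=30, ...              % assumed value
    MiniBatchSize=27, ...          % assumed value
    SequenceLength="longest", ...
    Verbose=false);
net = trainNetwork(XTrain,TTrain,layers,options);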
For lstmLayer, the data type of the inputs to the forward function during training is a plain numeric array:

[debugger screenshot omitted]

For peepholeLSTMLayer, which is a user-defined custom layer, the data type of the inputs to the forward (predict) function is dlarray:

[debugger screenshot omitted]
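One way to observe this difference directly is to drop a tiny pass-through probe layer into the network and print the class of whatever it receives (typeProbeLayer is a hypothetical name used here only for illustration):

classdef typeProbeLayer < nnet.layer.Layer
    methods
        function layer = typeProbeLayer(name)
            layer.Name = name;
            layer.Description = "Prints the class of its input";
        end
        function Z = predict(layer,X)
            % Print the class of the incoming data, e.g. 'dlarray' in a
            % custom-layer/dlnetwork context or 'single' otherwise.
            disp("predict input type: " + class(X))
            Z = X;   % identity pass-through
        end
    end
end

Inserting typeProbeLayer("probe") next to lstmLayer or peepholeLSTMLayer in the layers array above then shows which data type each position of the network receives during training.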
Why is the data type not unified for custom training loops (dlarray) and internal training loops (array) in deep learning?
This inconsistency brings trouble and inconvenience, and I think it leads to bloated code as well.
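For example, code that has to run in both contexts ends up with defensive wrapping and unwrapping along these lines (illustrative sketch only):

X = rand(12,5,"single");     % stand-in for whatever data arrives
if ~isa(X,"dlarray")
    X = dlarray(X);          % wrap plain numeric input so that
end                          % dlarray-aware functions work uniformly
Z = relu(X);                 % deep learning operation that expects dlarray
Znum = extractdata(Z);       % unwrap again for plain-array code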
What is also puzzling: for built-in layers (lstmLayer), there is no layer validation with auto-generated example inputs, and the forward function is used during training; for user-defined layers, however, the layer is validated with auto-generated example inputs, and the predict function, not forward, is used. Why the difference?
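Incidentally, the validation that runs for user-defined layers can also be invoked manually with checkLayer. A sketch for the peephole layer, assuming a recent release that accepts a networkDataLayout (older releases take a validInputSize vector plus 'ObservationDimension' instead):

layer = peepholeLSTMLayer(numHiddenUnits,inputSize,OutputMode="last");
layout = networkDataLayout([inputSize NaN NaN],"CBT");   % channel-by-batch-by-time
checkLayer(layer,layout)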
I think the Deep Learning Toolbox in MATLAB is bloated; implementing deep learning functionality is inconvenient and complicated when it should be concise and plain.