
Mini batch size changing value during gradient descent

Arthur CASSOU on 27 Jul 2022
Hello everyone,
I am currently working on multimodal deep learning, with a neural network classifier receiving two time-dependent inputs: videos and a set of features. Videos are 4-D arrays of size width × height × depth × frames, and features are 2-D arrays of size number of features × frames.
I've been trying to classify the inputs based on the examples given below, as well as on some of my previous work.
During training, I came across a very singular situation: the mini-batch size, which I had initially set to 16, was decreased to 9. This produced an error, as the layers were expecting batches of 16 inside the dlfeval() call.
I haven't found anything related to this problem here, so I was wondering whether any of you might have advice or a solution for me.
Thank you for your help!

Answers (1)

Shubham on 27 Sep 2023
I understand that while training the neural network you found that the mini-batch size, which was initially set to 16, was later decreased to 9.
Please check whether the total number of samples in your dataset is divisible by 16. If it is not, the final mini-batch contains only the remaining samples, so its size is automatically reduced to the remainder rather than staying at 16. The feasible mini-batch size also depends on the available memory. Also look for any inconsistencies in data preprocessing or in the network architecture.
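A leftover batch of 9 is consistent with that remainder. A minimal sketch of the arithmetic (the dataset size of 153 here is purely hypothetical, for illustration):

```matlab
numObservations = 153;                 % hypothetical dataset size
miniBatchSize   = 16;
floor(numObservations/miniBatchSize)   % number of full mini-batches of 16 (here, 9)
mod(numObservations, miniBatchSize)    % size of the final partial mini-batch (here, 9)
```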
You may try using Mini-Batch datastore for reading data in batches.
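Since your dlfeval() call suggests a custom training loop, one option is minibatchqueue with its 'PartialMiniBatch' option set to 'discard', so every batch reaching the model has exactly 16 observations. A minimal sketch, assuming your data is already in a datastore ds, a dlnetwork net, and a loss function modelLoss (all three are placeholder names):

```matlab
% ds, net, and modelLoss are hypothetical names; adjust the
% format strings to your own data layout.
mbq = minibatchqueue(ds, ...
    'MiniBatchSize', 16, ...
    'PartialMiniBatch', 'discard', ...        % drop the leftover batch instead of returning it
    'MiniBatchFormat', {'SSCTB', 'CB'});      % e.g. spatial, spatial, channel, time, batch

while hasdata(mbq)
    [X, T] = next(mbq);                       % X now always has 16 observations
    [loss, gradients] = dlfeval(@modelLoss, net, X, T);
end
```

Discarding a handful of samples per epoch is usually harmless; alternatively, you can write the model and loss to be agnostic to the batch dimension so that a smaller final batch is accepted.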
Hope this helps!!

Version

R2020b
