Large training set in Semantic Segmentation runs out of memory in trainNetwork

1 view (last 30 days)
Lorant Szabo
Lorant Szabo on 18 Nov 2019
Dear Community!
Since the following question has not been answered yet, I would like to give an update with more details.
https://www.mathworks.com/matlabcentral/answers/413264-large-training-set-in-semantic-segmentation
I would like to train on a dataset that contains 600 images of size 1208x1920 with 50 classes.
I used the following code, changing only the classes and the paths:
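Roughly, the setup looks like the sketch below (the paths, class names, pixel label IDs, and the segnetLayers network are placeholders and assumptions, not my exact values):

imds = imageDatastore('C:\data\trainImages');               % 1208x1920 training images
classNames = ["class1" "class2" "class50"];                 % placeholder names; 50 classes in the real data
labelIDs   = [1 2 50];                                      % matching pixel label IDs (placeholders)
pxds = pixelLabelDatastore('C:\data\trainLabels', classNames, labelIDs);

imdsVal = imageDatastore('C:\data\valImages');              % 200 validation images
pxdsVal = pixelLabelDatastore('C:\data\valLabels', classNames, labelIDs);

pximds    = pixelLabelImageDatastore(imds, pxds);           % training image/label pairs
pximdsVal = pixelLabelImageDatastore(imdsVal, pxdsVal);     % validation image/label pairs

lgraph = segnetLayers([1208 1920 3], numel(classNames), 'vgg16');

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 8, ...
    'MaxEpochs', 30, ...
    'ValidationData', pximdsVal);

net = trainNetwork(pximds, lgraph, opts);                   % this call runs out of memory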
However the training runs on the following error:
[Screenshot: matlab error.png]
In the error, 1208x1920 is the image size, 50 is the number of classes, and 200 is the number of validation pictures.
With only 1 validation picture, the training starts.
Memory: 64 GB
GPU: Titan X Pascal 12GB
We would like to know what is the best way to overcome this problem.

Answers (1)

Raunak Gupta
Raunak Gupta on 22 Nov 2019
Hi,
As mentioned in the referenced example, you may need to resize the images to a smaller size that fits into GPU memory, or you may try reducing the MiniBatchSize to a smaller value like 4 or 2. If even 1 image doesn't fit into memory, you need to resize the images, choose a smaller network, or increase the GPU memory on the system. Since an imageDatastore is used here, the validation images won't all be read into memory at once; only MiniBatchSize images are read at a time.
Since the image size here is almost 3.4 times the size used in the example, I recommend first changing the MiniBatchSize to 2, compared to 8 in the example.
If you want to increase the MiniBatchSize, the best way is to increase the GPU memory available on the system.
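A minimal sketch of those two suggestions (folder names and the half-size target are placeholders): resize both the images and the label images on disk, then train with a smaller MiniBatchSize. Halving each spatial dimension reduces every image to a quarter of its original pixel count.

imds  = imageDatastore('C:\data\trainImages');
lblds = imageDatastore('C:\data\trainLabels');    % label images read here as plain PNGs

outImg = 'C:\data\trainImagesHalf';
outLbl = 'C:\data\trainLabelsHalf';
if ~exist(outImg, 'dir'), mkdir(outImg); end
if ~exist(outLbl, 'dir'), mkdir(outLbl); end

for k = 1:numel(imds.Files)
    I = imresize(imread(imds.Files{k}),  [604 960]);            % bilinear for images
    C = imresize(imread(lblds.Files{k}), [604 960], 'nearest'); % nearest keeps label IDs intact
    [~, n, e] = fileparts(imds.Files{k});
    imwrite(I, fullfile(outImg, [n e]));
    [~, n, e] = fileparts(lblds.Files{k});
    imwrite(C, fullfile(outLbl, [n e]));
end

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 2, ...    % down from 8 in the example
    'MaxEpochs', 30);

Rebuild the datastores from the resized folders and pass these options to trainNetwork as before.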
4 comments
Brian Derstine
Brian Derstine on 8 Dec 2020
Edited: Brian Derstine on 8 Dec 2020
What do you mean by this: "unless the validation data is loaded specifically into the code,"?
Is there a particular code pattern that will cause the entire validation set to be loaded into memory?
Reduce your validation dataset size and it should work.
Brian Derstine
Brian Derstine on 5 Nov 2021
Also, an update in R2021b may fix this bug: "You are correct. In MATLAB R2021a, there is a bug in the Neural Network Toolbox where, depending on the workflow, if the validation data is large, you may run out of memory on the GPU. This has been reported in image segmentation and LSTM workflows.
The workaround is to reduce the validation data set size or train without validation data. Reducing the "miniBatchSize" does not fix this issue.
A patch for this bug was made in MATLAB R2021b. You may want to consider using this version of MATLAB to avoid encountering this issue." (response from matlab support)
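A minimal sketch of that workaround, reusing the datastore names from the setup in the question (the number of images kept is an arbitrary placeholder): validate on a small subset, or drop validation entirely.

keepN = 20;                                   % e.g. keep 20 of the 200 validation images
imdsValSmall = imageDatastore(imdsVal.Files(1:keepN));
pxdsValSmall = pixelLabelDatastore(pxdsVal.Files(1:keepN), classNames, labelIDs);
pximdsValSmall = pixelLabelImageDatastore(imdsValSmall, pxdsValSmall);

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 2, ...
    'ValidationData', pximdsValSmall);        % or omit ValidationData to train without validation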


Release: R2018b
