GPU Out of memory on device.

63 views (in the last 30 days)
caesar on 16 Mar 2018
I am using the Neural Network Toolbox for deep learning and I have this chronic problem when running classification. My DNN model has already been trained, and I keep receiving the same error during classification, even though I used an HPC cluster with an NVIDIA GeForce 1080 as well as my own machine with a GeForce 1080 Ti. The error is:
Error using nnet.internal.cnngpu.convolveForward2D Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
Error in nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 14)
Error in nnet.internal.cnn.layer.Convolution2D/doForward (line 332)
Error in nnet.internal.cnn.layer.Convolution2D/forwardNormal (line 278)
Error in nnet.internal.cnn.layer.Convolution2D/predict (line 124)
Error in nnet.internal.cnn.DAGNetwork/forwardPropagationWithPredict (line 236)
Error in nnet.internal.cnn.DAGNetwork/predict (line 317)
Error in DAGNetwork/predict (line 426)
Error in DAGNetwork/classify (line 490)
Error in Guisti_test_script (line 56)
parallel:gpu:array:OOM
Has anyone faced the same problem before?
P.S.: my test data contains 15,000 images.
  1 comment
Thyagharajan K K on 28 Nov 2021
I had a similar problem. The main cause is a large number of learnable parameters. You can reduce the number of nodes in the fully connected layer, or you can shrink the feature map feeding the fully connected layer by increasing the stride value, or do both.
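The suggestion above can be sketched as follows. This is a minimal, hypothetical layer array (the sizes are illustrative, not from the original poster's network) showing how a larger stride downsamples the feature map before the fully connected layer, which cuts its parameter count:

```matlab
% Illustrative sketch: increasing 'Stride' shrinks the spatial size of the
% feature map that reaches fullyConnectedLayer, reducing its parameters.
layers = [
    imageInputLayer([224 224 3])
    convolution2dLayer(3, 32, 'Stride', 2, 'Padding', 'same')  % stride 2 halves H and W
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)   % further downsampling
    fullyConnectedLayer(10)             % fewer inputs -> far fewer learnable weights
    softmaxLayer
    classificationLayer];
```

With stride 1 in the convolution, the fully connected layer would see a 224x224x32 input instead of 56x56x32, roughly a 16x difference in its weight count.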


Accepted Answer

Joss Knight on 17 Mar 2018
Reduce the 'MiniBatchSize' option passed to classify.
  2 comments
caesar on 17 Mar 2018
Well, the model I am trying to use has already been trained, so how can I reduce the MiniBatchSize? Should I retrain the model with a reduced MiniBatchSize in order to be able to do classification?


More Answers (3)

Khalid Labib on 19 Feb 2020
Edited: Khalid Labib on 13 May 2020
In the "Single Image Super-Resolution Using Deep Learning" MATLAB demonstration:
I tried clearing my GPU memory (gpuDevice(1)) after each iteration and changed MiniBatchSize to 1 in the superResolutionMetrics helper function, as shown in the following line, but neither worked (error: GPU out of memory):
residualImage = activations(net, Iy, 41, 'MiniBatchSize', 1);
1) To work around this problem you can use the CPU instead:
residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
I think this problem is caused by the high resolution of the test images, e.g. the second image "car2.jpg", which is 3504 x 2336.
2) A better solution is to use the GPU for low-resolution images and the CPU for high-resolution images, by replacing "residualImage = activations(net, Iy, 41)" with:
sx = size(I);
if sx(1) > 1000 || sx(2) > 1000  % try lower thresholds if this still fails, e.g. 500
    residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
else
    residualImage = activations(net, Iy, 41);
end
3) The most efficient solution is to divide the image into smaller non-overlapping blocks (tiles), such that each tile is at most about 1024 pixels in either dimension, depending on your GPU. You can then run the CNN on each tile on the GPU without errors, and afterwards reassemble the tiles into an image of the original size.
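The tiling approach in (3) can be sketched as below. This is a minimal illustration, assuming (as in this example) that the layer-41 output of net has the same spatial size as its input; net, Iy, and the tile size follow the snippets above:

```matlab
% Sketch: process the image in non-overlapping tiles so each GPU call
% only sees a small sub-image, then stitch the results back together.
tileSize = 1024;                          % adjust to your GPU memory
[rows, cols] = size(Iy);
residualImage = zeros(rows, cols, 'like', Iy);
for r = 1:tileSize:rows
    for c = 1:tileSize:cols
        r2 = min(r + tileSize - 1, rows);
        c2 = min(c + tileSize - 1, cols);
        tile = Iy(r:r2, c:c2);            % one small tile fits on the GPU
        residualImage(r:r2, c:c2) = activations(net, tile, 41);
    end
end
```

One caveat: because the convolutions lack context at tile borders, naive tiling can leave visible seams; overlapping the tiles slightly and cropping the overlap before stitching avoids this.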
  1 comment
Rui Ma on 22 Apr 2020
Edited: Rui Ma on 22 Apr 2020
Thanks! It works, although it is a little slow.



marie chevalier on 4 Jun 2019
Edited: marie chevalier on 4 Jun 2019
Hi,
I have a similar issue here, and the link Joss gave doesn't really help me understand how to fix it.
I am working on the "Single Image Super-Resolution Using Deep Learning" MATLAB demonstration.
I would like to use the pretrained network on my own images.
I get a similar error message when arriving at the line:
Iresidual = activations(net, Iy_bicubic, 41);
I tried using the command line gpuDevice(1) and it didn't do anything.
I also tried changing the MiniBatchSize to 32 instead of the default 128 and got the same error.
Does anyone understand how to fix this problem?
  3 comments
marie chevalier on 26 Jun 2019
It still doesn't work. I'm afraid this is due to something else.
I'm out of ideas at the moment, I did a little cleanup around my computer just to be safe but it didn't change much.
I'll try re-downloading the example again, maybe I changed something in it without noticing.
Akash Tadwai on 17 Dec 2019
@Joss Knight, it still doesn't work in my case. I was training AlexNet with a mini-batch size of 1, but MATLAB still gives the same error.



Alvaro Lopez Anaya on 7 Nov 2019
In my case I had similar problems, despite having a GTX 1080 Ti.
As Joss said, reducing the MiniBatchSize solved my problem. It's all about the batch-size options.
