How do I reduce the memory footprint of a gpuArray?
Hello,
I am finding that creating a GPU array creates a huge spike in MATLAB's memory usage:
After opening MATLAB, top shows:
20309 mcoughli 20 0 4620m 172m 66m S 0.0 0.0 0:02.85 MATLAB
So approximately 4.6 GB of virtual memory (the VIRT column). When I create a gpuArray from the command line:
>> gpuArray(1);
The virtual memory usage spikes dramatically:
20309 mcoughli 20 0 537g 605m 255m S 0.0 0.1 2:07.06 MATLAB
i.e. to approximately 537 GB.
Does anyone understand why this is happening and whether it can be prevented? It creates problems when I attempt to run on smaller compute nodes. Running ulimit -v beforehand works to some extent, but it is more difficult to set up when running parallel processes.
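One way to make the ulimit workaround easier to manage for parallel jobs (a sketch, not from this thread) relies on the fact that a ulimit is per-process and inherited: set it in a small wrapper script and launch each MATLAB worker through that wrapper, so the cap applies to every worker without touching the parent shell. The demonstration below shows the limit staying confined to the child process; the 64 GB cap and the matlab invocation in the comments are assumptions to adapt.

```shell
#!/bin/bash
# Each process gets its own copy of a ulimit, so setting it in a
# subshell (or wrapper script) before exec-ing MATLAB confines the
# cap to that one worker.
# Demonstration: cap virtual memory at 1 GB in a child shell only.
( ulimit -v $((1024 * 1024)); echo "child limit: $(ulimit -v)" )
# prints "child limit: 1048576" (ulimit -v is in KB)
echo "parent limit: $(ulimit -v)"   # parent is unaffected

# A real wrapper would look like this (hypothetical -- tune the cap
# and MATLAB flags for your nodes):
#   ulimit -v $((64 * 1024 * 1024))   # 64 GB, in KB
#   exec matlab -nodisplay -nosplash "$@"
```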
Thank you,
Michael
Accepted Answer
Edric Ellis
on 3 Dec 2013
This is unfortunately likely to be due to loading all the GPU support libraries. These are quite large, and all get loaded when you first create a gpuArray. I'm afraid there's no workaround for this.
2 comments
Joss Knight
on 9 Dec 2013
Have a look at installdir/bin/arch and list the contents by size - you'll see some obvious GPU libraries near the top, e.g. npp, cublas, and cufft. To get good performance, GPU runtime code is highly specialized, which means there are multiple implementations for every use case - add to that the overhead of supporting multiple compute architectures.
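That inspection can be done from a shell; a minimal sketch, assuming a 64-bit Linux install where the arch directory is named glnxa64 and the install path shown below (substitute your own - in MATLAB, the `matlabroot` command prints it):

```shell
# List the largest files in MATLAB's arch-specific library directory.
# MATLABROOT and glnxa64 are assumptions -- adjust for your install
# path and platform. The CUDA support libraries (npp, cublas, cufft)
# typically sit near the top of the size-sorted listing.
MATLABROOT=/usr/local/MATLAB/R2013b
ls -lS "$MATLABROOT/bin/glnxa64" 2>/dev/null | head -20
```

`ls -S` sorts by file size, largest first, which is what makes the big GPU libraries easy to spot.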