GPU Array lazy evaluation?
I came across this behavior of the MATLAB gpuArray class by chance, as a consequence of several unexpected out-of-memory errors in some code that is of no importance here. Can anybody explain it? An array of 40000*5000 elements in single precision should take 0.8 GB. If I run the following operations in sequence
H=zeros([40000*5000 1],'single','gpuArray');
H=H+randn([40000*5000 1],'single','gpuArray');
H=H+randn([40000*5000 1],'single','gpuArray');
H=zeros([40000*5000 1],'single','gpuArray');
the GPU RAM shows 3297 MB occupied, of which 200 MB are taken by default by MATLAB when the GPU is selected. However, if I clear the variable H the memory is freed completely, which means it is not a leak. It seems that for certain operations MATLAB reserves extra memory or even duplicates the target matrix. Furthermore, if I check the available memory (not the free memory), it shows only the expected matrix size being taken. However, the "locked" memory really is unavailable: I have been monitoring my computations, and the loop crashes as soon as the maximum memory size is reached.
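One way to observe the gap between the two numbers described above is to query the gpuDevice object around each step; a minimal sketch (FreeMemory, AvailableMemory, and wait are the documented Parallel Computing Toolbox API, the sizes are the ones from the question):

```matlab
% Sketch: compare free vs. available GPU memory around the allocations.
g = gpuDevice;                                     % current GPU device object
freeBefore = g.FreeMemory;                         % free bytes before allocating

H = zeros([40000*5000 1], 'single', 'gpuArray');   % ~0.8 GB column vector
H = H + randn([40000*5000 1], 'single', 'gpuArray');
wait(g);                                           % ensure all kernels have finished

fprintf('Free memory drop:  %.0f MB\n', (freeBefore - g.FreeMemory)/2^20);
fprintf('Available memory:  %.0f MB\n', g.AvailableMemory/2^20);

clear H                                            % releases the array's memory
```

FreeMemory reflects what the driver reports as free (so pooled memory counts as used), while AvailableMemory reflects what MATLAB can actually hand out, which is why the two disagree here.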
Can anybody, please, explain this behavior, and whether a workaround exists that still uses gpuArray?
I am running an NVIDIA GTX 1080ti with a XEON E5-2860 v2 on Ubuntu 16.04, with MATLAB R2018a.
Answers (1)
Joss Knight
on 19 May 2018
MATLAB doesn't take 200MB on device selection, the CUDA driver does.
MATLAB pools up to a quarter of the GPU memory by default. This considerably improves performance by reducing the number of synchronous allocations.
You can turn off pooling using
feature('GpuAllocPoolSizeKb', 0)
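A quick way to check the effect is to disable the pool before touching the GPU and repeat the allocation from the question; a sketch (note that feature(...) is the undocumented switch named above and may change between releases):

```matlab
% Sketch: turn the allocation pool off, then repeat the original experiment.
feature('GpuAllocPoolSizeKb', 0);                  % pool size 0 KB: no pooling
g = gpuDevice;                                     % (re)select the device

H = zeros([40000*5000 1], 'single', 'gpuArray');   % ~0.8 GB, as in the question
wait(g);
fprintf('Free memory now: %.0f MB\n', g.FreeMemory/2^20);

clear H                                            % with pooling off, the memory
                                                   % goes straight back to the driver
```

With the pool disabled, FreeMemory should drop by roughly the array size only, at the cost of slower repeated allocations.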