How to optimize a gpuArray operation to minimize GPU memory use
Dear All,
I have to compute A - B, where A and B are gpuArrays of 20000×20000 single-precision elements. Each of A and B is ~1.6 GB and I have a 4 GB card. A and B each have to be created as a whole, although not at the same time. My code generates A and B, but I run out of memory when I try A - B. arrayfun does not seem to help either (C = arrayfun(@minus, A, B); also runs out of memory). Is there a workaround? I thought of generating A, splitting it into 2 or 4 pieces (A1-A4), deleting A, creating B, splitting B (B1-B4), deleting B, calculating A1-B1, A2-B2, etc., deleting A1-A4 and B1-B4, then concatenating C1-C4 into C and deleting C1-C4.
Is there a smarter way to code this that minimizes the memory requirements (I am sure there is)? Thank you
Octavian
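For concreteness, here is a minimal sketch of the splitting plan described above, with one simplification: B is indexed block by block instead of being split into separate pieces. makeA and makeB are placeholders for whatever currently builds the full matrices, and the peak usage is still roughly two full matrices plus per-block temporaries, so headroom on a 4 GB card remains limited.

n    = 20000;                 % matrices are n-by-n, single precision
nBlk = 8;                     % number of column blocks (more blocks => smaller temporaries)
cols = n / nBlk;              % columns per block (assumes nBlk divides n)

A = makeA();                  % placeholder: builds the full A on the GPU (~1.6 GB)
Ablk = cell(1, nBlk);
for k = 1:nBlk
    Ablk{k} = A(:, (k-1)*cols+1 : k*cols);   % copy out one column block
end
clear A                       % drop the full copy of A

B = makeB();                  % placeholder: builds the full B on the GPU (~1.6 GB)
for k = 1:nBlk
    idx = (k-1)*cols+1 : k*cols;
    Ablk{k} = Ablk{k} - B(:, idx);           % reuse Ablk{k} to hold the block difference
end
clear B

C = [Ablk{:}];                % concatenate the blocks back into the full n-by-n result
clear Ablk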
Answers (1)
Edric Ellis
4 Dec 2014
Your card cannot possibly fit three 1.6 GB matrices in memory, which is what C = A - B requires. Therefore, you need to make the problem smaller somehow so that it fits on your device; this should be possible (if somewhat inconvenient) as long as you are accessing A and B in purely element-wise ways.
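For reference, the rough arithmetic behind that statement (assuming nothing else is resident on the device):

bytesPerMatrix = 20000^2 * 4;             % single precision = 4 bytes per element
gibPerMatrix   = bytesPerMatrix / 2^30    % ~1.49 GiB per matrix
gibForAminusB  = 3 * gibPerMatrix         % A, B and the result C: ~4.5 GiB, more than a 4 GB card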
2 comments
Octavian
4 Dec 2014
Edric Ellis
4 Dec 2014
That depends on how you're creating A and B - you need to create/read only part of them at a time, and then operate on those parts.
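A minimal sketch of that idea, assuming A and B can each be produced one column block at a time; makeAblock and makeBblock are hypothetical generators standing in for that per-block creation or reading, so only one block of each input is ever resident alongside the result:

n    = 20000;
nBlk = 4;
cols = n / nBlk;

C = zeros(n, n, 'single', 'gpuArray');    % result (~1.6 GB)
for k = 1:nBlk
    idx = (k-1)*cols + 1 : k*cols;
    Ak  = makeAblock(idx);                % hypothetical generator: n-by-numel(idx) block of A (~0.4 GB)
    Bk  = makeBblock(idx);                % hypothetical generator: matching block of B (~0.4 GB)
    C(:, idx) = Ak - Bk;                  % fill in the corresponding block of the result
    clear Ak Bk                           % free both blocks before the next pass
end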