Rules of thumb on GPU usage?
I've converted several algorithms to run on a GPU and have always seen a tremendous improvement in execution time. For the first time, this trick has failed me: execution time gets longer when I use the GPU.
Is there a better way to determine the performance of a code snippet on a GPU than altering the code and trying it out?
Specifically, when the target code may be executed on different classes of GPUs, are there rules of thumb to predict the improvement or degradation that will result?
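As a baseline for the "try it out" approach, MATLAB's Parallel Computing Toolbox provides `gputimeit`, which times a function handle on the GPU while handling device synchronization correctly. A minimal sketch (the matrix size here is illustrative, and a supported GPU is assumed):

```matlab
% Compare CPU and GPU timings for the same operation.
% Requires Parallel Computing Toolbox and a supported GPU.
A = rand(4000);          % array on the CPU
G = gpuArray(A);         % copy of the same data on the GPU

tcpu = timeit(@() A * A);       % CPU timing
tgpu = gputimeit(@() G * G);    % GPU timing, synchronized correctly
fprintf('CPU: %.4fs, GPU: %.4fs, ratio: %.1fx\n', tcpu, tgpu, tcpu/tgpu);
```

Timing with `tic`/`toc` alone can mislead on the GPU because kernel launches are asynchronous; `gputimeit` waits for the device to finish before recording the time.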
0 comments
Answers (1)
Joss Knight
on 12 Oct 2015
You ought to provide some examples so that we know the kind of thing you're getting at.
The main rule of thumb is that the GPU will generally perform well when your code is highly data-parallel. If you get a speed-up from vectorizing your code, you'll probably get a speed-up on the GPU. This means the same sort of operations are taking place in multiple places on a large dataset. If, however, you have small pieces of data, a lot of disparate tasks, dependent operations, and loops, you probably don't have something that will parallelize well.
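The contrast above can be sketched in a few lines (the variable names and sizes are illustrative, and a supported GPU is assumed):

```matlab
% Sketch: data-parallel work suits the GPU; serially dependent loops do not.
N = 1e7;
x = rand(N, 1, 'gpuArray');

% Good fit: one elementwise expression applied across the whole array
y = sin(x).^2 + cos(x).^2;

% Poor fit: a loop where each iteration depends on the previous one
z = zeros(N, 1, 'gpuArray');
z(1) = x(1);
for k = 2:N
    z(k) = z(k-1) + x(k);   % each step must wait for the step before it
end
% (cumsum(x) expresses this same dependency in one parallel-friendly call)
```

The vectorized expression launches a handful of large kernels over all N elements at once; the loop launches millions of tiny dependent operations, which is exactly the pattern that makes GPU code slower than CPU code.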
2 comments
Joss Knight
on 14 Oct 2015
Edited: Joss Knight on 14 Oct 2015
gpuArray supports logical indexing so I see no reason why you would need any data transfers (see the blog article I linked above for examples). Can you explain?
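For reference, logical indexing on a `gpuArray` keeps both the selection and any masked assignment on the device, so no transfer back to the CPU is needed. A small sketch (the values are illustrative):

```matlab
% Logical indexing runs directly on the GPU; no gather() is required.
g = gpuArray([-2 -1 0 1 2]);
mask = g > 0;       % mask is itself a logical gpuArray
pos = g(mask);      % selection happens on the device
g(g < 0) = 0;       % masked assignment also stays on the GPU
```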