parallel.gpu.CUDAKernel memory types

m4 Chrennikov on 27 Sep 2011
Hi folks.
I would like to know which types of CUDA memory I can use with this MATLAB object. More specifically, I wonder whether constant or texture memory can be used. I'm a newbie in CUDA programming, but AFAIK using those memory types requires host code outside the kernel, and that CPU code is discarded by the compiler when building with the -ptx flag. Correct me if I'm wrong, or point me to some useful links.
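A minimal sketch of the situation (the kernel below is purely illustrative): the __constant__ symbol itself compiles to PTX without complaint, but filling it requires a host-side cudaMemcpyToSymbol call, and nvcc -ptx emits only device code, so there is nowhere for that call to live once the kernel is loaded through parallel.gpu.CUDAKernel.

__constant__ float coeffs[14];   // constant-memory symbol (hypothetical)

__global__ void applyCoeffs(float *out, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = in[i] * coeffs[i % 14];   // reads from constant memory
    }
}

// Host side, NOT produced by `nvcc -ptx` and not reachable from MATLAB:
//   cudaMemcpyToSymbol(coeffs, hostCoeffs, 14 * sizeof(float));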

Answers (2)

Edric Ellis on 28 Sep 2011
Currently it's not possible to use either texture or constant memory with CUDAKernel objects. The memory caches in recent NVIDIA GPUs mean that many of the advantages of using constant memory are no longer so important.
Do you have a particular example of what you'd like to do with texture/constant memory?
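As an aside, a sketch of the usual workaround (kernel and names are illustrative): declare the read-only table as a const pointer argument and pass it to feval as an ordinary gpuArray; CUDAKernel treats const-qualified pointer arguments as inputs only, and on Fermi-class hardware the L1/L2 caches serve the repeated reads.

// Sketch of the workaround: the read-only table arrives as a const pointer
// argument (an ordinary gpuArray passed to feval) rather than living in
// __constant__ memory; repeated reads are served by the hardware caches.
__global__ void scaleByTable(float *out, const float *table, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = in[i] * table[i % 14];   // every thread re-reads the small table
    }
}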
  2 comments
m4 Chrennikov on 1 Oct 2011
I want to store some precomputed data. What should a user do in this situation? I should point out that I already plan to store some other data in shared memory (that data is also constant across all threads, but I can process it separately, tile by tile).
For example, each thread needs to access two constant float arrays:
1x14 - some input
and
14x300 - precomputed data
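For concreteness, a sketch of that access pattern (kernel name, variable names, and the dot-product combination step are purely illustrative): the 1x14 input is staged in shared memory once per block, while the 14x300 table is read from global memory through a const pointer, one output column per thread.

#define ROWS 14   // length of the input vector

__global__ void combine(float *out,
                        const float *input,   // 1 x 14
                        const float *table,   // 14 x 300, row-major
                        int cols)             // 300
{
    __shared__ float sInput[ROWS];

    // Cooperative load of the small vector (assumes blockDim.x >= ROWS).
    if (threadIdx.x < ROWS) {
        sInput[threadIdx.x] = input[threadIdx.x];
    }
    __syncthreads();

    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col < cols) {
        float acc = 0.0f;
        for (int r = 0; r < ROWS; ++r) {
            acc += sInput[r] * table[r * cols + col];   // table read from global memory
        }
        out[col] = acc;
    }
}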
m4 Chrennikov on 6 Oct 2011
ping



m4 Chrennikov on 6 Dec 2011
Ping!
I've read about the LDU instruction in the Fermi architecture, but it isn't clear to me in which cases values from global memory are cached. Could you give some explanation of that, using my comment above as an example?
As I understand it, there is no way to use textures?
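For reference, the Fermi-era rule of thumb: global loads go through L1/L2 by default, and the compiler can additionally emit LDU (a load through the constant cache) when the pointer is const/__restrict__ and the address does not depend on the thread index, so every thread in the warp reads the same location. A sketch with illustrative names:

__global__ void combineNoShared(float *out,
                                const float * __restrict__ input,  // 1 x 14, read-only
                                const float * __restrict__ table,  // 14 x 300, read-only
                                int rows, int cols)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col < cols) {
        float acc = 0.0f;
        for (int r = 0; r < rows; ++r) {
            // input[r]: same address for every thread in the warp and behind a
            // const __restrict__ pointer, so the compiler may emit LDU and use
            // the constant cache.
            // table[r * cols + col]: address varies per thread, so it is an
            // ordinary global load served by L1/L2.
            acc += input[r] * table[r * cols + col];
        }
        out[col] = acc;
    }
}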
