Is there a way to make device memory persist between CUDA MEX calls

Matt J on 25 Dec 2013
Commented: Matt J on 18 Jun 2014
Is there a graceful way to allocate data on, or transfer data to, the GPU in one MEX file (essentially MATLAB interfaces to cudaMalloc and cudaMemcpy) and then process that data on the GPU with a different MEX file? Ideally, I'd like to do this without the Parallel Computing Toolbox.
After the allocation/transfer, I need to keep the pointer to the device memory in the MATLAB workspace in some form until it is passed to the data-processing MEX file. What is the best way to do that? Would I just convert the pointer value (not dereferenced) to a MATLAB integer and then convert it back in the data-processing MEX file when needed?

Accepted Answer

Oliver Woodford on 2 Jan 2014
Edited: Oliver Woodford on 2 Jan 2014
Yes, you can reinterpret_cast the pointer to an integer of sufficient bit length, e.g. uint64, and return this to MATLAB. Then pass the integer to the second MEX file, where it is reinterpreted as the pointer to the GPU memory again.
If you want to use this approach and minimize the chances of leaking memory, wrap the memory allocation in a C++ class and manage the object's lifetime using the approach outlined in this FEX submission.
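As a concrete illustration, here is a minimal sketch of that round trip. The file names, the scaleKernel kernel, and the two-argument calling convention are my own assumptions, not part of the answer or the FEX submission, and error checking of the CUDA calls is omitted.

// allocate_on_gpu.cu -- hypothetical MEX file: copies a double array to the
// GPU and returns the raw device pointer to MATLAB as a uint64 handle.
#include "mex.h"
#include <cuda_runtime.h>
#include <cstdint>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected one double input array.");

    size_t numBytes = mxGetNumberOfElements(prhs[0]) * sizeof(double);

    double *devPtr = NULL;
    cudaMalloc((void **)&devPtr, numBytes);
    cudaMemcpy(devPtr, mxGetPr(prhs[0]), numBytes, cudaMemcpyHostToDevice);

    // Return the device pointer, reinterpreted as a uint64 scalar
    plhs[0] = mxCreateNumericMatrix(1, 1, mxUINT64_CLASS, mxREAL);
    *(uint64_t *)mxGetData(plhs[0]) = reinterpret_cast<uint64_t>(devPtr);
}

A second MEX file can then recover the pointer from the handle and launch work on it:

// process_on_gpu.cu -- hypothetical companion MEX file: recovers the device
// pointer from the uint64 handle, scales the data in place, and copies it back.
#include "mex.h"
#include <cuda_runtime.h>
#include <cstdint>

__global__ void scaleKernel(double *data, size_t n, double factor)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 2)
        mexErrMsgTxt("Expected a uint64 device handle and an element count.");

    double *devPtr = reinterpret_cast<double *>(*(uint64_t *)mxGetData(prhs[0]));
    size_t n = (size_t)mxGetScalar(prhs[1]);

    scaleKernel<<<(unsigned)((n + 255) / 256), 256>>>(devPtr, n, 2.0);
    cudaDeviceSynchronize();

    // Copy the result back into a regular MATLAB array
    plhs[0] = mxCreateDoubleMatrix(1, (mwSize)n, mxREAL);
    cudaMemcpy(mxGetPr(plhs[0]), devPtr, n * sizeof(double), cudaMemcpyDeviceToHost);
}

Note that nothing here frees the device memory: a third MEX call (or the class-based approach Oliver mentions) is needed to cudaFree the pointer, which is exactly the leak risk the accepted answer warns about.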
  4 Comments
Oliver Woodford on 18 Jun 2014
Edited: Oliver Woodford on 18 Jun 2014
Did this do the trick?
Matt J on 18 Jun 2014
I never got a chance to test it, but I accept the answer anyway.
I'm starting to think that the CUDAKernel class in the Parallel Computing Toolbox really is the better investment, though. Its interface takes care of a lot of the awkward things you otherwise have to do outside the kernel itself in C/C++.


More Answers (1)

Eric Sampson on 2 Jan 2014
Matt, see if the methods discussed in the following thread could be adapted for your use: http://www.mathworks.com/matlabcentral/newsreader/view_thread/278243
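The linked thread isn't quoted here, but a common pattern for persisting state across calls to a single MEX file (an assumption about what the thread covers, not a quote from it) is to hold the device pointer in a static variable, lock the MEX file with mexLock, and register a mexAtExit cleanup. A rough sketch, with file name and details my own:

// persistent_gpu_buffer.cu -- hypothetical sketch: keeps a device allocation
// alive across calls to this one MEX file using a static pointer and mexLock.
#include "mex.h"
#include <cuda_runtime.h>

static double *g_devPtr = NULL;   // persists between calls while the MEX file stays loaded

static void cleanup(void)
{
    // Called when the MEX file is cleared or MATLAB exits
    if (g_devPtr) { cudaFree(g_devPtr); g_devPtr = NULL; }
}

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (g_devPtr == NULL) {
        size_t numBytes = mxGetNumberOfElements(prhs[0]) * sizeof(double);
        cudaMalloc((void **)&g_devPtr, numBytes);
        cudaMemcpy(g_devPtr, mxGetPr(prhs[0]), numBytes, cudaMemcpyHostToDevice);
        mexAtExit(cleanup);   // free device memory when the MEX file is unloaded
        mexLock();            // prevent MATLAB from clearing the MEX file prematurely
    }
    // ... subsequent calls can launch kernels on g_devPtr ...
}

The drawback relative to the uint64-handle approach above is that the memory is only visible inside this one MEX file, so the processing has to live in (or be dispatched from) the same file.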
  1 Comment
Matt J on 2 Jan 2014
Thanks for the reference, Eric. I'll need to study it for a bit, but it looks like it could be applicable.

