Using the GPU through the Parallel Computing Toolbox to optimize matrix inversion
Hello,
In my code I implement:
A=B\c;
B is an invertible matrix (14400x14400), and c is a vector.
Since B is large, this takes a very long time. I am using Windows 7, 64-bit, with 4 GB of RAM and 2 cores.
Would I be able to shorten the time by using the GPU through the Parallel Computing Toolbox?
Would I need to write my own version of parallel matrix inversion, or has this problem been faced before and is there a posted solution?
Thank you.
Accepted Answer
Jill Reese
on 2 Oct 2012
This functionality has been available on the GPU in the Parallel Computing Toolbox since MATLAB R2010b. To use it, at least one of your variables B and c must be transferred to the GPU before calling mldivide (\).
In the latest release of MATLAB (R2012b) you can use the following code to solve your problem:
B = gpuArray(B); % transfer B to the GPU
c = gpuArray(c); % transfer c to the GPU
A = B \ c;       % solve the linear system on the GPU and store A on the GPU;
                 % this line of your existing code doesn't change at all
% You can continue to perform work on the GPU using A, B, and c, or:
A = gather(A);   % transfer A back to the MATLAB workspace
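For reference, here is a minimal benchmarking sketch (not part of the original answer) comparing the CPU and GPU solve times with tic/toc. The matrix size n and the random, well-conditioned test data are placeholders, and the sketch assumes a supported GPU is present; wait(gpuDevice) keeps the timer from stopping before the GPU finishes.
n = 4096;                  % placeholder size; the question's system is 14400x14400
B = rand(n) + n*eye(n);    % random, well-conditioned test matrix (placeholder data)
c = rand(n,1);
tic;
A_cpu = B \ c;             % baseline solve on the CPU
tCPU = toc;
Bg = gpuArray(B);          % transfer operands to the GPU
cg = gpuArray(c);
tic;
Ag = Bg \ cg;              % solve on the GPU
wait(gpuDevice);           % block until the GPU finishes before reading the timer
tGPU = toc;
fprintf('CPU: %.2f s, GPU: %.2f s, diff: %g\n', ...
    tCPU, tGPU, norm(gather(Ag) - A_cpu));
Note that the timed region deliberately excludes the host-to-GPU transfer; for smaller systems that transfer can dominate the total time, so it is worth timing the gpuArray and gather calls as well. On releases that provide it, gputimeit is a more robust way to time GPU code.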