Can this GPU code snippet be redone without nested loops?
Hello, I have two matrices: matrix1 is a 1000-by-800 logical array of 1s and 0s, and matrix2 is a different 2000-by-800 logical array.
I am essentially taking each row of matrix1 and, against each row of matrix2, calculating the row sum of common elements divided by the total number of elements set in either row. Both of these arrays are gpuArrays. What I have so far:
for j = gpuArray.colon(1, x)
    for k = gpuArray.colon(1, y)
        output(j,k) = sum(matrix1(j,:) & matrix2(k,:)) / sum(matrix1(j,:) | matrix2(k,:));
    end
end
This runs very fast for small values of x and y, but once x and y are large it takes dramatically longer to run on the GPU.
I am investigating the use of repmat here, but I am not sure how to implement it. Any ideas? Or is there another option to get rid of the nested for loops?
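For reference, one loop-free approach (a sketch, assuming matrix1 and matrix2 are logical gpuArrays of sizes x-by-n and y-by-n, as above) is to get all pairwise intersection counts with a single matrix product, then derive the union counts from the row sums, since |A or B| = |A| + |B| - |A and B|:

```matlab
% Sketch: loop-free Jaccard-style similarity between all row pairs.
A = single(matrix1);                      % cast logicals so mtimes runs on the GPU
B = single(matrix2);
inter = A * B.';                          % inter(j,k) = sum(matrix1(j,:) & matrix2(k,:))
r1 = sum(A, 2);                           % bits set in each row of matrix1 (x-by-1)
r2 = sum(B, 2);                           % bits set in each row of matrix2 (y-by-1)
uni = bsxfun(@plus, r1, r2.') - inter;    % union counts: |A| + |B| - |A and B|
output = inter ./ uni;                    % x-by-y similarity matrix
```

This replaces the two loops with one large matrix multiply, which is exactly the kind of operation GPUs execute efficiently.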
Thanks
Accepted Answer
More Answers (1)
Sean de Wolski on 11 Nov 2013
Edited: Sean de Wolski on 11 Nov 2013
Is output preallocated?
Before the loops:
output = gpuArray.zeros(x,y);
This should speed it up dramatically.
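Putting the advice together, a sketch of the original loop with the preallocation in place (x and y being the row counts of matrix1 and matrix2) would look like:

```matlab
% Sketch: original nested loop, with output preallocated on the GPU.
output = gpuArray.zeros(x, y);   % allocate once, before the loops
for j = 1:x
    for k = 1:y
        output(j,k) = sum(matrix1(j,:) & matrix2(k,:)) / ...
                      sum(matrix1(j,:) | matrix2(k,:));
    end
end
```

Without the preallocation, output grows on every iteration, forcing a reallocation and copy of the whole array each time, which is especially costly on a GPU.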
3 Comments
Amr Ragab on 11 Nov 2013
Sean de Wolski on 11 Nov 2013
Edited: Sean de Wolski on 11 Nov 2013
Do matrix1 and matrix2 already live on the gpu, i.e. are they gpuArrays?
Amr Ragab on 11 Nov 2013