How can I convert this vectorized code into GPU code for maximum speedup?

Answers (2)

I was able to get a marginal speedup with additional vectorization of the mask:
x = sum(I < cat(3, 120, 155, 160), 3) == true;
but otherwise you've done pretty well. You have to wonder, however, why you need to replicate the output on every channel. Why not discard the colour channels if you're working in grayscale?
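Since the full original code isn't shown in the thread, here is a minimal sketch of what moving that mask computation to the GPU might look like. The image file, its uint8 class, and the "zero out the masked pixels" step are all assumptions for illustration; the thresholds are taken from the line above. Requires the Parallel Computing Toolbox and a supported GPU.

```matlab
% Sketch: run the vectorized mask on the GPU.
I = gpuArray(imread('peppers.png'));      % transfer an M-by-N-by-3 uint8 image to GPU memory

% Per-channel thresholds, broadcast along the 3rd (channel) dimension
thresh = cat(3, 120, 155, 160);

% Same mask as above: true where the per-channel comparisons sum to 1
x = sum(I < thresh, 3) == true;

% Replicate the mask on every channel and zero those pixels (illustrative step)
I(repmat(x, [1 1 3])) = 0;

result = gather(I);                       % bring the result back to CPU memory
```

Because `I` is a gpuArray, the comparison, `sum`, `repmat`, and indexing all execute on the GPU automatically; only `gather` moves data back.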

2 comments

Tanmay Virnodkar on 20 Apr 2017
Edited: Tanmay Virnodkar on 20 Apr 2017
Actually, I want to show the speedup, i.e. the difference between normal CPU time and GPU time. If I first convert the picture to grayscale and use imtool, the CPU time is also very low, so I am not able to show a speedup. Hence I decided not to discard the R, G, B channels.
Right, but then you're including the cost of replicating data in GPU memory and doing indexing, which is memory-bound and doesn't necessarily show the GPU in a great light.
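One way to make that comparison fair is to time only the computation on each device, excluding the host-to-GPU transfer, using timeit and gputimeit (the latter synchronizes the GPU correctly before and after). The image size below is illustrative; the mask expression is the one from the answer above.

```matlab
% Sketch: compare CPU vs GPU time for the mask computation only
% (assumes Parallel Computing Toolbox; image size is illustrative).
I_cpu  = randi(255, 2000, 3000, 3, 'uint8');
I_gpu  = gpuArray(I_cpu);
thresh = cat(3, 120, 155, 160);

maskFcn = @(I) sum(I < thresh, 3) == true;

tCpu = timeit(@() maskFcn(I_cpu));       % CPU timing
tGpu = gputimeit(@() maskFcn(I_gpu));    % GPU timing, with proper synchronization

fprintf('CPU: %.4f s, GPU: %.4f s, ratio: %.1fx\n', tCpu, tGpu, tCpu/tGpu);
```

Timing this way separates the memory-bound transfer and replication costs from the arithmetic, which is exactly the distinction being discussed here.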


Jan on 18 Apr 2017
The bottlenecks of the code are the darn clear all and the disk access using imwrite. Moving this to the GPU will not help.
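To confirm where the time actually goes, a quick sketch with tic/toc around each stage can help. The file names and the exact stages are assumptions, since the original code isn't shown; the mask line is the one from the other answer. Note that `clear all` also clears JIT-compiled functions, so it should never sit inside the timed region.

```matlab
% Sketch: time each stage separately to find the real bottleneck.
tic; I = imread('peppers.png'); tRead = toc;      % disk read

tic;
x = sum(I < cat(3, 120, 155, 160), 3) == true;    % the vectorized mask
tCompute = toc;

tic; imwrite(x, 'mask.png'); tWrite = toc;        % disk write

fprintf('read %.3f s, compute %.3f s, write %.3f s\n', tRead, tCompute, tWrite);
```

If the read/write times dominate, as suggested here, no amount of GPU acceleration of the compute stage will change the total meaningfully.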
