faster lean bilinear imresize / improved gpuArray/imresize
Hi,
I'm currently processing large numbers of images in a convolutional neural network, and imresize.m is the major bottleneck (a quick search shows a few other people complaining about imresize as well). Digging into the code, roughly 60% of the runtime of this function is overhead from argument checking and extra function calls. So I made a leaner version (see the sketch after the list below), but it requires access to the private imresizemex function. Would it be possible in a future release to, for example:
- create a lean imresize_bilinear function (as attached here)?
- move imresizemex out of the private directory so it can be called directly? (I work on several different servers, often with different MATLAB versions, so copying imresizemex around is not practical for me.)
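For reference, here is a minimal sketch of what I mean by a lean bilinear resize. This is not the attached file: it uses interp2 in plain MATLAB code instead of imresizemex, skips all argument parsing, and applies no antialiasing when shrinking, so results can differ slightly from imresize.

function out = imresize_bilinear_lean(im, outSize)
% Lean bilinear resize: no argument parsing, no method dispatch.
% im      : 2-D or 3-D numeric array
% outSize : [numRows numCols]
[inRows, inCols, nChan] = size(im);
scale = [outSize(1)/inRows, outSize(2)/inCols];

% Map output pixel centers to input coordinates, using the same
% convention as imresize: u = x/scale + 0.5*(1 - 1/scale).
rowCoords = (1:outSize(1))/scale(1) + 0.5*(1 - 1/scale(1));
colCoords = (1:outSize(2))/scale(2) + 0.5*(1 - 1/scale(2));
rowCoords = min(max(rowCoords, 1), inRows);   % clamp to image borders
colCoords = min(max(colCoords, 1), inCols);

[C, R] = meshgrid(colCoords, rowCoords);
out = zeros(outSize(1), outSize(2), nChan);
for k = 1:nChan
    out(:,:,k) = interp2(double(im(:,:,k)), C, R, 'linear');
end
out = cast(out, 'like', im);   % restore the input class
end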
Related:
- Are you working on making gpuArray/imresize support the form out = imresize(im, [numRows numCols])?
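To illustrate the gpuArray question, here is the call I would like to make and the round trip I use in the meantime (the gather/gpuArray workaround is just what I do now, not anything from the documentation; numRows and numCols are example values):

numRows = 480; numCols = 640;   % example target size
gim = gpuArray(im);
% desired syntax:
%   out = imresize(gim, [numRows numCols]);
% interim workaround: resize on the CPU, then push the result back to the GPU
out = gpuArray(imresize(gather(gim), [numRows numCols]));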
Example code is attached. Output:
>> testImresize
original bilinear resize: 2.364755
lean bilinear resize: 0.922745
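The timing comparison is along these lines (a sketch only, not the attached testImresize; the image size and iteration count here are made up):

im = rand(1024, 1024, 3, 'single');
outSize = [480 640];
nIter = 100;

tic;
for i = 1:nIter
    a = imresize(im, outSize, 'bilinear');
end
fprintf('original bilinear resize: %f\n', toc);

tic;
for i = 1:nIter
    b = imresize_bilinear_lean(im, outSize);   % lean sketch above
end
fprintf('lean bilinear resize: %f\n', toc);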
Thanks, Jasper