Timing of certain functions, in this case a convolution.

3 views (last 30 days)
Logan Betts on 20 Apr 2019
Edited: Image Analyst on 20 Apr 2019
I'm having a little trouble with my program at the moment. My professor has asked us to create our own convolution function. What I've written works, but it runs unbelievably slowly, and I'm not sure how efficient this way of doing a convolution is. I've compared the time the built-in conv() takes to my own and the difference is horrid. Any pointers would help me find a better alternative, if there is one.
for i = 1:N                                  % this loop runs N times
    sum = 0;                                 % accumulator for the products (note: shadows the built-in sum function)
    for j = 1:i                              % i products contribute to this output element
        multi = A(1, N-(i-1)+(j-1)) * B(1, j);
        sum = sum + multi;                   % accumulate each product
    end
    final(1, i) = sum;
end
for i = 1:N                                  % second half of the output
    sum = 0;
    for j = 1:(N-(i-1))
        multi = A(1, N-(j-1)-(i-1)) * B(N-(j-1));
        sum = sum + multi;
    end
    final(1, N+i-1) = sum;
end
The first nested for loop does the convolution up to the Nth element, then the second finishes until the matrices are exhausted. A and B are similarly sized matrices, N is their size, sum is a placeholder, and final holds the final convolution. I'm not sure what part of this is causing it to run so slowly. As I said, it gives the correct answer every time; it just doesn't run with any speed.
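For reference, here is a minimal sketch of the same two loops with final preallocated and each inner loop collapsed to a dot product, assuming A and B are 1-by-N row vectors as the A(1, ...) indexing above suggests; growing final inside the loop and running scalar inner loops are common sources of this kind of slowdown:
% Sketch only: reproduces the same indexing as the loops above,
% but preallocates the output and uses dot products instead of inner loops.
final = zeros(1, 2*N - 1);                   % preallocate the full output length
for i = 1:N                                  % first half of the output
    final(1, i) = dot(A(1, N-i+1:N), B(1, 1:i));
end
for i = 1:N                                  % second half (element N is recomputed with the same value)
    final(1, N+i-1) = dot(A(1, 1:N-i+1), B(1, i:N));
end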
Thank you for your time.

Accepted Answer

Image Analyst on 20 Apr 2019
Edited: Image Analyst on 20 Apr 2019
See attached manual convolution.
Yes, it will be very slow. This is because you access every single element in the entire window every time the window slides over by one position. Optimized methods exploit the fact that, for a large window that is N columns wide, the middle N-2 columns don't change as the window slides; only the first column and the last column do. So when you sum up the products of the matrix and the kernel, you only need to add in the weighted sum of the new column (which just entered the window) and subtract the weighted sum of the old column (which just left the window).
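As a concrete illustration of that sliding-window update, here is a minimal 1-D sketch for the special case of a uniform (box) kernel, where the running-sum trick applies exactly; x, w and y are just example names:
x = rand(1, 1e6);                       % example signal
w = 25;                                 % window length (uniform kernel, ones(1, w))
y = zeros(1, numel(x) - w + 1);         % preallocated output (valid part only)
y(1) = sum(x(1:w));                     % first window summed in full
for k = 2:numel(y)
    % slide by one: add the sample that entered, drop the one that left
    y(k) = y(k-1) + x(k+w-1) - x(k-1);
end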
Another method checks whether the kernel is separable, i.e., an x function times a y function. If it is, you can use the method of separable kernels and do two passes of convolution with 1-D kernels (one along the columns, one along the rows) instead of one 2-D convolution with a 2-D kernel, and that saves a lot of time.
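For example, here is a sketch with illustrative values, using a Gaussian because it separates into an outer product of 1-D kernels; conv2 accepts the two 1-D factors directly:
g = exp(-(-3:3).^2 / 2);   g = g / sum(g);   % 1-D Gaussian (sigma = 1), example kernel
K = g' * g;                                  % equivalent 2-D kernel: outer product of the 1-D factors
img = rand(512);                             % example image
C_full = conv2(img, K, 'same');              % one pass with the 2-D kernel
C_sep  = conv2(g', g, img, 'same');          % column pass with g', then row pass with g
max(abs(C_full(:) - C_sep(:)))               % the two results agree to round-off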
So there are a number of steps you can take to optimize it, each one speeding it up, until you arrive at a highly optimized algorithm that's about as fast as it can be, like the one I'm sure is already built into MATLAB.
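One way to quantify each step is to wrap every version in a function and compare them with timeit; the myConvolution name below is just a hypothetical stand-in for the loop version posted above:
A = rand(1, 2000);
B = rand(1, 2000);
tBuiltin = timeit(@() conv(A, B));               % built-in, highly optimized
% tManual = timeit(@() myConvolution(A, B));     % hypothetical wrapper around the posted loops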

More Answers (0)
