Running Matrix with large amounts of data
Suppose I have a matrix Ui which has around 500,000 points, and I have code like this:
Ni = [];
Pi = [];
Ni = [100 100 100];
Pi = [100 100 100];
for i = 1:length(Ui)
    u = Ui(1,:);
    [m, n] = size(Ni);
    [p, q] = size(Pi);
    for k = 1:m
        dmin1 = min(sqrt((u(1,1) - Ni(k,1))^2 + (u(1,2) - Ni(k,2))^2));
    end
    for l = 1:p
        dmin2 = min(sqrt((u(1,1) - Pi(l,1))^2 + (u(1,2) - Pi(l,2))^2));
    end
    [indx, d] = rangesearch(Ui(:,[1,2]), u(:,[1,2]), 1.5);
    Vector = cell2mat(indx);
    for j = 1:length(Vector)
        LocalMax = max(Ui(Vector(j),3));
    end
    if LocalMax == u(3)
        if dmin1 > dmin2
            Ni(i,:) = u;
        end
    else
        if dmin1 <= dmin2
            Pi(i,:) = u;
        end
        Ui(1,:) = [];
        if isempty(Ui)
            break;
        end
    end
end
display(Pi);
When I tried it with Ui having 300 points, it ran really fast. However, with 500,000 points it runs very slowly and takes more than 3 hours. Is there a way to make the code run faster when it has 500,000 points?
Accepted Answer
Jan on 5 Feb 2021
Edited: Jan on 5 Feb 2021
Please mention the dimensions of Ui. "300 points" does not specify both dimensions of a matrix. It helps us to help you if you post some working input data.
I do not see the purpose of:
for k=1:m
dmin1 = min(sqrt((u(1,1) - Ni(k,1))^2 + (u(1,2) - Ni(k,2))^2));
end
The argument of min() is a scalar, so the result is in every case:
dmin1 = sqrt((u(1,1) - Ni(m,1))^2 + (u(1,2) - Ni(m,2))^2);
The same happens for dmin2 and LocalMax. Let me guess that instead of the loops you want:
dmin1 = min(sqrt((u(1,1) - Ni(:,1)).^2 + (u(1,2) - Ni(:,2)).^2));
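Presumably the same vectorization is wanted for dmin2 and LocalMax as well; again only a guess, since I cannot run the code:
dmin2    = min(sqrt((u(1,1) - Pi(:,1)).^2 + (u(1,2) - Pi(:,2)).^2));
LocalMax = max(Ui(Vector, 3));   % maximum of column 3 over all found neighbours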
Letting an array shrink iteratively needs a lot of resources: Ui(1,:) = []. If you do this on a [500'000 x 3] array, this allocates about 3 TB of RAM successively. Ni and Pi are growing iteratively, which suffers from the same problem. I cannot run your code, and due to the useless for-loops I assume it does not work correctly at all, but the speed can be improved by changing:
Ni=[100 100 100];
Pi=[100 100 100];
to
nUi = size(Ui, 1);
Ni = zeros(nUi, 3); % Pre-allocation with maximum size
Pi = zeros(nUi, 3);
Ni(1, :) = [100 100 100];
Pi(1, :) = [100 100 100];
and
u=Ui(1,:);
to
u=Ui(i,:);
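A side effect of the pre-allocation above is that rows of Ni and Pi which are never written stay all zeros. A small sketch of how they could be removed after the loop (assuming that no valid point is exactly [0 0 0]; alternatively count the rows actually used, as in the sketch further below):
Ni = Ni(any(Ni, 2), :);   % keep only rows with at least one non-zero entry
Pi = Pi(any(Pi, 2), :);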
This is a strange construction:
for i = 1:length(Ui)
    ...
    Ui(1,:) = [];
    if isempty(Ui)
        break;
    end
end
The FOR loop evaluates its limits once, when it is entered the first time, so the shrinking Ui does not change the limit. This would be better:
current = 1;
while current <= nUi
    u = Ui(current, :);
    ...
    % Replace Ui(1,:) = []; by:
    current = current + 1;
end
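Putting these suggestions together, the loop might look like the following. This is only a sketch, not a tested solution: it assumes that every row of Ui should be visited exactly once and that matching points should be appended to Ni and Pi, which is why the counters nNi and nPi (not present in the original code) are introduced.
nUi = size(Ui, 1);
Ni  = zeros(nUi, 3);              % pre-allocation with maximum size
Pi  = zeros(nUi, 3);
Ni(1, :) = [100 100 100];
Pi(1, :) = [100 100 100];
nNi = 1;                          % number of rows of Ni / Pi used so far
nPi = 1;
for i = 1:nUi
    u = Ui(i, :);
    % Vectorized minimum distances instead of the scalar loops:
    dmin1 = min(sqrt((u(1) - Ni(1:nNi, 1)).^2 + (u(2) - Ni(1:nNi, 2)).^2));
    dmin2 = min(sqrt((u(1) - Pi(1:nPi, 1)).^2 + (u(2) - Pi(1:nPi, 2)).^2));
    % Neighbours within radius 1.5 and the local maximum of column 3:
    indx     = rangesearch(Ui(:, [1, 2]), u([1, 2]), 1.5);
    LocalMax = max(Ui(indx{1}, 3));
    if LocalMax == u(3)
        if dmin1 > dmin2
            nNi = nNi + 1;
            Ni(nNi, :) = u;
        end
    else
        if dmin1 <= dmin2
            nPi = nPi + 1;
            Pi(nPi, :) = u;
        end
    end
end
Ni = Ni(1:nNi, :);                % crop the unused pre-allocated rows
Pi = Pi(1:nPi, :);
disp(Pi)
Calling rangesearch once for all points before the loop, instead of once per iteration, would probably be even faster.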
2 Comments
Jan on 18 Feb 2021
Do you know the profiler? See:
doc profile
It helps to identify the bottlenecks of the code.
The iterative shrinking or growing of arrays must be avoided for efficient code, but there can also be other constructs which decrease the speed.
500'000 rows does not sound like a large array. Use the profiler to find the section which takes the most time. Then post this piece of code here with some working inputs, e.g. created by RAND.
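For example, a minimal profiling session could look like this; the RAND line is only a placeholder for real input data, and the comment marks where the code to be measured belongs:
Ui = rand(500000, 3) * 100;   % synthetic test data, replace by the real points
profile on
% ... run the code to be analyzed here ...
profile viewer                % open the report with the time spent per line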