Optimising or removing a for loop: help required to optimise script to save memory

Can anyone please suggest a way to optimise the following for loop? I have a time vector that increments every 1024 samples. The increment is not always consistent, but is close to 0.25 s (I need the real values). The time stamp for each individual sample should really have been recorded, but never was. I now need to reconstruct the time stamps for all of my samples, using the real increment values to constrain them.
The time matrix I am reading is HUGE, so any optimisation will save me runtime and memory. I only have 4 GB of RAM on a standard 3.1 GHz PC.
My bulky script
t1 = datevec(DASI.time(1));
time = [];
for i = 1:length(A)
    t2 = datevec(DASI.time(A(i)+1));
    du = etime(t2, t1);    % elapsed seconds in this block
    dt = du / A(1);        % per-sample time step
    s0   = t1(1,6);        % seconds
    y0   = t1(1,1);        % year
    m0   = t1(1,2);        % month
    d0   = t1(1,3);        % day
    h0   = t1(1,4);        % hour
    min0 = t1(1,5);        % minute
    temp_time = datenum(y0, m0, d0, h0, min0, (s0:dt:s0+du-dt))';
    time = [time; temp_time];    % array grows in every iteration
    t1 = t2;
end
Hope someone can help me optimise this bit of code.
Thanks

Accepted Answer

Jan
Jan on 15 May 2013
Edited: Jan on 15 May 2013
Letting an array grow in each iteration is an absolute DON'T for efficient programming. Pre-allocation saves runtime and memory massively:
time = zeros(length(A), 1);
for i = 1:length(A)
    ...
    time(i) = temp_time;
end
See http://en.wikipedia.org/wiki/Schlemiel_the_Painter%27s_algorithm. When a vector grows iteratively, the following happens:
1st iteration: memory for 1 double is allocated, the value is assigned.
2nd iteration: memory for 2 doubles is allocated, the former value is copied, the new value is assigned.
3rd iteration: memory for 3 doubles is allocated, the former values are copied, the new value is assigned.
4th iteration: ...
You see, for e.g. a 1x1000 vector the computer allocates sum(1:1000) = 500500 doubles, i.e. about 4 MB (a double needs 8 bytes), and copies almost the same amount. For a 1x1e6 vector (no idea what you mean by "huge" exactly), this is already about 4 terabytes! Of course this memory is not allocated all at once, so you do not have to install that much RAM. But even allocating, copying and freeing the memory piece by piece requires a lot of time.
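The quadratic cost is easy to observe directly. A minimal sketch (the vector length 2e4 is an arbitrary choice for the demonstration, not from the question):

```matlab
n = 2e4;

% Growing array: roughly sum(1:n) doubles are allocated and copied
tic;
x = [];
for k = 1:n
    x = [x; k];
end
tGrow = toc;

% Pre-allocated array: n doubles are allocated exactly once
tic;
y = zeros(n, 1);
for k = 1:n
    y(k) = k;
end
tPre = toc;

fprintf('growing: %.3f s   pre-allocated: %.3f s\n', tGrow, tPre);
```

The gap widens rapidly as n grows, because the growing version does O(n^2) work in total while the pre-allocated version does O(n).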
The newest Matlab versions try to reduce this effect, most likely by allocating larger chunks when iterative growth is detected. But a clean pre-allocation is still the best solution.
[EDITED] Sorry, I had overlooked that you do not append one element per iteration, but vectors of different lengths.
Ok, then you need a further step. Either run the loop once without storing the values, just to determine the final length, and then repeat it with a proper pre-allocation, storing the results. Despite nearly doubling the computation, this is still much faster than growing the array.
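A sketch of that two-pass idea applied to the loop from the question (assuming A, DASI.time and the dt computation exactly as posted):

```matlab
% Pass 1: count how many time stamps each block will contribute
nPer = zeros(length(A), 1);
t1 = datevec(DASI.time(1));
for i = 1:length(A)
    t2 = datevec(DASI.time(A(i)+1));
    du = etime(t2, t1);
    dt = du / A(1);
    nPer(i) = numel(t1(6):dt:t1(6)+du-dt);   % length only, nothing stored
    t1 = t2;
end

% Pass 2: pre-allocate once, then fill contiguous slices
time = zeros(sum(nPer), 1);
pos = 1;
t1 = datevec(DASI.time(1));
for i = 1:length(A)
    t2 = datevec(DASI.time(A(i)+1));
    du = etime(t2, t1);
    dt = du / A(1);
    temp_time = datenum(t1(1), t1(2), t1(3), t1(4), t1(5), ...
                        (t1(6):dt:t1(6)+du-dt))';
    time(pos:pos+nPer(i)-1) = temp_time;
    pos = pos + nPer(i);
    t1 = t2;
end
```

The date arithmetic runs twice, but no memory is ever reallocated or copied, which dominates the runtime for large inputs.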
Or collect the values in a cell at first:
timeC = cell(length(A), 1);
for i = 1:length(A)
    ...
    timeC{i} = temp_time;
end
time = cat(1, timeC{:});
% Or much faster:
% time = Cell2Vec(timeC);
This uses http://www.mathworks.com/matlabcentral/fileexchange/28916-cell2vec. It seems that cat does not pre-allocate optimally either, while Cell2Vec calculates the required memory first.
Finally, Matlab's date & time functions are very smart and powerful, and in consequence slow. The contents of DASI.time are not clear, but I assume you can omit the DATEVEC, ETIME and DATENUM conversions if you exploit the fact that Matlab's serial date number stores the number of days as the integer part and the fraction of a day (seconds divided by 86400) as the fractional part. E.g. instead of ETIME you can simply subtract two serial date numbers and multiply by 86400 to get the difference in seconds.
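A sketch of that idea, assuming DASI.time already holds serial date numbers (which the DATEVEC calls in the question imply); it also collects the blocks in a cell as suggested above:

```matlab
t1 = DASI.time(1);
timeC = cell(length(A), 1);
for i = 1:length(A)
    t2 = DASI.time(A(i)+1);
    du = (t2 - t1) * 86400;   % elapsed seconds; replaces DATEVEC + ETIME
    dt = du / A(1);           % per-sample step in seconds
    % Build the stamps directly as serial date numbers; no DATENUM needed:
    timeC{i} = t1 + (0:dt:du-dt).' / 86400;
    t1 = t2;
end
time = cat(1, timeC{:});
```

Staying in the numeric format avoids one expensive conversion per iteration in each direction, on top of the pre-allocation savings.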
1 comment
Bedanta
Bedanta on 15 May 2013
Thanks a lot Jan,
I did not realise how inefficient my previous attempt was until I tried your tip! Infinitely faster at getting my result.


More Answers (0)

