A follow-up: I rewrote my first, straightforward approach, which took ages to compute:
function groups = f2(trig, data)
% approach with for-loop
len = length(trig);
groups = {};
oldTrigger = 0;
ngrp = 0;
for n = 1:len
    % get trigger value
    newTrigger = trig(n);
    % rising edge
    if (1 == newTrigger) && (0 == oldTrigger)
        % increment group index
        ngrp = ngrp + 1;
        % initialize new group
        curGroup = [];
        % reset sample index
        i = 1;
    end
    % trigger high
    if (1 == newTrigger)
        % add sample to group
        curGroup(i,1) = data(n); %#ok<AGROW>
        % increment sample index
        i = i + 1;
    end
    % falling edge or last sample
    % (the ngrp > 0 check guards against a trigger that never rises)
    if ((0 == newTrigger) && (1 == oldTrigger)) || ((n == len) && (ngrp > 0))
        % add group to output
        groups{ngrp,1} = curGroup; %#ok<AGROW>
    end
    % remember trigger value
    oldTrigger = newTrigger;
end
end
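As a quick sanity check, here is a toy call (values made up just for illustration):

trig = [0;1;1;0;0;1;0];
data = (1:7).';
g = f2(trig, data);   % g is {[2;3]; [6]}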
I suspected the two array-growing statements (the ones flagged with %#ok<AGROW>) to be the culprit behind the bad performance; see the P.S. below for a preallocated sketch. Well, I then used a short benchmark:
A = rand(1e6,1);        % random data samples
B = rand(1e6,1) > 0.5;  % random logical trigger signal
t0 = tic;
g = f(B,A);
t1 = toc(t0);
disp(numel(g));
fprintf('f took %gs\n', t1);
t0 = tic;
g = f2(B,A);
t1 = toc(t0);
disp(numel(g));
fprintf('f2 took %gs\n', t1);
This yielded the following output:
249706
f took 5.17879s
249707
f2 took 1.1579s
This surprised me. I must have done something differently in my original approach... Also, thanks to the simple comparison of the number of output elements, I found that f does not include the first group if the trigger starts high (I had not noticed that, because in my application the trigger data always starts with 0).
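For reference, a minimal input that shows the difference (again toy values I picked for illustration):

trig = [1;1;0;1;0];   % trigger is already high at the first sample
data = (1:5).';
g = f2(trig, data);   % f2 returns {[1;2]; [4]}; f misses the leading group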
Well, now I'm even more curious whether you have ideas for improvement!
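One direction I have been toying with myself is dropping the loop entirely and slicing the groups straight out of the edge positions. This is only a sketch (f3 is a name I made up, and I have not verified it against f2 on all edge cases), but padding the trigger with zeros should also pick up a group that starts high:

function groups = f3(trig, data)
% loop-free sketch: locate group boundaries via diff, then slice
d = diff([0; trig(:) ~= 0; 0]);
% rising edges mark the first sample of each group,
% falling edges mark one past the last sample
starts = find(d == 1);
stops  = find(d == -1) - 1;
groups = arrayfun(@(a,b) data(a:b), starts, stops, ...
    'UniformOutput', false);
end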
Cheers
Manuel
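P.S. For completeness, here is what I mean by avoiding the two growing arrays: the same loop as f2, but with generously preallocated buffers that are trimmed afterwards. Again only a sketch (f2_prealloc is a made-up name, and I have not benchmarked it):

function groups = f2_prealloc(trig, data)
% same logic as f2, but all buffers are allocated once up front
len = length(trig);
groups = cell(len, 1);      % upper bound: at most len groups
curGroup = zeros(len, 1);   % upper bound: a group has at most len samples
oldTrigger = 0;
ngrp = 0;
i = 1;
for n = 1:len
    newTrigger = trig(n);
    if (1 == newTrigger) && (0 == oldTrigger)
        % rising edge: start a new group
        ngrp = ngrp + 1;
        i = 1;
    end
    if (1 == newTrigger)
        curGroup(i) = data(n);   % no growth: the slot already exists
        i = i + 1;
    end
    if ((0 == newTrigger) && (1 == oldTrigger)) || ((n == len) && (ngrp > 0))
        % falling edge or last sample: copy out only the filled part
        groups{ngrp} = curGroup(1:i-1);
    end
    oldTrigger = newTrigger;
end
groups = groups(1:ngrp);    % trim unused cells
end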