Grouping time stamp data into intervals

Systematically Neural on 16 Oct 2018
Edited: dpb on 18 Oct 2018
I have a set of time stamps with a sampling frequency of 1000 Hz (one sample every 0.001 s). I want any run of time stamps that is continuous for more than 10 seconds (10,000 data points) to be treated as an interval. Is this possible? I have attached example data of time stamps.
  8 comments
Systematically Neural on 17 Oct 2018
Edited: Systematically Neural on 17 Oct 2018
Sorry, I did not add my examples, but you are very correct: I was attempting to use diff and find. I think the issue I was running into was the FP values not being represented as precisely as I was hoping, and I am still having problems, even though the answer below is doing a better job than my attempts. dpb, do you think the TMW toolbox would treat FP more precisely, since it treats data points as time points?
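For reference, a diff/find approach along the lines mentioned here might look like this sketch (the variable t, assumed to be a row vector, and the 0.0011 tolerance are assumptions, not code from the thread):
dt = diff(t);                           % gaps between consecutive time stamps
brk = find(dt > 0.0011);                % indices where continuity breaks (FP tolerance)
starts = [1, brk+1];                    % first sample of each continuous run
stops  = [brk, numel(t)];               % last sample of each run
long   = (stops - starts + 1) >= 10000; % runs of at least 10 s at 1 kHz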
dpb on 17 Oct 2018
Edited: dpb on 18 Oct 2018
I don't have any idea what you mean by "the TMW toolbox", but FP precision and rounding are inherent in FP storage by definition.
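A minimal illustration of that inherent rounding (standard MATLAB, nothing specific to this thread):
0.1 + 0.2 == 0.3      % returns logical 0 (false)
(0.1 + 0.2) - 0.3     % about 5.55e-17, not exactly zero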
There are ways to minimize its magnitude. One of the prime examples of the differences possible in seemingly innocuous calculations is with data such as you show, where one can do something like
dt=0.001;
t=[0:dt:10]; % generate a time vector for 10 sec @ 1 kHz
as compared to
t=linspace(0,10,1000*10+1); % same nominal vector via linspace
Actual numerics by example--
dt=0.001;
t1=[0:dt:10];                                  % colon
t2=linspace(0,10,1000*10+1);                   % linspace
t3(1)=0; for i=2:10001, t3(i)=t3(i-1)+dt; end  % running summation
t4(1)=0; for i=2:10001, t4(i)=(i-1)*dt; end    % index*dt product
Let's compare...
nnz(method1==method2)  (out of 10001 elements)

            linspace   summation   product
colon           8658          19      9032
linspace                      17      8663
summation                             18
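A sketch of how the agreement counts above can be reproduced (assumes the t1..t4 vectors from the example above are in the workspace):
nnz(t1==t2)   % colon vs linspace
nnz(t1==t3)   % colon vs summation
nnz(t1==t4)   % colon vs product
nnz(t2==t3)   % linspace vs summation
nnz(t2==t4)   % linspace vs product
nnz(t3==t4)   % summation vs product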
The following shows the magnitude of the difference at the endpoint, in units of FP precision (eps) at the value. All methods actually round correctly at the end except for the simplistic addition, which compounds the rounding error in dt at every step. For the others, the differences are in the intermediate values, in just how the rounding error is distributed.
>> [[t1(end);t2(end);t3(end);t4(end)]-10]/eps(10)
ans =
     0
     0
   -58
     0
What the correlation-like table shows is that colon is most like the summation, except that it "fixes up" the end point, while linspace is more like, but not exactly the same as, the product form.
What the last result shows is that one may need a fairly large tolerance by the end of the series if the timestamps were computed in one fashion but the comparison uses time values calculated by another technique.
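For example, a tolerance-based comparison along these lines sidesteps exact equality (a minimal sketch; the factor of 100 is an arbitrary assumption):
tol = 100*eps(10);            % generous tolerance near t = 10 s
isClose = abs(t1 - t3) < tol; % elementwise "close enough" test instead of ==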
The other issue is the precision of the data file: whether full-precision values are stored and read back, or whether rounding to a text format introduces errors of its own.
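As an illustration, writing with the '%.17g' format preserves enough digits for a double to round-trip through text (a sketch; the file name is hypothetical):
fid = fopen('timestamps.txt','w');
fprintf(fid, '%.17g\n', t1);    % 17 significant digits round-trips a double
fclose(fid);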


Accepted Answer

jonas on 16 Oct 2018
Edited: jonas on 16 Oct 2018
I'm a little bit confused as to what you want to do with those segments, but this code should find them for you. I'm not entirely sure it works, but the plot looks promising. As dpb already pointed out, it would be a good idea to provide a smaller example.
t = times_1;
dt = [0 diff(t)];
% group samples having a time diff less than 0.001 (+ tolerance for FP)
mask = dt < 0.0011;
d = double(~mask);   % 1 marks the start of each new segment
d = cumsum(d);       % running segment label for every sample
% count the samples in each segment
[counts,val] = histcounts(d,'binwidth',1);
% keep segments longer than 10 s (10,000 samples at 1 kHz)
v = val(counts > 10000);
sv = zeros(size(d));
for j = 1:length(v)
    sv = sv + (d == v(j));
end
sv = logical(sv);
sv(~mask) = 0;
% plot
plot(sv,'b','linewidth',2); hold on
plot(d)
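A possible follow-up sketch for turning the sv mask into actual interval start/stop times (assumes sv and t from the code above; the edge-detection idiom is standard):
edges  = diff([0 sv(:).' 0]);        % +1 at interval starts, -1 after ends
starts = t(edges == 1);              % first time stamp of each interval
stops  = t(find(edges == -1) - 1);   % last time stamp of each interval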

More Answers (0)
