Appropriate Weights in Exponential Fit

Hey Everyone,
For each t_i, I have a datapoint y_i. sigma_i is the 1-sigma error of each respective point. Imagine array x contains all t_i, y contains all y_i, and E contains all sigma_i.
Now y varies with t in an exponential (decay) manner. My confusion lies in defining the "weight" parameter in the exponential fit. I read online that for each datapoint a weight w_i = 1/sigma_i^2 can be defined (called inverse-variance weighting). Is this the correct approach?
I am confused because in my case (due to the statistical nature of decay) sigma_i generally decreases as t_i increases, which essentially means giving more weight to the rightmost points, which have large relative (sigma_i/y_i) error. My first thought was that the starting data points deserve more weight, as they have less relative error (sigma_i/y_i).
I'm just confused about how to implement this correctly in MATLAB, using the weight parameter to fit x vs y with errors in E.
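For concreteness, here is a minimal sketch of inverse-variance weighting on synthetic decay data. It is shown in Python with scipy rather than MATLAB (in MATLAB the analogue would be passing w = 1./E.^2 via the 'Weights' option of fit); the data, model, and parameter values are all made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-ins for the x, y, E arrays described above
rng = np.random.default_rng(0)
x = np.linspace(0, 20, 21)
y_true = 5.0 * np.exp(-0.3 * x)
E = 0.05 * np.sqrt(y_true + 0.01)   # 1-sigma errors that shrink as y decays
y = y_true + rng.normal(0.0, E)

def model(x, a, b):
    return a * np.exp(b * x)

# Inverse-variance weighting: passing sigma=E with absolute_sigma=True makes
# curve_fit minimize sum(((y - model)/E)**2), i.e. weights w_i = 1/E_i**2
popt, pcov = curve_fit(model, x, y, p0=(5.0, -0.3), sigma=E, absolute_sigma=True)
print(popt)   # fitted (a, b), close to the generating values (5.0, -0.3)
```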
Thanks,

Answers (1)

J. Alex Lee on 25 Sep 2020


The choice of weighting is unrelated to the computing platform...
If you want to use an inverse-variance weighting strategy but, to your point, not overweight the late points, you could scale the errors by their means first, i.e. use the relative errors sigma_i/y_i, which gives weights w_i = (y_i/sigma_i)^2.
But practically, have you just run your fitting with different weightings to see if you get results that differ by amounts that you care about?
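One way to run that comparison is sketched below (Python/scipy for concreteness, with made-up data; the weight vectors map directly onto MATLAB's 'Weights' fit option). It fits the same exponential under three weighting strategies:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(0, 20, 40)
y_true = 5.0 * np.exp(-0.3 * x)
E = 0.01 + 0.02 * y_true            # errors that decrease along the decay
y = y_true + rng.normal(0.0, E)

def model(x, a, b):
    return a * np.exp(b * x)

fits = {}
for name, sigma in [
    ("unweighted", None),            # w_i = 1
    ("inverse variance", E),         # w_i = 1/E_i^2
    # scaling by the true mean here for illustration; in practice
    # you'd scale by y itself or a smoothed estimate of it
    ("relative (scaled by mean)", E / y_true),   # w_i = (y_i/E_i)^2
]:
    popt, _ = curve_fit(model, x, y, p0=(5.0, -0.3), sigma=sigma)
    fits[name] = popt
    print(f"{name:28s} a={popt[0]:.3f}  b={popt[1]:.3f}")
```

With well-behaved data like this, all three strategies land close to the generating parameters; when they diverge meaningfully, that is usually a sign the model (not the weighting) is the issue.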

4 comments

Manish Sharma on 25 Sep 2020
Edited: Manish Sharma on 25 Sep 2020
Yes, the results differ by enough to matter in my case. My confusion is this:
The manual says: "if you only have estimates of the error variable for each data point, it usually suffices to use those estimates in place of the true variance." So basically defining w_i = 1/sigma_i^2?
Which is different from yours and from what I was using earlier?
I'm kind of confused about what to use as the weights.
Here is the reference for the sentence quoted above: https://www.mathworks.com/help/curvefit/least-squares-fitting.html
J. Alex Lee on 25 Sep 2020
Well, ultimately you're just giving a list of numbers, one for each data point, that weights the importance of that data point, so you can really design it any way you want if you have a good idea of how you want to weight.
I thought you were saying that in an exponential decay, your y values near zero have less variance because their means are lower, in which case you'd be weighting those with more importance. You could get around that by using the variance divided by the mean squared, i.e. the squared relative error. I guess, thinking about it more now, you'd hope that that would just push your weights to be more equal -> I would just try not weighting the data and see if you think the fit is good enough.
Do you have example data to share?
Manish Sharma on 3 Oct 2020
Hi J. Alex,
Attached is a simplistic example. In my case, instead of 20, I have many more points, such that changing the weight w_i from 1 to any other value changes my final fitted value significantly. The array E is the error in y.
If you notice, the error is higher for the initial points. But the initial points are more trustworthy, as the source is stronger initially, hence better statistics. As time passes, the source weakens (exponentially), so the error also decreases.
So, if I use w_i = 1/sigma_i^2, I actually give less weight to the initial points, which I think may be wrong. Any take?
Any suggestions why not, or anything else to try?
J. Alex Lee on 3 Oct 2020
As for what is reasonable: looking at (dy/y) gives consistently about 0.015 and doesn't change much, so weighting by (y/dy) (or any of its powers) would be roughly like not weighting at all. Even weighting by 1/dy, your errors are so small and change so little compared to the y values themselves that it really shouldn't matter.
Your problem (why it actually does matter in practice) probably has to do with the model you are trying to fit. Are you fitting
y = a*exp(b*x)
or
y = a*exp(b*x) + c
or something else?
Based on looking at the data and playing around with fits, I assume you are using a model with an offset (c), and you are concerned that the offset value is sensitive to your weighting strategy.
But look at your data: it doesn't actually taper enough to suggest that you need any offset. So of course you can get almost arbitrary values of the offset that still describe your data reasonably well, but you'll get visually indistinguishable fits over the domain of the data (0 < x < 20) no matter how you weight.
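As a sketch of that last point (Python/scipy, synthetic data with no true offset; all values made up for illustration), fitting the offset model under two different weightings gives different c values but nearly identical curves over the data range:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.linspace(0, 20, 40)
y_true = 5.0 * np.exp(-0.3 * x)      # generating model has no offset
E = 0.01 + 0.02 * y_true             # errors that shrink along the decay
y = y_true + rng.normal(0.0, E)

def model(x, a, b, c):
    return a * np.exp(b * x) + c

# Same offset model, two weighting strategies
p_unw, _ = curve_fit(model, x, y, p0=(5.0, -0.3, 0.0))            # w_i = 1
p_ivw, _ = curve_fit(model, x, y, p0=(5.0, -0.3, 0.0), sigma=E)   # w_i = 1/E_i^2

print("unweighted       c =", p_unw[2])
print("inverse variance c =", p_ivw[2])

# The fitted offsets may differ, but the curves over 0 < x < 20 barely do,
# because the data never flatten out enough to pin c down
grid = np.linspace(0, 20, 200)
gap = np.max(np.abs(model(grid, *p_unw) - model(grid, *p_ivw)))
print("max curve difference:", gap)
```

The small curve difference relative to the data scale (~5) is exactly the "visually indistinguishable fits" situation: the weighting shifts c, but not the quality of the fit over the observed domain.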


