Fitting my parametric function to experimental data from Excel.
I am trying to develop a correlation for pressure drop in pipelines based on some data that I have. The data is in an Excel file that I have imported as column vectors into MATLAB. Each vector is a variable that is needed to calculate the values of the "Pi" groups that I have developed. So I have liquid velocity, gas velocity, densities, viscosities, etc.
The function that I want to fit to my data looks like this:
Y = pi1^a * pi2^b * pi3^c * pi4^d * pi5^e
where the values of a, b, c, d, and e need to be determined from the fit,
and pi1 = liquid velocity / gas velocity, etc.
I have experimental values for Y.
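For reference, a minimal sketch of the kind of import and setup described above; the file name and column names are placeholders, not the actual spreadsheet:
T   = readtable('pressure_drop_data.xlsx');   % placeholder file name
pi1 = T.LiquidVelocity ./ T.GasVelocity;      % first Pi group: V_L / V_G
% ... pi2 through pi5 formed the same way from the other columns
Y   = T.PressureDrop;                         % measured values of Y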
How can I go about using the curve fitting options that MATLAB offers?
Any help will be much appreciated.
Thank you :)
0 comments
Answers (2)
dpb
on 27 Jun 2017
Edited: dpb on 27 Jun 2017
Use a custom fittype to define the equation and then fit with it:
pf_eqn = fittype('p1^a * p2^b * p3^c * p4^d * p5^e', ...
                 'independent',{'p1','p2','p3','p4','p5'});
powfit = fit([p1 p2 p3 p4 p5], y(:), pf_eqn);
NB: fit expects x and y as columns and as single arguments in the list; hence the concatenation. (I think you could use an array p as the independent variable and write
pf_eqn = fittype('p(1)^a * p(2)^b * p(3)^c * p(4)^d * p(5)^e', ...
but haven't tried it to be certain. That could make using it a little simpler -- well, let's just try it:
>> g = fittype('p(1)^a * p(2)^b','independent',{'p(1)' 'p(2)'},'dependent',{'y'},'coefficients',{'a','b'})
Error using fittype>iAssertValidVariableNames (line 1058)
The name 'p(1)' is not a valid MATLAB variable name.
...
Well, pooh! That's rude! The parser isn't smart enough to recognize anything except simple variables; no arrays allowed. I don't see any way around this with the fittype object; you could use lsqcurvefit (or lsqnonlin) in lieu thereof; I expect the solution techniques are the same underneath the user-interface trappings.
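Following that suggestion, a hedged sketch of what the lsqcurvefit route could look like for this multiplicative model (requires the Optimization Toolbox; the Pi-group variable names and the all-ones starting point are assumptions, not part of the original answer):
P = [pi1(:) pi2(:) pi3(:) pi4(:) pi5(:)];            % n-by-5 matrix of Pi groups
model = @(c,P) P(:,1).^c(1) .* P(:,2).^c(2) .* P(:,3).^c(3) ...
            .* P(:,4).^c(4) .* P(:,5).^c(5);          % Y = pi1^a * pi2^b * ... * pi5^e
c0 = ones(1,5);                                        % placeholder starting values
coeffs = lsqcurvefit(model, c0, P, Y(:));              % coeffs = [a b c d e]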
4 comments
John D'Errico
on 28 Jun 2017
Edited: John D'Errico on 28 Jun 2017
Oh, don't do that! lol. Like me, it must be those blasted glasses that keep changing, while my eyes stay the same.
John D'Errico
on 27 Jun 2017
Edited: John D'Errico on 28 Jun 2017
Note that these problems are pure hell to fit as nonlinear models. Nonlinear as hell. The search can wander into bad places, producing complex-valued results. The result will be hugely impacted by noisy data. High noise will drive things crazy. YOU NEED GOOD STARTING VALUES. YOU NEED GOOD STARTING VALUES. YOU NEED GOOD STARTING VALUES. If I say it three times, it must be true.
How do you get good starting values? The simple solution is to supply them, based on knowledge of the process. Hey, it is your model, your data!
Lacking sufficient knowledge of the process, there are still options. The best option is to linearize the model, then use linear regression to get the parameters. Thus if...
Y = pi1^a * pi2^b * pi3^c * pi4^d * pi5^e
then taking the log of the model yields:
log(Y) = a*log(pi1) + b*log(pi2) + c*log(pi3) + d*log(pi4) + e*log(pi5)
This model is nice and linear in the unknown parameters. Estimate it as
abcde = log([pi1(:),pi2(:),pi3(:),pi4(:),pi5(:)])\log(Y(:));
abcde will be a vector of length 5. If good starting values exist, then it will have them.
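To make that concrete, here is a minimal sketch of feeding the linearized estimates into a nonlinear solver as starting values; lsqcurvefit is used here as one option, the Pi-group names are assumptions, and the elementwise power relies on implicit expansion (R2016b or later):
P      = [pi1(:) pi2(:) pi3(:) pi4(:) pi5(:)];   % n-by-5 matrix of Pi groups
abcde0 = log(P) \ log(Y(:));                     % linearized estimates of a..e
model  = @(c,P) prod(P.^(c(:)'), 2);             % pi1^a * pi2^b * ... * pi5^e
abcde  = lsqcurvefit(model, abcde0, P, Y(:));    % refined nonlinear estimates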
If your data is noisy, then be careful. The linear regression shown can still generate crapola on nasty data. You may want to use a tool like robustfit then. But remember that logging the data screws around with the error structure. Without seeing your data, it is hard to know.
Finally, don't necessarily be happy with the results of the linear regression that I showed, just accepting them as truth. While that is a good way to model the process SOME of the time, it is potentially a dangerously poor solution if your data varies by more than one order of magnitude. It may still be good enough to give you decent starting values for a nonlinear fit, but there is no assurance that it will make you happy. So look carefully at your data. Look carefully at the results. Look at the residuals from the model's predictions. Do NOT just accept a measure like R^2 here. R^2 can be virtually meaningless on such a problem.
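As one way of doing that check, a small sketch of looking at the residuals directly; it reuses model, abcde, P, and Y from the sketch above, which are assumptions about what is in the workspace:
Yhat = model(abcde, P);                     % predictions from the fitted model
res  = Y(:) - Yhat;                         % residuals
plot(Yhat, res, 'o'), grid on               % look for structure or trends
xlabel('Predicted Y'), ylabel('Residual')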