
Parameter covariance from lsqnonlin when using Jacobian Multiply function?

23 views (last 30 days)
Martin Ryba on 28 Aug 2022
Commented: Bruno Luong on 8 Nov 2023
Hi, I'm trying to estimate the uncertainty in my model parameters for a large NLSQ problem (~2M measurements, ~10k unknowns), so I'm constantly struggling to manage RAM even with 128GB on my machine. I have the algorithm working reasonably well using the Jacobian Multiply function interface, but now when it returns I'd like to estimate the parameter covariance in order to assess the fit and the uncertainties. Normally, if one does:
[xCurrent,Resnorm,FVAL,EXITFLAG,OUTPUT,LAMBDA,JACOB] = lsqnonlin(problem);
xCov = inv(JACOB.'*JACOB)*Resnorm/(numel(FVAL)-numel(xCurrent));
You can get the covariance, or you can just send the Jacobian into nlparci() and get confidence intervals. However, if I try that, I get "out of memory" errors, which is why I'm using the JM functions in the first place. I tried sending in eye(numel(xCurrent)) with flag zero to get the JM function to compute the inner product of the Jacobian, but I hit the same out-of-memory error. Is there a loop method I should use to build up the rows/columns of J.'*J? Should I try a sparse identity matrix? I notice the JACOB returned is a sparse matrix class, but I'm not sure how sparse my native Jacobian really is (in some cases it is moderately sparse, other times it can be pretty dense). Is there a lower-footprint way of building up the covariance?
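One low-footprint sketch of the loop idea (assuming a Jacobian-multiply function jmfun(Jinfo,Y,flag) following lsqnonlin's JacobMult convention, where flag == 0 computes J.'*(J*Y); jmfun, Jinfo, and blockSize are placeholder names, not anything from the original post): instead of passing the full eye(numel(xCurrent)) in one shot, loop over blocks of identity columns so only a slice of J.'*J is formed at a time.
% Accumulate J.'*J block-by-block through the Jacobian-multiply function.
n   = numel(xCurrent);
JtJ = zeros(n);                 % 10k-by-10k doubles ~ 0.8 GB, fits in RAM
blockSize = 256;                % tune to your memory budget
for j0 = 1:blockSize:n
    cols = j0:min(j0+blockSize-1, n);
    E = sparse(cols, 1:numel(cols), 1, n, numel(cols)); % slice of eye(n)
    JtJ(:, cols) = jmfun(Jinfo, E, 0);                  % J.'*(J*E)
end
dof  = numel(FVAL) - n;
xCov = inv(JtJ)*Resnorm/dof;    % same formula as above
The peak memory is then one n-by-blockSize panel plus the n-by-n result, rather than anything sized by the 2M measurements.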
  9 comments
Guillermo Soriano on 8 Nov 2023
Bruno Luong: hi, thank you so much. I tried lscov and it works very nicely; I also tried K = J'/R and that works fine too.
Find attached the part of my algorithm that processes the nonlinear least squares for the target parameters and uses Newton-Raphson to find the root. I need to reduce so many lines of code. Would it be possible to use the function lsnlin? I was trying to use it but without results, mainly due to the covariance.
Once more, thank you so much.
Bruno Luong on 8 Nov 2023
I don't know lsnlin, only lsqnonlin.
As the method for x3 suggests, fitting the model
f(x) = y
to data y with covariance R is equivalent to doing the standard least squares on
U'\f(x) = U'\y
where U is the Cholesky factor (or square root) of R. In other words, use lsqnonlin to solve g(x) = z with
g(x) := U'\f(x)
z := U'\y
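A minimal sketch of that whitening step, assuming the model f, the data y, a symmetric positive-definite covariance R, and an initial guess x0 already exist in your workspace (all four names are assumptions for illustration):
% Whiten the residual with the Cholesky factor of R, then hand the
% whitened residual to lsqnonlin (generalized least squares).
U   = chol(R);                    % R = U.'*U, with U upper triangular
res = @(x) U.' \ (f(x) - y);      % whitened residual U'\(f(x) - y)
x   = lsqnonlin(res, x0);         % minimizes sum(res(x).^2) = (f-y)'*inv(R)*(f-y)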


Answers (3)

Torsten on 28 Aug 2022
Edited: Torsten on 28 Aug 2022
Can you form the (sparse) matrix
B := [JACOB -speye(size(JACOB,1)) ; sparse(size(JACOB,2),size(JACOB,2)) JACOB.']
?
Then you could try to solve
JACOB.'*y_i = e_i
JACOB*x_i = y_i
with e_i the i-th column of eye(numel(xCurrent)), using an iterative method.
If you concatenate the x_i into a matrix, you should get the inverse of JACOB.'*JACOB.
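A matrix-free variant of the same idea (a sketch; jmfun and Jinfo are placeholders for the multiply function and data already passed to lsqnonlin): since J.'*J is symmetric positive (semi)definite, each column of its inverse can be obtained with pcg without ever assembling B:
% Solve (J.'*J)*x_i = e_i iteratively, one column of the inverse at a time.
n    = numel(xCurrent);
afun = @(x) jmfun(Jinfo, x, 0);            % operator: x -> J.'*(J*x)
Hinv = zeros(n);                           % will hold inv(J.'*J)
for i = 1:n
    e_i = zeros(n,1);  e_i(i) = 1;
    Hinv(:,i) = pcg(afun, e_i, 1e-8, 500); % tolerance/maxit to taste
end
This never stores anything larger than n-by-n, at the cost of n iterative solves.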
  1 comment
Martin Ryba on 31 Aug 2022
No, it turns out that when you use the JM functions, the JACOB returned by lsqnonlin is useless: it's just the placeholder sparse matrix you create in the primary fit function. See my update above regarding constructing the real J.



Matt J on 28 Aug 2022
Edited: Matt J on 28 Aug 2022
Some people will compute just the diagonal D of J.'*J, approximating it as a diagonal matrix. This calculation is simply,
D = sum(J.^2)           % column sums of J.^2, i.e. diag(J.'*J)
or
D = (J.^2).'*ones(N,1)  % with N = size(J,1)
Since you have a routine to compute J.'*x, maybe it would be similarly easy for you to calculate (J.^2).'*x.
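If modifying the multiply routine is awkward, here is an alternative sketch that gets the same diagonal through the existing Jacobian-multiply interface alone (jmfun and Jinfo are placeholders as before), using the fact that the i-th diagonal entry of J.'*J equals norm(J*e_i)^2:
% diag(J.'*J) via one forward multiply per parameter.
n = numel(xCurrent);
D = zeros(n,1);
for i = 1:n
    e_i  = zeros(n,1);  e_i(i) = 1;
    Jei  = jmfun(Jinfo, e_i, 1);   % flag > 0 computes J*e_i
    D(i) = sum(Jei.^2);            % (J.'*J)(i,i) = norm(J*e_i)^2
end
Note this trades memory for time: n multiplies against the 2M-row Jacobian.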
  6 comments
Bruno Luong on 28 Aug 2022
The point is that your claim inv(D) ~ inv(H) has no foundation whatsoever.
Matt J on 28 Aug 2022
Edited: Matt J on 28 Aug 2022
It is well-founded if D = H and approximately founded if D ≈ H.



Bruno Luong on 28 Aug 2022
Edited: Bruno Luong on 28 Aug 2022
A low-rank approximation of
H = inv(J'*J);
can be computed using svds:
r = 10;
[~,S,V]=svds(J,r,'smallest');
r = size(S,1); % == 10
s = diag(S);
G = V*spdiags(1./s.^2,0,r,r)*V';
In order to have G ≈ H, you need to choose r so that the partial sum of 1./s.^2, where s holds the singular values of J, is close to the total sum. So depending on the spectrum of your Jacobian, it may or may not be hard to approximate H well with a low value of r.
But the singular vectors V(:,j) probably give plenty of information about the sensitivity of the parameters.
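A small self-contained check of this construction on a toy Jacobian (Jtoy and its sizes are illustrative, picked so the exact inverse is still computable):
% Compare the low-rank approximation G against the exact H at toy size.
rng(0);
Jtoy = randn(200, 30);
H = inv(Jtoy.'*Jtoy);                  % exact inverse, feasible here
r = 10;
[~,S,V] = svds(Jtoy, r, 'smallest');
s = diag(S);
G = V*spdiags(1./s.^2, 0, r, r)*V';
fprintf('relative error = %g\n', norm(G - H, 'fro')/norm(H, 'fro'));
For a random Gaussian Jtoy the singular spectrum is flat, so expect a sizable error with r = 10 of 30; that is exactly the spectrum dependence described above.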

Version: R2022a