lsqnonlin optimization: large condition number of Jacobian matrix at all iterations, but full rank

I use lsqnonlin to solve a non-linear data-fitting problem (fitting parameters of a partial differential equation). The minimization problem is
f = ||g^sim(params) - g^exp||^2
Currently, I optimize 18 parameters and the vectors g^sim and g^exp have 1470 entries, which is a collection of vectors with 147 entries at 10 different times.
The exact parameters (re-identification) are given by
Here is my code:
opts = optimoptions('lsqnonlin', ...
'StepTolerance', 1e-9, ...
'FunctionTolerance', 1e-9, ...
'OptimalityTolerance', 1e-9, ...
'MaxIterations', 250,...
'SpecifyObjectiveGradient', true, ...
'CheckGradients', false);
sol = lsqnonlin(@(params)objFun(params, g_exp), E0, lb, ub, opts);
function [f,J] = objFun(params, g_exp)
g_sim = ...; %call pde solver
J = ...; %call pde solver
f = g_sim - g_exp;
%scaling of f and J is explained later...
Running lsqnonlin with a given start vector, it returned exitflag = 3 after 25 iterations and 26 function calls. The sum of squares is 4.3470e-16 and the first-order optimality measure is 3.2449e-08. Also, the reference parameters from above are perfectly re-identified up to the 6th digit after the decimal point, indicating a perfect fit.
However, what I observed is that the Jacobian matrix
J = d g^sim(params) / d params
at all 25 iterations has a condition number of about
cond(J) ≈ 1e11
while
rank(J) = 18
This is inconvenient since I would like to compute some quality measures like the correlation matrix, which does not really make sense for such high condition numbers (the product J'*J consequently has an even larger condition number).
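For reference, the quality measure I mean is the parameter correlation matrix obtained from the approximate covariance inv(J'*J) at the solution (a standard least-squares sketch, not code from my solver):
```matlab
% Approximate parameter covariance and correlation from the final Jacobian.
% C is the covariance up to the residual variance factor.
C = inv(J'*J);
d = sqrt(diag(C));     % parameter standard deviations (up to the same factor)
R = C ./ (d*d');       % correlation matrix, entries in [-1, 1]
```
When cond(J) is ~1e11, cond(J'*J) is ~1e22, so inv(J'*J) and hence R are numerically meaningless, which is exactly my concern.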
Also, the optimization fails for some other start vectors which indicates that there might be a problem with my Jacobian or the parameters.
Based on my knowledge, such high condition numbers can be traced back to bad scaling. Currently, I scale like this:
%scale residual and jacobian
f = f.*W(:);
J = J.*W(:);
I also tried different scaling approaches which are very common in my field. So I think the issue is not related to scaling the optimization problem.
As you can see, rank(J)=18 at all iterations, which (if I am not mistaken) indicates that the parameters are not linearly dependent on each other.
Having said all that, what might be the reasons for such high condition numbers, and what could I try in order to reduce them? Is my data vector g_exp maybe not appropriate?
I am also wondering why the optimization works very well under these circumstances.

Accepted Answer

Matt J
Matt J on 2 June 2023
Edited: Matt J on 2 June 2023
Your scaling isn't really doing anything meaningful, since all residuals are weighted equally. Really, you are just multiplying your Jacobian by a scalar, and that cannot change its cond() number.
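For example (with an arbitrary matrix of my own choosing, not Matt's original snippet), scaling by any nonzero scalar leaves cond() unchanged:
```matlab
A = [3 1; 2 5; 1 4];   % any full-rank example matrix
cond(A)                % some finite value
cond(1e-3*A)           % identical: cond(c*A) = cond(A) for any scalar c ~= 0
```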
Also, another scaling you need to consider, maybe even more important than the scaling of the residual, is the units of your parameters. This would introduce weights on the columns of your Jacobian, not just the rows.
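As a rough illustration (the numbers here are invented, not from the thread), rescaling a column whose parameter lives in badly chosen units can shrink cond(J) by orders of magnitude:
```matlab
J = [1 2e6; 3 5e6];          % column 2 corresponds to a parameter in "bad" units
cond(J)                      % roughly 3e7
S = diag(1./max(abs(J)));    % scale each column to O(1)
cond(J*S)                    % roughly 34
```
Equivalently, you can reparametrize the model so that all parameters are O(1), which applies this column scaling implicitly.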
Another possibility is simply that your problem is over-parametrized, creating a continuum of non-unique solutions. That's not always a big deal, but may account for why the optimization seems to "fail" (in your words) at different initial points. You haven't described what you're interpreting as a failure, but I'm assuming it means you get unexpected results for the parameters.
As you can see, rank(J)=18 at all iterations, which (if I am not mistaken) indicates that the parameters are not linearly dependent on each other.
Right, but that's often not helpful. You are using the rank() command's default tolerance setting, which may or may not be appropriate for your problem. It's very easy to construct matrices that pass rank()'s default criteria but shouldn't be considered full rank.
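For example, a matrix that reproduces numbers like those quoted above (this is my own reconstruction, not necessarily Matt's matrix):
```matlab
A = [1 0; 0 1e-14];
cond(A)          % ans = 1.0000e+14
rank(A)          % ans = 2  (default tolerance is max(size(A))*eps(norm(A)) ~ 4e-16)
rank(A, 1e-10)   % ans = 1  (with a tolerance suited to the accuracy of the data)
```
So "full rank" under the default tolerance says little when the data and Jacobian are only accurate to far fewer digits than eps.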
  39 comments
Matt J
Matt J on 7 June 2023
Edited: Matt J on 7 June 2023
No, I don't have a reference, but I think it should be obvious. A PDE, like any other equation, is of the form ceq(x) = 0, which is the form that the Optimization Toolbox solvers require for nonlinear equality constraints.
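A minimal sketch of that form with fmincon (the names pdeResidual, costFun, x0, lb, ub are placeholders of mine, not from the thread):
```matlab
% Enforce a discretized PDE, ceq(x) = 0, as a nonlinear equality constraint.
nonlcon = @(x) deal([], pdeResidual(x));   % c = [] (no inequalities), ceq = PDE residual
sol = fmincon(@costFun, x0, [],[],[],[], lb, ub, nonlcon);
```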


More Answers (1)

John D'Errico
John D'Errico on 2 June 2023
Edited: John D'Errico on 2 June 2023
@SA-W - Full rank does NOT mean the parameters are not linearly dependent. With that high a condition number, it often does mean they are VERY nearly dependent, just not exactly so. However, poor choices of units can also cause high condition numbers, and Matt has attempted to tell you exactly that. Listen to what Matt is telling you.
Consider these two matrices:
format long g
small = 1e-11;
A = [1 1+small;1 1]
A = 2×2
     1     1.00000000001
     1     1
B = [1 small/4;1 -small/4]
B = 2×2
     1     2.5e-12
     1    -2.5e-12
cond(A)   % approximately 4e11
cond(B)   % approximately 4e11
Both matrices have almost identically the same (and very large) condition number. However, the A matrix cannot simply be repaired, because its columns are virtually linearly dependent. The B matrix has a problem essentially because of a poor choice of units: the two columns of B are very different in scale and, in fact, are orthogonal to each other. However, the linear algebra will have difficulties in both cases, because the condition number of B is as large as that of A.
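To make the distinction concrete (my own addition, not John's code): a diagonal rescaling of the columns completely cures B, but no such rescaling can cure A:
```matlab
small = 1e-11;
A = [1 1+small; 1 1];
B = [1 small/4; 1 -small/4];
cond(B*diag([1, 4/small]))  % returns 1: rescaling column 2 fully repairs B
cond(A)                     % still ~4e11; no diagonal column scaling helps,
                            % because A's columns are nearly parallel
```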
