How to use lsqnonlin on a function which is mostly linear?

Iain McClatchie
Iain McClatchie on 2 Apr 2019
Edited: Matt J on 3 Apr 2019
I have a (large) least squares minimization problem which is almost linear.
I want to find a set of weights for a small (7x7) kernel which best filters noise from a greyscale image. I have a reference no-noise image with spatial correlation from both lens blur and structure in the scene. I also know how much measurement noise (approximately Gaussian) will be added to each pixel.
My approach is to produce a matrix A with hundreds of thousands of rows (n), one for each pixel, and 49 columns, one for each pixel in the neighborhood of a filtered pixel. A contains the reference no-noise image data. I also have the desired output Y, which has the same hundreds of thousands of rows (one for each pixel) and a single column (the center pixel), also no-noise image data. I'm looking for a 49x1 weight vector x.
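A and Y can be built with something like the following sketch (R is a hypothetical reference-image variable; im2col is from the Image Processing Toolbox):
P = im2col(R, [7 7], 'sliding');  % 49 x n: one column per interior-pixel neighborhood, column-major order
A = P.';                          % n x 49: one row per pixel, one column per neighbor
Y = A(:, 25);                     % desired output: the center of each 7x7 patch (element 25 in column-major order)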
When taking a picture, measurement error with variance V will be added to each pixel.
So I'm looking for the x that minimizes sum((A*x - Y).^2) + n*V*(x'*x).
The trivial filter weights are zero everywhere except the center pixel. These give A*x - Y = 0 but incur the full n*V summed variance across the picture, so I use them as my starting point. I have the optimization working now with lsqnonlin: the optimizer figures out how much weight to put on the surrounding pixels, trading a reduction in variance against an increase in estimation inaccuracy.
Yes, there are linear constraints on the weights (they must sum to 1, and their center of mass must be centered). I actually have fewer than 49 optimization variables that I transform into the weights to enforce these constraints, but (a) that part works, and (b) it's an unnecessary complication to the problem I'd like help with.
The trouble is that my objective function hands lsqnonlin a gigantic residual vector with n+1 terms (the one extra term is sqrt(n*V*(x'*x))), so the Jacobian is enormous (40-odd optimization variables by hundreds of thousands of rows). The optimizer (trust-region-reflective) stops after exceeding the function evaluation limit (97 iterations, 4400 evaluations) while the step size is still largish, so I don't think it has found a good minimum yet. It's also slow, and I'd like to run thousands of these optimizations and expand the number of rows a hundredfold.
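For reference, the current formulation looks roughly like this sketch (optimizing the 49 weights directly, without the constraint transform; A, Y, n, V as above):
res = @(x) [A*x - Y; sqrt(n*V*(x'*x))];   % n image residuals plus one regularization term
x0 = zeros(49,1);  x0(25) = 1;            % trivial starting filter: pass the center pixel through
x = lsqnonlin(res, x0);                   % trust-region-reflective by default
Squaring and summing these residuals reproduces sum((A*x - Y).^2) + n*V*(x'*x).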
I think my approach is very inefficient and there is probably a better way. What is it?

Accepted Answer

Matt J
Matt J on 2 Apr 2019
Edited: Matt J on 3 Apr 2019
I think you should just use quadprog. No Jacobians are involved, and it will be easier to add in your linear constraints later on. Expanding your objective gives x'*(A'*A + n*V*eye(49))*x - 2*(A'*Y)'*x plus a constant, which is quadprog's 0.5*x'*H*x + f'*x form up to a factor of 2, so:
H = A.'*A + n*V*eye(49);   % quadratic term (the overall factor of 2 doesn't move the minimizer)
f = -(Y.'*A).';            % linear term, i.e. -A.'*Y
x = quadprog(H, f);
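Adding your constraints later on (weights sum to 1, center of mass centered) is then just a pair of equality arguments to quadprog. A sketch, assuming the 49 weights are ordered column-major over the 7x7 neighborhood:
[DX, DY] = meshgrid(-3:3, -3:3);          % pixel offsets of the 7x7 neighborhood
Aeq = [ones(1,49); DX(:).'; DY(:).'];     % sum(x) = 1; zero first moment in each direction
beq = [1; 0; 0];
x = quadprog(H, f, [], [], Aeq, beq);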
5 Comments
Matt J
Matt J on 3 Apr 2019
Edited: Matt J on 3 Apr 2019
Yes, I think your derivation is the correct one. I obviously had one or two mistakes.
Iain McClatchie
Iain McClatchie on 3 Apr 2019
Great. Now I believe I understand what I'm doing. Thanks for the help. And you were right, dropping the linear constraints in was a snap, way easier than the transform I was doing with lsqnonlin().

More Answers (0)
