Numerical calculation of Hessian for a very complex problem

Hello,
I need to estimate the numerical Hessian of my very complex, high-dimensional function F. Let me first explain the complexity of my problem. The global minimum of F, which I call x0, has been found by a very sophisticated optimizer. It was a difficult optimization problem because, in any neighborhood of x0, F is undefined at infinitely many points and defined at infinitely many other points. Now, I need to calculate the Hessian of F at x0.
Therefore, traditional numerical approaches produce NaN while estimating the second-order partial derivatives. A workaround is to apply an infinitesimal perturbation to each point in the stencil (for instance, in 1D, to estimate f'(x0) we need f(x0+h) and f(x0-h); if either of these is undefined, I perturb x0+h or x0-h slightly and repeat the calculation until both are defined). But coding this myself would be very time-consuming, so I would first like to see whether I can handle this issue using MATLAB's existing capabilities.
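For reference, here is a minimal 1D sketch of that perturbation idea (the helper name evalPerturbed, the perturbation size delta, and the retry limit maxTries are my own illustrative choices, not a tested implementation):
function d2 = nanRobustSecondDeriv(f, x0, h)
% Central second difference that retries slightly perturbed points when f returns NaN.
fp = evalPerturbed(f, x0 + h);
fm = evalPerturbed(f, x0 - h);
f0 = evalPerturbed(f, x0);
d2 = (fp - 2*f0 + fm)/h^2;
end
function y = evalPerturbed(f, x)
% Re-evaluate at randomly perturbed points until the value is finite (or give up).
delta = 1e-10;   % perturbation size (assumption)
maxTries = 100;  % retry limit (assumption)
y = f(x);
k = 0;
while ~isfinite(y) && k < maxTries
    y = f(x + delta*randn);
    k = k + 1;
end
end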
fmincon can calculate numerical Hessians for my problem (it seems to use a powerful method to overcome the NaN problem). However, the result depends strongly on the inputs I give to fmincon. Consider the following cases:
1) [~,~,~,~,~,~,hess1] = fmincon(F,x0,[],[],[],[],lb,ub,[]);
2) [~,~,~,~,~,~,hess2] = fmincon(F,x0,[],[],[],[],[],[],[]);
3) [~,~,~,~,~,~,hess3] = fmincon(F,x0,[],[],[],[],lb,ub,nonlcon);
I get different results in cases 1 and 2. This does not make sense to me because, as explained in the MATLAB documentation (https://nl.mathworks.com/help/optim/ug/hessian.html), the Hessian calculated by fmincon should not depend on constant constraints. With regard to case 3, I am a bit worried: my constraint is very complex and cannot be defined explicitly in terms of x, so I cannot run case 3. However, I guess case 3 would be the correct formulation. If so, how much error is produced by ignoring the nonlinear inequality constraints?
I also tried 'fminunc', but it produced NaNs and could not estimate the Hessian. I also tried the wonderful MATLAB package DERIVEST, but unfortunately it cannot handle my problem.
So, I am left with either case 1 or case 2. Which one is correct? Or are both incorrect? (I hope not!)
Any idea?
Thanks in advance!
Babak
  3 comments
Mohammad Shojaei Arani on 1 Dec 2022
Hello Matt,
Please note that F is defined at x0. You are right, F is nowhere differentiable. So, traditional approaches fail.
Mohammad Shojaei Arani on 1 Dec 2022
Well, what I mentioned was a bit wrong.
You are right. For a function to be differentiable it must be defined in some neighborhood of x0. My function should also be defined in a very small neighborhood of x0, but this neighborhood might be a super small region about x0 and I do know how small it is.


Accepted Answer

Bruno Luong on 1 Dec 2022 (edited 1 Dec 2022)
Never trust the fmincon hessian. The hessian output is an approximation built solely for determining the descent direction internally. In many setups it returns, by design, only a non-negative "projection" of the hessian, and it roughly approximates the shape of the objective function from the various points visited, so it is quite random and depends on the history of the optimization.
F=@(x) -1.5*(norm(x).^2)
F = function_handle with value:
@(x)-1.5*(norm(x).^2)
x0=[0 0];
lb=[-1 -1];
ub=[1 1];
[~,~,~,~,~,~,hess1] = fmincon(F,x0,[],[],[],[],lb,ub,[]);
Initial point is a local minimum that satisfies the constraints. Optimization completed because at the initial point, the objective function is non-decreasing in feasible directions to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
hess1 % completely wrong, the right Hessian is -eye(2), so the sign is wrong
hess1 = 2×2
     1     0
     0     1
x0=[1 1];
lb=[-1 -1];
ub=[1 1];
[~,~,~,~,~,~,hess2] = fmincon(F,x0,[],[],[],[],lb,ub,[]);
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
hess2 % completely wrong, the right Hessian is -eye(2), so the sign is wrong
hess2 = 2×2
    0.5000   -0.5000
   -0.5000    0.5000
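For comparison, a plain central-difference Hessian computed directly (a sketch added here for illustration; the step size h = 1e-4 is an arbitrary choice) recovers -3*eye(2), the analytic Hessian of this F (it would be -eye(2) with the 0.5 factor mentioned in the PS below):
x0 = [1 1];
n = numel(x0);
h = 1e-4;           % finite-difference step (arbitrary choice)
H = zeros(n);
for i = 1:n
    for j = 1:n
        ei = zeros(1,n); ei(i) = h;
        ej = zeros(1,n); ej(j) = h;
        % central-difference approximation of d2F/(dxi dxj)
        H(i,j) = (F(x0+ei+ej) - F(x0+ei-ej) - F(x0-ei+ej) + F(x0-ei-ej))/(4*h^2);
    end
end
H   % approximately -3*eye(2)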
  3 comments
Bruno Luong on 1 Dec 2022 (edited 1 Dec 2022)
Everything is wrong. When it's wrong, it's wrong; don't try to find correctness in the amplitude, phase, sign, etc. You are simply wasting your time searching for something that isn't there.
PS: yeah, I meant to put 0.5 in front of the objective function, but you get the point.
Mohammad Shojaei Arani on 1 Dec 2022
Thanks Bruno!


More Answers (1)

Matt J on 1 Dec 2022 (edited 1 Dec 2022)
but this neighborhood might be a super small region about x0 and I do know how small it is.
I assume you meant to say you don't know how small it is. In order for finite differencing calculations to be possible, the neighborhood radius will have to be larger than floating point precision, so you might need to do some analysis to see if this is true.
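For instance, a quick check along these lines (a minimal sketch; the probe direction, starting step, and shrink factor are arbitrary illustrative choices) would show how small a finite-difference step can get before F stops returning finite values near x0:
% Probe how small a step h can be before F(x0 +/- h*e) becomes NaN or Inf.
e = zeros(size(x0)); e(1) = 1;      % probe along the first coordinate (assumption)
h = 1e-2;                           % starting step (assumption)
while h > eps(norm(x0))             % stop near floating point precision
    if ~isfinite(F(x0 + h*e)) || ~isfinite(F(x0 - h*e))
        fprintf('F undefined at step h = %g\n', h);
    end
    h = h/10;                       % shrink the step
end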
In any case, you might try John's numerical differentiation toolkit (DERIVEST).
  1 comment
Mohammad Shojaei Arani on 1 Dec 2022
Thanks Matt!
Haha! Yes, you are right. I (and God too, assuming one exists) do not know how small that region is!
Yes. John's DERIVEST package is super wonderful. I am going to try it.

