
What is the necessary and sufficient condition for the fmincon function to find the global optimal solution rather than some other local optimal solution?

6 views (last 30 days)
I'm not quite clear about this question. By experiment I found that fmincon can also find the global optimal solution even though f(x) is nonconvex, which suggests that convexity is just a sufficient condition, not a necessary one. Who can help me with this question? Please also give a proof; a generic argument would be best.

Accepted Answer

Matt J
Matt J on 2 Apr 2024
Edited: Matt J on 2 Apr 2024
fmincon has no awareness of, and so does not consider, the function's global properties, like whether it is convex or not. It only applies local optimality conditions, based on the function's derivatives.
Moreover, it doesn't apply sufficiency conditions, only necessary ones. As an example, clearly x=0 is not a global or local minimum of f(x)=x^3, but fmincon believes it is regardless:
x=fmincon(@(z) z.^3,0)
Initial point is a local minimum that satisfies the constraints. Optimization completed because at the initial point, the objective function is non-decreasing in feasible directions to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 0
This happens because x=0 has zero gradient, and therefore satisfies necessary local conditions, but nothing beyond that.
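The same behavior is easy to reproduce with any derivative-based local solver. Below is a sketch in Python using scipy.optimize.minimize as a stand-in for fmincon's local search (an illustrative analogue, not the same algorithm): starting at x = 0, the gradient of z^3 is zero, so the first-order necessary condition is satisfied and the solver declares success immediately.

```python
from scipy.optimize import minimize

# f(z) = z^3 has zero gradient at z = 0, which satisfies the first-order
# necessary condition even though z = 0 is neither a local nor global minimum.
res = minimize(lambda z: z[0] ** 3, x0=[0.0])

print(res.x)       # stays at (or extremely near) 0
print(res.success) # True: the necessary condition is satisfied at the start
```

Like fmincon, the solver only checks that the gradient is (numerically) zero; it has no way to tell a saddle-like stationary point of z^3 from a true minimum.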
  5 comments
Matt J
Matt J le 2 Avr 2024
Modifié(e) : Matt J le 2 Avr 2024
Yes, an infinite-precision computer with zero error-tolerance settings would converge to the global minimum, assuming it is the only feasible stationary point. Convexity is not needed for that.


More Answers (2)

Torsten
Torsten on 2 Apr 2024
Moved: Torsten on 2 Apr 2024
The answer is: there is no necessary and sufficient condition for "fmincon" to converge to the global optimum.
I think it would be a great breakthrough in mathematics if such a magic condition existed.
  4 comments
世轩 张
世轩 张 on 2 Apr 2024
Exactly correct, but I can't guarantee it's convex. Do you mean that fmincon can work in my situation? Could you give me some proof? Please keep it simple.
Torsten
Torsten on 2 Apr 2024
An algorithm (which assumes infinite precision in all computations involved) can be proven to work, but I've never heard of a proof that a numerical solver will always work under given assumptions.



John D'Errico
John D'Errico on 2 Apr 2024
Edited: John D'Errico on 2 Apr 2024
I'm sorry, but no. Fmincon CANNOT be assured of always finding a global solution. In fact, I can think of many ways it will fail. If your experiments have proven anything, it is just that your experiments were inadequate.
Of course, if your objective function has multiple local solutions, it must fail if your starting values start it in the basin of attraction of a local but not the global solution.
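This basin-of-attraction failure can be demonstrated with a hypothetical double-well function, sketched here in Python with scipy.optimize.minimize standing in for fmincon's local search (an assumption for illustration, not the same algorithm): started in the right-hand basin, the solver returns the shallower local minimum there, never the global one on the left.

```python
from scipy.optimize import minimize

# Double well: the tilt term 0.3*x makes the left well the global minimum
# (near x = -1.03) and the right well only a local minimum (near x = 0.96).
f = lambda x: (x[0] ** 2 - 1) ** 2 + 0.3 * x[0]

res_right = minimize(f, x0=[1.0])   # started in the right (local) basin
res_left  = minimize(f, x0=[-1.0])  # started in the left (global) basin

print(res_right.x, res_right.fun)   # should stay near x = 0.96, f > 0
print(res_left.x,  res_left.fun)    # should reach x near -1.03, lower f
```

A descent method cannot climb over the hump between the wells, so the starting point alone decides which minimum it reports.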
Even in the case of a purely linear objective though, consider the case of a U-shaped domain, where the global solution lies in one of the ends of the U. But now start fmincon in the wrong lobe of the U. Since it cannot climb uphill on such a problem, it will be trapped in that local sub-optimum. Remember, fmincon is a constrained solver, so a U-shaped domain is allowable.
If you decide to remove even that simple failure mode, I can still construct a function that has only one solution, ONE minimizer, on an unbounded domain, and fmincon will provably fail. Consider a function of two variables:
f = @(X) -exp(-(X(1) - 1000)^2 - (X(2) - 1000)^2);
Start it in the vicinity of (X,Y) = (0,0). Near the origin, the gradient of that function, as computed in double precision arithmetic, is an underflow to all zeros. There the function is constant at 0 to far beyond the ability of double precision arithmetic to measure. Of course, the function has a single global minimum at (1000,1000).
f([0,0])
ans = 0
f([1000,1000])
ans = -1
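The underflow claim is easy to verify directly; IEEE double-precision arithmetic behaves the same in any language, so here is a quick check in Python of the same function:

```python
import math

# Same narrow Gaussian well, centered at (1000, 1000).
f = lambda X: -math.exp(-(X[0] - 1000) ** 2 - (X[1] - 1000) ** 2)

# Near the origin the exponent is about -2e6, far below the ~-745 threshold
# at which exp() underflows to exactly zero in double precision.
print(f([0.0, 0.0]))        # -0.0: the exponential has underflowed to zero
print(f([1000.0, 1000.0]))  # -1.0: the true global minimum value
```

Since the function is exactly zero (to the last bit) everywhere near the origin, both the objective and any finite-difference gradient carry no information at all, and no tolerance setting can recover it.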
If the above case is just too difficult, then consider even this simpler example.
f = @(X) 1e-1000*(X(1)^2 + X(2)^2);
So f is a PURELY quadratic function. Apply no constraints at all. So simple that the solution is obvious. The solution lies at the origin. Now, start it at the point (1,1). Again, fmincon will fail when working in double precision arithmetic, REGARDLESS of the tolerance you apply. This is because any computation of f in that vicinity will yield again an underflow to f==0. The gradient in that vicinity is also an underflow to zeros of course.
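In fact the constant 1e-1000 already underflows in the IEEE double format itself, so the stored function is identically zero. A quick check in Python (which uses the same double-precision format as MATLAB):

```python
import sys

# 1e-1000 is far below the smallest subnormal double (~4.9e-324),
# so the literal itself underflows to exactly 0.0 when parsed.
print(1e-1000 == 0.0)        # True
print(sys.float_info.min)    # smallest normal double, about 2.2e-308

# Hence f evaluates to exactly zero everywhere: no gradient to follow.
f = lambda X: 1e-1000 * (X[0] ** 2 + X[1] ** 2)
print(f([1.0, 1.0]) == 0.0)  # True
```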
I can surely give other examples, equally easy to construct, where it is easy to show that fmincon must fail. Functions that are not everywhere differentiable would be another failure mode.
So if you want a necessary AND sufficient condition for convergence of fmincon to a global solution, then you need to work on only smooth objectives simple enough that the domain is purely convex. You need to consider problems only where there are no issues with double precision arithmetic, and the gradient will be always computable, where underflows or overflows are not an issue. And you need to consider problems that lack multiple local solutions.
I don't need to prove any of these examples, because a counter-example is sufficient proof in itself.
  1 comment
世轩 张
世轩 张 on 2 Apr 2024
Thanks for your patience, I understand. My problem won't run into the situation where the derivative gets so close to zero that it fools the computer. So if my problem has no local optimum other than the global optimum, whether it's convex or not, fmincon will find the global optimum, right?


Categories

Find more on Surrogate Optimization in Help Center and File Exchange

Version

R2024a
