Why does the trust-region-reflective algorithm in fmincon require a gradient as input?

16 views (last 30 days)
SM
SM on 25 Feb 2022
Commented: Jan on 27 Jun 2023
Hello
According to the MATLAB fmincon documentation, the 'trust-region-reflective' algorithm requires the objective gradient to be specified.
1. Why does it need the gradient as input?
2. Does the objective function have to be analytical? My objective function is a function handle that computes a scalar as output.
Thanks for your help in advance
  12 comments
Bruno Luong
Bruno Luong on 26 Feb 2022
"I have 120 variables (x) and trying to minimise my objective function (f). The objective function is calculated numerically at each iteration, and I cannot calculate its gradient since there is no explicit formula."
This statement is simply not true. I have minimized a function of 1e6 variables with no explicit formula (solving a PDE), and I can compute the exact gradient via the chain rule and the adjoint equation. The only requirement is that the code computing the objective function uses only differentiable functions and operators (so no min, max, or if) within it.
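For illustration only, here is a minimal sketch of that idea. The objective below is made up (it is not SM's actual function), but its value is computed by code rather than given as a single formula, and the exact gradient still follows from the chain rule:
% Made-up example: objective evaluated by code, exact gradient by chain rule
A  = randn(50, 120);            % fixed data, 120 variables as in the question
x0 = zeros(120, 1);
[f0, g0] = objWithGrad(x0, A);  % value and exact gradient at x0

function [f, g] = objWithGrad(x, A)
    y = A*x;                      % intermediate quantity computed numerically
    f = sum(sin(y).^2);           % scalar objective, no single closed formula
    g = A' * (2*sin(y).*cos(y));  % exact gradient via the chain rule
end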
Walter Roberson
Walter Roberson on 26 Feb 2022
I could imagine situations where no closed-form formula exists, such as a function that involves solving integral equations. For example, ODEs often have a general solution involving the exp() of the sum of the roots of an integral equation.

Sign in to comment.

Accepted Answer

Matt J
Matt J on 25 Feb 2022
Edited: Matt J on 25 Feb 2022
Does the objective function have to be analytical? My objective function is a function handle that computes a scalar as output.
I'm not sure what difference you see between a function being "analytical" and "specified by a function handle". If you mean, does fmincon require the objective function be specified as a single-line mathematical formula, the answer is no.
Why does it need gradient as input?
I believe it is because the trust-region algorithm is intended for problems of large dimension. If so, it would create a pitfall for users if finite difference approximations to the gradient were used by default, since for high dimensional problems, that would be very expensive computationally.
Regardless, even though the trust-region algorithm requires you to supply your own gradient calculation, you are of course free to do that calculation with your own finite difference implementation, or with one from the File Exchange (Example). However, as Torsten says, it might also be that the algorithm is sensitive to errors in the derivative calculations, so I'm not sure how well that would work.
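For concreteness, here is a minimal sketch of how the gradient is supplied to the trust-region-reflective algorithm; the bound-constrained quadratic below is made up purely for illustration:
n    = 120;
Q    = gallery('lehmer', n);                 % SPD matrix, example data only
c    = ones(n, 1);
opts = optimoptions('fmincon', ...
           'Algorithm', 'trust-region-reflective', ...
           'SpecifyObjectiveGradient', true);
lb   = -ones(n, 1);
ub   =  ones(n, 1);                          % TRR accepts bounds (or Aeq) only
x    = fmincon(@(x) quadObj(x, Q, c), zeros(n, 1), ...
               [], [], [], [], lb, ub, [], opts);

function [f, g] = quadObj(x, Q, c)
    f = 0.5*x'*Q*x + c'*x;                   % objective value
    g = Q*x + c;                             % exact gradient (second output)
end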
  1 comment
SM
SM on 26 Feb 2022
Thank you, @Matt J for the clarification. I understand that there is no mathematical obstacle, just programming considerations.

Sign in to comment.

More Answers (2)

Zaikun Zhang
Zaikun Zhang on 26 Feb 2022
What you are looking for, @SM, are Derivative-Free Optimization methods. They do not require derivatives or an analytical formulation of the objective/constraint functions. You may have a look at PDFO (Powell's Derivative-Free Optimization solvers).
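A minimal sketch of a derivative-free call is below; it assumes PDFO is installed and uses its fmincon-like calling sequence as described on the project page, with a made-up objective:
fun = @(x) sum((x - 1).^2) + 0.1*sum(abs(x));   % made-up objective, no gradient
x0  = zeros(120, 1);
lb  = -2*ones(120, 1);
ub  =  2*ones(120, 1);
x   = pdfo(fun, x0, [], [], [], [], lb, ub);    % no derivative information needed

% MATLAB's built-in fminsearch is also derivative-free (unconstrained),
% though Nelder-Mead scales poorly to 120 variables:
x2  = fminsearch(fun, x0);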

Bruno Luong
Bruno Luong on 26 Feb 2022
Edited: Bruno Luong on 26 Feb 2022
I don't think Matt's answer is correct; Torsten's answer is better. The trust-region algorithm must approximate the objective function within the current "trust region". It usually uses the cost function and gradient at many points to build a quadratic approximation. That requires both the cost function and the gradient to be continuous with respect to the points at which they are evaluated. Therefore numerical finite differences with an adaptive step are not suitable, since there is no known method to choose the step continuously with the point.
Therefore the TR algorithm requires the user to supply the "exact" gradient; otherwise it won't converge.
Better still, if the user provides a Hessian-times-vector calculation, it will help TR work even more efficiently.
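If it helps, here is a minimal sketch of supplying curvature information to fmincon's trust-region-reflective algorithm. The quadratic objective is made up; for very large problems, the 'HessianMultiplyFcn' option can supply the Hessian-times-vector product instead of the full matrix:
Q    = gallery('lehmer', 120);               % SPD matrix, example data only
c    = ones(120, 1);
opts = optimoptions('fmincon', ...
           'Algorithm', 'trust-region-reflective', ...
           'SpecifyObjectiveGradient', true, ...
           'HessianFcn', 'objective');       % use the 3rd output as the Hessian
x    = fmincon(@(x) quadObjH(x, Q, c), zeros(120, 1), ...
               [], [], [], [], -ones(120, 1), ones(120, 1), [], opts);

function [f, g, H] = quadObjH(x, Q, c)
    f = 0.5*x'*Q*x + c'*x;                   % value
    g = Q*x + c;                             % exact gradient
    H = Q;                                   % Hessian used by the TR subproblem
end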
  5 comments
Bruno Luong
Bruno Luong on 26 Feb 2022
OK, I missed that.
But my point does not change: if they compute H by finite differences on g, then g must be exact for H to be estimated meaningfully. The bottom line is that TR requires more accuracy on g, and not because of computation time.
Jan
Jan on 27 Jun 2023
Interesting to follow your discussion. We have a recent application of TR to energy minimization ( link ) using the finite element method. It converges reasonably well even with an approximated gradient.

Sign in to comment.

Version

R2021a
