In general, solvers return a local minimum. The result might be a global minimum, but there is no guarantee that it is. This section describes why solvers behave this way, and gives suggestions for ways to search for a global minimum, if needed.
A local minimum of a function is a point where the function value is smaller than at nearby points, but possibly greater than at a distant point.
A global minimum is a point where the function value is smaller than at all other feasible points.
Generally, Optimization Toolbox™ solvers find a local optimum. (This local optimum can be a global optimum.) They find the optimum in the basin of attraction of the starting point. For more information about basins of attraction, see Basins of Attraction.
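The same behavior appears in any local solver, not just the Toolbox. As an illustration, this Python/SciPy sketch uses a made-up objective x⁴ – 4x² + x with two basins of attraction; each run converges to the minimum in the basin of its starting point:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up objective with two local minima: one global (near x ≈ -1.47)
# and one merely local (near x ≈ 1.35).
f = lambda x: x[0]**4 - 4*x[0]**2 + x[0]

# A local solver finds the minimum in the basin of attraction of its start.
res_left = minimize(f, x0=[-2.0])   # starts in the left basin
res_right = minimize(f, x0=[1.0])   # starts in the right basin

print(res_left.x, res_left.fun)     # near the global minimum, x ≈ -1.47
print(res_right.x, res_right.fun)   # near the local minimum, x ≈ 1.35
```

Both results are legitimate local minima; only the starting point determines which one the solver returns.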
There are some exceptions to this general rule.
Linear programming and positive definite quadratic programming problems are convex, with convex feasible regions, so there is only one basin of attraction. Indeed, under certain choices of options, linprog ignores any user-supplied starting point, and quadprog does not require one, though supplying one can sometimes speed a minimization.
Global Optimization Toolbox functions, such as simulannealbnd, attempt to search more than one basin of attraction.
If you need a global optimum, you must find an initial value for your solver in the basin of attraction of a global optimum.
Suggestions for ways to set initial values to search for a global optimum:
Use a regular grid of initial points.
Use random points drawn from a uniform distribution if your problem has all its coordinates bounded. Use points drawn from normal, exponential, or other random distributions if some components are unbounded. The less you know about the location of the global optimum, the more spread-out your random distribution should be. For example, normal distributions rarely sample more than three standard deviations away from their means, but a Cauchy distribution (density 1/(π(1 + x²))) makes hugely disparate samples.
Use identical initial points with added random perturbations on each coordinate; the perturbations can be bounded, normal, exponential, or drawn from another distribution.
The more you know about possible initial points, the more focused and successful your search will be.
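The random-starts suggestion above can be sketched in a few lines. This Python/SciPy example (the objective and bounds are made up for illustration) draws uniform random starting points within bounds, runs a local solver from each, and keeps the best result:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up objective with two basins; the global minimum is near x ≈ -1.47.
f = lambda x: x[0]**4 - 4*x[0]**2 + x[0]

rng = np.random.default_rng(0)
lb, ub = -3.0, 3.0                        # assumed bounds on the variable
starts = rng.uniform(lb, ub, size=20)     # random starts, uniform in [lb, ub]

# Run the local solver from every start; keep the lowest final value.
results = [minimize(f, x0=[x0]) for x0 in starts]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```

With enough starting points, at least one is likely to fall in the basin of attraction of the global minimum, so the best of the local results is the global result.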
If an objective function f(x) is smooth, the vector –∇f(x) points in the direction where f(x) decreases most quickly. The equation of steepest descent, namely

    dx(t)/dt = –∇f(x(t)),

yields a path x(t) that goes to a local minimum as t gets large. Generally, initial values x(0) that are near each other give steepest descent paths that tend to the same minimum point. The basin of attraction for steepest descent is the set of initial values that lead to the same local minimum.
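A steepest descent path can be traced numerically, for example by Euler integration of the steepest descent equation. This Python sketch uses a made-up one-dimensional objective and shows that two nearby starting values in the same basin reach the same local minimum:

```python
# Gradient of the made-up objective f(x) = x^4 - 4x^2 + x,
# which has local minima near x ≈ -1.47 and x ≈ 1.35.
grad = lambda x: 4*x**3 - 8*x + 1

def steepest_descent_path(x0, h=0.01, steps=5000):
    """Euler-integrate dx/dt = -grad f(x) starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - h * grad(x)
    return x

# Nearby starts in the same basin end at the same local minimum.
a = steepest_descent_path(0.5)
b = steepest_descent_path(1.0)
print(a, b)   # both near x ≈ 1.35
```

Starting values to the left of the local maximum of f would instead flow to the other minimum, which is exactly the basin-of-attraction picture in the figures below.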
The following figure shows two one-dimensional minima. The figure shows different basins of attraction with different line styles, and shows directions of steepest descent with arrows. For this and subsequent figures, black dots represent local minima. Every steepest descent path, starting at a point x(0), goes to the black dot in the basin containing x(0).
The following figure shows how steepest descent paths can be more complicated in higher dimensions.
One basin of attraction, showing steepest descent paths from various starting points
The following figure shows even more complicated paths and basins of attraction.
Several basins of attraction
Constraints can break up one basin of attraction into several pieces. For example, consider minimizing y subject to:
y ≥ |x|
y ≥ 5 – 4(x – 2)².
The figure shows the two basins of attraction with the final points.
The steepest descent paths are straight lines down to the constraint boundaries. From the constraint boundaries, the steepest descent paths travel down along the boundaries. The final point is either (0,0) or (11/4,11/4), depending on whether the initial x-value is above or below 2.
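This behavior can be reproduced with any local constrained solver, not only the Toolbox. Here is a sketch using Python's SciPy with the SLSQP method, writing the nonsmooth constraint y ≥ |x| as the equivalent pair y ≥ x and y ≥ –x:

```python
from scipy.optimize import minimize

# Minimize y subject to y >= |x| and y >= 5 - 4*(x - 2)^2.
objective = lambda v: v[1]   # v = (x, y); the objective is just y
constraints = [
    {"type": "ineq", "fun": lambda v: v[1] - v[0]},                   # y >= x
    {"type": "ineq", "fun": lambda v: v[1] + v[0]},                   # y >= -x
    {"type": "ineq", "fun": lambda v: v[1] - (5 - 4*(v[0] - 2)**2)},  # y >= parabola
]

# An initial x-value below 2 leads to (0, 0); above 2 leads to (11/4, 11/4).
res_low = minimize(objective, x0=[0.0, 6.0], method="SLSQP", constraints=constraints)
res_high = minimize(objective, x0=[3.0, 6.0], method="SLSQP", constraints=constraints)
print(res_low.x)    # near (0, 0), the global minimum
print(res_high.x)   # near (2.75, 2.75), a local minimum only
```

Both final points satisfy all the constraints and are local minima; the constraint y ≥ 5 – 4(x – 2)² is what splits the single basin of the unconstrained problem into two pieces.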