fmincon doesn't find a global minimum for a convex function

As I understand it, fmincon is MATLAB's built-in solver for finding a local minimum. If the objective function is convex, there is only one basin, so a local minimum is also the global minimum. However, when I started from different initial points in my experiment, the algorithm returned different minimum values. Does fmincon guarantee convergence to a global minimum for a convex problem? If not, are there other techniques I can use to solve a convex optimization problem as fast as possible? Is running GlobalSearch a good idea for global optimization? Thanks.
Here is the code:
options = optimoptions('fmincon');
problem.options = options;
problem.solver = 'fmincon';
problem.objective = @(x) langBW(x, in_s, in_e, C1, a, p_ul);
problem.Aineq = ones(1,user_num);
problem.bineq = BW2;
problem.nonlcon = @(x) nonlConstr_bw(x,a,p_ul,T1,in_s,in_e);
problem.x0 = ones(user_num,1);
[b_ul,fval] = fmincon(problem);
langBW is the objective function, which is a convex function of x. The code of langBW is as follows:
function fmin = langBW(x, in_s, in_e, C1, a, p_ul)
% Objective: total energy consumption of all users.
if size(x,1) < size(x,2)
    x = x';                                 % force a column vector
end
b_ul = x;                                   % bandwidth allocation
r_ul = b_ul .* log2(1 + a.*p_ul./b_ul);     % per-user transmission rate
fmin = sum((in_s+in_e).*p_ul./r_ul) + sum(C1);
end
nonlConstr_bw computes the nonlinear constraints. It is shown as follows:
function [c,ceq] = nonlConstr_bw(x,a,p_ul,T1,in_s,in_e)
user_num = size(p_ul,1);
if size(x,1) < size(x,2)
    x = x';                                 % force a column vector
end
b_ul = x;                                   % bandwidth allocation
r_ul = b_ul .* log2(1 + a.*p_ul./b_ul);     % per-user transmission rate
c1 = max(in_s./r_ul) + in_e./r_ul - T1;     % latency constraint, c1 <= 0
c = c1;
ceq = zeros(user_num,1);                    % no equality constraints
end
Except x, all other variables are supplied. The problem is that different values of `problem.x0` give different solutions: for example, with `problem.x0 = ones(user_num,1);` the result of `[b_ul,fval] = fmincon(problem);` differs from the result with `problem.x0 = 2*ones(user_num,1);`. That is what confuses me.
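On the GlobalSearch question raised above: a sketch of how the same problem structure could be handed to GlobalSearch or MultiStart (this assumes the Global Optimization Toolbox is installed; the variable names come from the setup code earlier in the question):

```matlab
% Sketch: try the multistart solvers on the problem structure built above.
% GlobalSearch scatters trial points and runs fmincon from promising ones.
gs = GlobalSearch('Display', 'iter');
[b_ul_gs, fval_gs] = run(gs, problem);

% MultiStart with a modest number of random start points is a cheaper
% sanity check: if all runs reach the same fval, the problem is likely
% converging fine and the issue is elsewhere (tolerances, scaling).
ms = MultiStart;
[b_ul_ms, fval_ms] = run(ms, problem, 20);   % 20 start points
```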

2 comments

Could you give an example of a function this is happening for?

The objective is to minimize the total energy consumption of a group of users, and the only variables are the bandwidth allocations, denoted $x_k$ here. The transmission rate is $r_k = x_k \log_2\left(1+\frac{g_k p_k}{x_k}\right)$. The optimization problem is as follows,

$$\min_{x} \; \sum_k \frac{p_k b_k}{r_k} \qquad \text{s.t.} \quad \sum_k x_k \leq X_{\max}$$

The objective and constraint are convex, as the second-order derivatives with respect to $x$ are positive.
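A sketch of that convexity argument (my own derivation, not from the thread): writing $c = g_k p_k$, the per-user rate $r(x) = x\log_2(1+c/x)$ satisfies

$$r''(x) = -\frac{c^2}{\ln 2 \; x\,(x+c)^2} < 0 \quad \text{for } x > 0,$$

so each $r_k$ is concave and positive. Then $1/r_k$ is convex, and the objective, a nonnegative-weighted sum of the $1/r_k$, is convex; the constraint $\sum_k x_k \leq X_{\max}$ is linear.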

When I use fmincon in MATLAB, I need to supply an initial point and then call

[x_opt, f_min] = fmincon(problem)

to get the minimum value and the solution x. However, for the initial point

x0_1 = ones(N,1);

and another initial point

x0_2 = 2*ones(N,1);

The final x_opt and f_min are different.


Answers (2)

Aquatris on 10 May 2018
fmincon is not guaranteed to find a global minimum; it can get stuck in a local minimum due to the nature of its algorithms. You might want to try changing the algorithm used by fmincon (interior-point, trust-region-reflective, SQP) or adjusting some of the optimization parameters if you do not get reasonable results. Evolutionary algorithms tend to find the global minimum more reliably, so you might want to give them a try: MATLAB has a built-in genetic algorithm function if you have the required toolbox, or you can find community-supplied particle swarm optimization code.
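A sketch of switching the algorithm on the problem structure from the question (note that trust-region-reflective does not support nonlinear constraints, so 'sqp' or 'interior-point' are the applicable choices here):

```matlab
% Sketch: re-solve with a different fmincon algorithm.
options = optimoptions('fmincon', ...
    'Algorithm', 'sqp', ...      % alternative: 'interior-point' (the default)
    'Display',   'iter');        % show per-iteration progress
problem.options = options;
[b_ul, fval, exitflag] = fmincon(problem);
```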
Matt J on 10 May 2018 (edited)
Even if the function is convex, there may be multiple global minima, and the one you arrive at will depend on the initial point.
Keep in mind as well that your selection of stopping parameters (TolX, TolFun, etc.) has an influence on where the algorithm quits. Even if your function has a unique minimum, the algorithm will try to converge to it but typically won't reach it exactly, and the final error will be x0-dependent.
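A sketch of tightening those stopping tolerances so that runs from different x0 end closer to the same point (the option names below are the current ones; older MATLAB releases use TolX, TolFun, and TolCon instead):

```matlab
% Sketch: tighten stopping tolerances before re-solving.
options = optimoptions('fmincon', ...
    'StepTolerance',       1e-12, ...   % legacy name: TolX
    'OptimalityTolerance', 1e-10, ...   % legacy name: TolFun
    'ConstraintTolerance', 1e-10);      % legacy name: TolCon
problem.options = options;
[b_ul, fval] = fmincon(problem);
```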

7 comments

For a convex function, the global minima are all expected to have the same minimal value; here, however, the minimum takes different values from different initial points. My problem cannot use the CVX tool because of its DCP ruleset, and I don't know whether there are any other methods to find the global minimum. Can you please give me some advice?
Matt J on 10 May 2018 (edited)
But how do you know that the size of the difference is significant? And how do you know the optimization ran to convergence in each case? What were the exitflags and the first-order optimality measures at the final points?
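Those diagnostics can be requested directly from fmincon's extra output arguments; a sketch using the problem structure from the question:

```matlab
% Sketch: inspect convergence diagnostics for each run.
[b_ul, fval, exitflag, output] = fmincon(problem);
fprintf('exitflag               = %d\n', exitflag);
fprintf('first-order optimality = %g\n', output.firstorderopt);
fprintf('iterations             = %d\n', output.iterations);
```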
The difference in the final minimum can be 60 or more, which is a big value in my optimization.
Also, the final exitflag is always 1.
Matt J on 10 May 2018 (edited)
Well, I think we've gone as far as we can go without actually trying it ourselves. I suggest you attach your b_k, p_k, etc. data in a .mat file so we can all attempt the problem.
Hi Matt, I've just updated the code. Thanks if you can give some advice.
Matt J on 11 May 2018 (edited)
We cannot run the code, because the problem constants (in_s, in_e, etc.) have not been provided.
I will mention, however, that your nonlinear constraints, while they may be convex, are not differentiable due to the max() operation. fmincon assumes differentiability. A differentiable equivalent to your constraint would be:
term1=in_s./r_ul;
term2=in_e./r_ul;
c = term1(:) + term2(:).' - T1;
ceq = [];
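For completeness, a sketch of that snippet assembled into the original function signature (the pairwise form works because max_i(in_s_i/r_i) + in_e_j/r_j <= T1 for all j is equivalent to in_s_i/r_i + in_e_j/r_j <= T1 for every pair (i, j)):

```matlab
function [c, ceq] = nonlConstr_bw(x, a, p_ul, T1, in_s, in_e)
% Differentiable replacement for the max()-based latency constraint.
if size(x,1) < size(x,2)
    x = x';                                 % force a column vector
end
b_ul  = x;                                  % bandwidth allocation
r_ul  = b_ul .* log2(1 + a.*p_ul./b_ul);    % per-user transmission rate
term1 = in_s ./ r_ul;
term2 = in_e ./ r_ul;
c     = term1(:) + term2(:).' - T1;         % user_num-by-user_num matrix, c <= 0
ceq   = [];                                 % no equality constraints
end
```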


