Quadratically constrained linear maximisation problem: issues with fmincon
I would like to solve the following quadratically constrained linear programming problem.

I have written MATLAB (R2019b) code that solves the problem using Gurobi. Now I would like to rewrite the code using fmincon instead of Gurobi, because the optimisation problem will have to be solved thousands of times and Gurobi's academic licence does not allow parallelisation via array jobs on a cluster. However, I'm encountering a huge problem: Gurobi takes 0.23 seconds to return a solution, while fmincon takes 13 seconds. I suspect this is due to mistakes/inefficiencies on my part in providing the gradient, Hessian, etc. Could you kindly help me improve the code below?
Also, Gurobi gives 0.2 as the optimal value, while fmincon gives 0.089. Can the accuracy of fmincon be improved without trying other starting points?
This is my code with Gurobi:
clear
rng default
load matrices
%1) GUROBI
model.A=[Aineq; Aeq];
model.obj=f;
model.modelsense='max';
model.sense=[repmat('<', size(Aineq,1),1); repmat('=', size(Aeq,1),1)];
model.rhs=[bineq; beq];
model.ub=ub;
model.lb=lb;
model.quadcon(1).Qc=Q;
model.quadcon(1).q=q;
model.quadcon(1).rhs=d;
params.outputflag = 0;
result=gurobi(model, params);
max_problem_Gurobi=result.objval;
This is my code with fmincon:
%2) FMINCON
options = optimoptions(@fmincon,'Algorithm','interior-point',...
'SpecifyObjectiveGradient',true,...
'SpecifyConstraintGradient',true,...
'HessianFcn',@(z,lambda)quadhess(z,lambda,Q));
fun = @(z)quadobj(z,f.');
nonlconstr = @(z)quadconstr(z,Q,d);
[~,fval] = fmincon(fun,z0,Aineq,bineq,Aeq,beq,lb,ub,nonlconstr,options);
max_problem_fmincon=-fval;
function [y,yeq,grady,gradyeq] = quadconstr(z,Q,d)
y = z'*Q*z - d;
yeq = []; % no nonlinear equality constraints
if nargout > 2
    grady = 2*Q*z;
end
gradyeq = []; % no nonlinear equality constraints
end

function hess = quadhess(z,lambda,Q) %#ok<INUSL>
hess = 2*lambda.ineqnonlin*Q;
end

function [y,grady] = quadobj(z,f)
y = -f'*z;
if nargout > 1
    grady = -f;
end
end
Further, if I run the fmincon code with the optimal point given by Gurobi as the starting point, I still get 0.089 (instead of 0.2 as in Gurobi). Why?
%3) FMINCON
z0=result_Gurobi.x;
options = optimoptions(@fmincon,'Algorithm','interior-point',...
'SpecifyObjectiveGradient',true,...
'SpecifyConstraintGradient',true,...
'HessianFcn',@(z,lambda)quadhess(z,lambda,Q));
fun = @(z)quadobj(z,f.');
nonlconstr = @(z)quadconstr(z,Q,d);
[~,fval] = fmincon(fun,z0,Aineq,bineq,Aeq,beq,lb,ub,nonlconstr,options);
max_problem_fmincon=-fval;
% quadconstr, quadhess and quadobj are defined as in the listing above.
18 comments
Torsten on 27 Mar 2020
grady = 2*Q*z
Same for hess.
CT on 27 Mar 2020
Torsten on 27 Mar 2020
In quadobj, y = -f'*x and grady = -f since you want to maximize.
CT on 27 Mar 2020
Torsten on 27 Mar 2020
What if you don't specify derivatives?
CT on 27 Mar 2020
Torsten on 27 Mar 2020
Strange.
CT on 27 Mar 2020
CT on 30 Mar 2020
Torsten on 30 Mar 2020
I don't have access to Matlab at the moment.
Did you check whether both solutions (Gurobi and Matlab) satisfy the constraints?
Did you try a different solver in Matlab (e.g. sqp)?
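That feasibility check is quick to do by hand. A minimal sketch, assuming the matrices from the question are in the workspace and `z` is the candidate solution; the tolerance `tol` is an arbitrary choice, not something from the thread:

```matlab
% Sketch: check whether a candidate solution z satisfies all constraints
% of the problem, to within an arbitrary tolerance.
tol = 1e-8;
feas_lin_ineq = all(Aineq*z <= bineq + tol);            % linear inequalities
feas_lin_eq   = all(abs(Aeq*z - beq) <= tol);           % linear equalities
feas_bounds   = all(z >= lb - tol) && all(z <= ub + tol); % simple bounds
feas_quad     = (z'*Q*z <= d + tol);                    % quadratic constraint
if feas_lin_ineq && feas_lin_eq && feas_bounds && feas_quad
    disp('z is feasible to within tol');
else
    disp('z violates at least one constraint');
end
```

Running this on both the Gurobi and the fmincon solutions would show whether one of them is actually infeasible rather than suboptimal.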
CT on 30 Mar 2020
Torsten on 30 Mar 2020
What about the quadratic constraint and the bound constraints?
Matt J on 30 Mar 2020
I suggest you include the results from Gurobi in the matrices.mat file as well, so we can study it.
CT on 30 Mar 2020
"I suspect this is due to my mistakes/inefficiencies in providing the gradient, Hessian, etc."
I don't think that is the reason. I think the main reason is that Gurobi apparently has specific mechanisms for handling quadratic constraints, as suggested by this segment of code.
model.quadcon(1).Qc=Q;
model.quadcon(1).q=q;
model.quadcon(1).rhs=d;
Conversely, fmincon has no way of distinguishing quadratic constraints from other more general nonlinear constraints and giving them special handling. It handles all nonlinear constraints in the same way.
That said, the following implementation of your functions would be slightly more efficient.
function [y,yeq,grady,gradyeq] = quadconstr(z,Q,d)
Qz = Q*z;                % compute Q*z once and reuse it
y = z'*Qz - d;
yeq = []; % no nonlinear equality constraints
if nargout > 2
    grady = 2*Qz;
    gradyeq = []; % no nonlinear equality constraints
end
end
function [y,grady] = quadobj(z,f)
grady = -f;
y = grady.'*z;
end
CT on 31 Mar 2020
Answers (1)
Matt J on 30 Mar 2020
Well, it would be interesting to know what algorithm Gurobi uses, but the difference in objective values appears to be a matter of tolerances:
options = optimoptions(@fmincon,'Algorithm','interior-point',...
'SpecifyObjectiveGradient',true,...
'SpecifyConstraintGradient',true,...
'HessianFcn',@(z,lambda)quadhess(z,lambda,Q),...
'StepTolerance',1e-30,'OptimalityTolerance',1e-10);
fun = @(z)quadobj(z,f);
nonlconstr = @(z)quadconstr(z,Q,d);
tic;
[~,fval] = fmincon(fun,z0(:),Aineq,bineq,Aeq,beq,lb,ub,nonlconstr,options);
toc
max_problem_fmincon=-fval
max_problem_fmincon =
0.2000
10 comments
Maybe transform the problem into a space where Q is diagonal. Also, maybe impose a quadratic equality constraint instead of an inequality. It is easy to verify in advance whether the solution satisfies the quadratic constraint with strict inequality: the optimality conditions in that case are the same as when the quadratic constraint is absent, leaving only the linear constraints. You can therefore use linprog and see whether the solution it gives happens to satisfy the quadratic constraint.
CT
on 30 Mar 2020
1) Your Q is symmetric with eigendecomposition Q = R*D*R.', so if we make the change of variables x = R.'*z, the problem will have the same form in terms of x (linear objective and constraints), and the quadratic constraint will become x.'*D*x <= d.
Since D is diagonal, this may be simpler for an iterative solver to satisfy.
2) Correct.
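A sketch of that change of variables, assuming Q is symmetric so that eig returns an orthogonal R with Q = R*D*R.':

```matlab
% Sketch: diagonalise the quadratic constraint via x = R.'*z (so z = R*x).
[R,D] = eig(Q);       % for symmetric Q: Q = R*D*R.' with R orthogonal
f_x     = R.'*f;      % objective: f.'*z = (R.'*f).'*x
Aineq_x = Aineq*R;    % linear constraints transform as A*z = (A*R)*x
Aeq_x   = Aeq*R;
% The quadratic constraint z.'*Q*z <= d becomes x.'*D*x <= d, D diagonal.
% Caveat: the simple bounds lb <= z <= ub become general linear
% constraints lb <= R*x <= ub in the new variables.
```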
Matt J
on 30 Mar 2020
Maybe transform the problem into a space where Q is diagonal.
On second thought, that will probably ruin the sparsity of your linear constraints.
CT
on 30 Mar 2020
Matt J
on 30 Mar 2020
Yes, I think the sparsity is important for speed.
CT
on 31 Mar 2020
Matt J
on 31 Mar 2020
And does the problem data for the thousands of problem instances you are trying to solve change in a continuous, incremental way? If you had the optimal solution for one instance, would it serve as a good initial estimate for the next?
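If the instances do change incrementally, warm-starting each solve from the previous optimum might look like the following sketch. The struct array `problems` holding per-instance data is a hypothetical name, not something from the thread; quadobj, quadconstr and options are as defined earlier:

```matlab
% Sketch: warm-start each fmincon solve from the previous optimum.
sols = zeros(numel(problems),1);
z0 = [];                                  % cold start for the first instance
for k = 1:numel(problems)
    p = problems(k);
    if isempty(z0), z0 = zeros(size(p.f)); end
    fun = @(z) quadobj(z,p.f);
    nonlconstr = @(z) quadconstr(z,p.Q,p.d);
    [zk,fval] = fmincon(fun,z0,p.Aineq,p.bineq,p.Aeq,p.beq,...
                        p.lb,p.ub,nonlconstr,options);
    sols(k) = -fval;                      % recover the maximum
    z0 = zk;                              % warm start for the next instance
end
```

When consecutive instances are close, the interior-point iterations typically converge in far fewer steps from a nearby feasible start.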