Nonnegative Linear Least Squares, Problem-Based

This example shows how to use several algorithms to solve a linear least squares problem with the bound constraint that the solution is nonnegative. A linear least squares problem has the form

$$\min_x \|Cx-d\|_2^2.$$

In this case, constrain the solution to be nonnegative: $x \ge 0$.

To begin, load the arrays C and d into your workspace.

load particle

View the size of each array.

sizec = size(C)
sizec = 1×2

        2000         400

sized = size(d)
sized = 1×2

        2000           1

Create an optimization variable x of the appropriate size for multiplication by C. Impose a lower bound of 0 on the elements of x.

x = optimvar('x',sizec(2),'LowerBound',0);

Create the objective function expression.

residual = C*x - d;
obj = sum(residual.^2);

Create an optimization problem called nonneglsq and include the objective function in the problem.

nonneglsq = optimproblem('Objective',obj);

Find the default solver for the problem.

opts = optimoptions(nonneglsq)
opts = 
  lsqlin options:

   Options used by current Algorithm ('interior-point'):
   (Other available algorithms: 'active-set', 'trust-region-reflective')

   Set properties:
     No options set.

   Default properties:
              Algorithm: 'interior-point'
    ConstraintTolerance: 1.0000e-08
                Display: 'final'
           LinearSolver: 'auto'
          MaxIterations: 200
    OptimalityTolerance: 1.0000e-08
          StepTolerance: 1.0000e-12

   Options not used by current Algorithm ('interior-point')
   Default properties:
      FunctionTolerance: 2.2204e-14
    JacobianMultiplyFcn: []
         ObjectiveLimit: -1.0000e+20
    SubproblemAlgorithm: 'cg'
               TypicalX: 'ones(numberOfVariables,1)'


Solve the problem using the default solver.

[sol,fval,exitflag,output] = solve(nonneglsq);
Solving problem using lsqlin.

Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

To see details of the optimization process, examine the output structure.

disp(output)
            message: 'Minimum found that satisfies the constraints....'
          algorithm: 'interior-point'
      firstorderopt: 9.9673e-07
    constrviolation: 0
         iterations: 9
       linearsolver: 'sparse'
       cgiterations: []
             solver: 'lsqlin'

The output structure shows that the lsqlin solver uses a sparse internal linear solver for the interior-point algorithm and takes 9 iterations to arrive at a first-order optimality measure of about 1e-6.
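As a quick sanity check (a sketch using the sol and fval outputs from above), verify that the solution honors the bound and that fval matches the squared residual norm computed directly.

min(sol.x)           % smallest element of the solution; should be >= 0
norm(C*sol.x - d)^2  % squared residual norm; should agree with fval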

Change Algorithm

The trust-region-reflective algorithm handles bound-constrained problems. See how well it performs on this problem.

opts.Algorithm = 'trust-region-reflective';
[sol2,fval2,exitflag2,output2] = solve(nonneglsq,'Options',opts);
Solving problem using lsqlin.

Local minimum possible.

lsqlin stopped because the relative change in function value is less than the function tolerance.
disp(output2)
          algorithm: 'trust-region-reflective'
         iterations: 14
      firstorderopt: 5.2620e-08
       cgiterations: 54
    constrviolation: []
       linearsolver: []
            message: 'Local minimum possible....'
             solver: 'lsqlin'

This time, the solver takes more iterations and arrives at a solution with a lower (better) first-order optimality measure.

To get an even better first-order optimality measure, try setting the SubproblemAlgorithm option to 'factorization'.

opts.SubproblemAlgorithm = 'factorization';
[sol3,fval3,exitflag3,output3] = solve(nonneglsq,'Options',opts);
Solving problem using lsqlin.

Optimal solution found.
disp(output3)
          algorithm: 'trust-region-reflective'
         iterations: 11
      firstorderopt: 1.3973e-14
       cgiterations: 0
    constrviolation: []
       linearsolver: []
            message: 'Optimal solution found.'
             solver: 'lsqlin'

Using this option brings the first-order optimality measure nearly to zero, which is the best possible.
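As an optional check (a sketch assuming the obj expression and sol3 structure from above), evaluate the objective expression at the returned point; the result should match fval3.

evaluate(obj,sol3)   % objective value at sol3; should equal fval3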

Change Solver

The lsqnonneg solver is specially designed to handle nonnegative linear least squares. Try that solver.

[sol4,fval4,exitflag4,output4] = solve(nonneglsq,'Solver','lsqnonneg');
Solving problem using lsqnonneg.
disp(output4)
    iterations: 184
     algorithm: 'active-set'
       message: 'Optimization terminated.'
        solver: 'lsqnonneg'

lsqnonneg does not report a first-order optimality measure. Instead, investigate the residual norms, which are returned in the fval outputs. To see the lower-significance digits, subtract 22.5794 from each residual norm.

t = table(fval - 22.5794, fval2 - 22.5794, fval3 - 22.5794, fval4 - 22.5794,...
    'VariableNames',{'default','trust-region-reflective','factorization','lsqnonneg'})
t=1×4 table
     default      trust-region-reflective    factorization    lsqnonneg 
    __________    _______________________    _____________    __________

    5.0804e-05          4.9179e-05            4.9179e-05      4.9179e-05

The default solver has a slightly higher (worse) residual norm than the other three, whose residual norms are indistinguishable at this level of display precision. To see which is lowest, subtract the lsqnonneg result from the other two results.

disp([fval2 - fval4,fval3 - fval4])
   1.0e-12 *

    0.7176    0.7034

The lsqnonneg residual norm is the smallest by a nearly negligible amount. However, lsqnonneg takes the most iterations to converge.
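Although the solvers differ in iteration counts and optimality measures, their solutions are essentially the same. As a final check (a sketch using the solution structures from above), compare each of the other solutions to the lsqnonneg solution.

% Distances between solutions; values near zero mean the solvers agree
disp([norm(sol.x - sol4.x), norm(sol2.x - sol4.x), norm(sol3.x - sol4.x)])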
