# Convert Constraints in `for` Loops for Static Analysis

Static analysis speeds up `for` loops that create problem-based optimization expressions, as described in Static Analysis of Optimization Expressions. However, static analysis applies only to expressions created by `fcn2optimexpr`, not to the comparison operators `<=`, `==`, and `>=`. Therefore, when a `for` loop creates a constraint, subtract the right side of the constraint from both sides so that the comparison is to zero, or to a constant. Then apply `fcn2optimexpr`.
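In other words, a function passed to `fcn2optimexpr` must return an expression, not a comparison; apply the comparison operator to the returned expression afterward. A minimal sketch of the distinction (the variables here are illustrative, not part of this example):

```
x = optimvar("x",2);
u = optimvar("u");
% Not eligible for static analysis: the comparison operator builds the
% constraint directly.
% con = x(1) <= u;
% Eligible: build the expression with fcn2optimexpr, then compare to 0.
expr = fcn2optimexpr(@(x,u) x(1) - u,x,u);
con = expr <= 0; % equivalent to x(1) <= u
```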

This example has the constraints ${x}_{i}\le u+i-1$ and ${x}_{i}\ge -u-i+1$ for $i=1,2,...,20$. Begin by creating optimization variables.

```
N = 20;
x = optimvar("x",N);
u = optimvar("u");
```

To express the constraints in a `for` loop, subtract the appropriate values so that the constraints are compared to 0:

$\begin{array}{c}{x}_{i}-u-i+1\le 0\\ {x}_{i}+u+i-1\ge 0.\end{array}$

Typically, you express these constraints in the following code:

```
for i = 1:N
    cons1(i) = x(i) - u - i + 1;
    cons2(i) = x(i) + u + i - 1;
end
```

To apply static analysis, place this `for` loop in a separate helper function named `forloopfcn`. The resulting function appears at the end of this example. (You can easily convert a `for` loop in your script to a separate function, as shown in Create for Loop for Static Analysis.)

To take advantage of static analysis, convert the function to an optimization expression using `fcn2optimexpr`.

`[cons1,cons2] = fcn2optimexpr(@forloopfcn,x,u);`

Create an optimization problem and place the constraints in the problem.

```
prob = optimproblem;
prob.Constraints.cons1 = cons1 <= 0;
prob.Constraints.cons2 = cons2 >= 0;
```

Create an objective function that is a weighted sum of the `u` variable and the sum of squared differences of the `x` variables from the vector `1.5*(1:N)`.

```
M = 100; % Weight factor for u
prob.Objective = M*u + sum((x - 1.5*(1:N)').^2);
```

Review the problem.

`show(prob)`
```
OptimizationProblem : 

    Solve for:
       u, x

    minimize :
       x(1)^2 + x(2)^2 + x(3)^2 + x(4)^2 + x(5)^2 + x(6)^2 + x(7)^2 + x(8)^2
     + x(9)^2 + x(10)^2 + x(11)^2 + x(12)^2 + x(13)^2 + x(14)^2 + x(15)^2
     + x(16)^2 + x(17)^2 + x(18)^2 + x(19)^2 + x(20)^2 + 100*u - 3*x(1)
     - 6*x(2) - 9*x(3) - 12*x(4) - 15*x(5) - 18*x(6) - 21*x(7) - 24*x(8)
     - 27*x(9) - 30*x(10) - 33*x(11) - 36*x(12) - 39*x(13) - 42*x(14)
     - 45*x(15) - 48*x(16) - 51*x(17) - 54*x(18) - 57*x(19) - 60*x(20)
     + 6457.5

    subject to cons1:
       -u + x(1) <= 0
       -u + x(2) <= 1
       -u + x(3) <= 2
       -u + x(4) <= 3
       -u + x(5) <= 4
       -u + x(6) <= 5
       -u + x(7) <= 6
       -u + x(8) <= 7
       -u + x(9) <= 8
       -u + x(10) <= 9
       -u + x(11) <= 10
       -u + x(12) <= 11
       -u + x(13) <= 12
       -u + x(14) <= 13
       -u + x(15) <= 14
       -u + x(16) <= 15
       -u + x(17) <= 16
       -u + x(18) <= 17
       -u + x(19) <= 18
       -u + x(20) <= 19

    subject to cons2:
       u + x(1) >= 0
       u + x(2) >= -1
       u + x(3) >= -2
       u + x(4) >= -3
       u + x(5) >= -4
       u + x(6) >= -5
       u + x(7) >= -6
       u + x(8) >= -7
       u + x(9) >= -8
       u + x(10) >= -9
       u + x(11) >= -10
       u + x(12) >= -11
       u + x(13) >= -12
       u + x(14) >= -13
       u + x(15) >= -14
       u + x(16) >= -15
       u + x(17) >= -16
       u + x(18) >= -17
       u + x(19) >= -18
       u + x(20) >= -19
```

Solve the problem.

`[sol,fval,exitflag,output] = solve(prob)`
```
Solving problem using quadprog.

Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
```
```
sol = struct with fields:
    u: 4.1786
    x: [20×1 double]
```
```
fval = 653.3036
```
```
exitflag = 
    OptimalSolution
```
```
output = struct with fields:
            message: 'Minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵<stopping criteria details>↵↵Optimization completed: The relative dual feasibility, 1.421085e-16,↵is less than options.OptimalityTolerance = 1.000000e-08, the complementarity measure,↵1.536130e-10, is less than options.OptimalityTolerance, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-08.'
          algorithm: 'interior-point-convex'
      firstorderopt: 6.1445e-09
    constrviolation: 0
         iterations: 8
       linearsolver: 'sparse'
       cgiterations: []
             solver: 'quadprog'
```

View the `x` variables in the solution.

`disp(sol.x)`
```
    1.5000
    3.0000
    4.5000
    6.0000
    7.5000
    9.0000
   10.1786
   11.1786
   12.1786
   13.1786
   14.1786
   15.1786
   16.1786
   17.1786
   18.1786
   19.1786
   20.1786
   21.1786
   22.1786
   23.1786
```

View the constraint function values at the solution.

`evaluate(cons1,sol)`
```
ans = 1×20

   -2.6786   -2.1786   -1.6786   -1.1786   -0.6786   -0.1786   -0.0000   -0.0000   -0.0000   -0.0000
   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000   -0.0000
```
`evaluate(cons2,sol)`
```
ans = 1×20

    5.6786    8.1786   10.6786   13.1786   15.6786   18.1786   20.3571   22.3571   24.3571   26.3571
   28.3571   30.3571   32.3571   34.3571   36.3571   38.3571   40.3571   42.3571   44.3571   46.3571
```

The `cons1` constraint specifies that the value is nonpositive. The first six values of `cons1` are negative, and the remaining values are zero to display precision. Therefore, at the solution, the `cons1` constraint is active for all but the first six entries. In contrast, the `cons2` constraint specifies that the value is nonnegative. All values of `cons2` are strictly positive, meaning this constraint is not active at the solution.
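Rather than judging activity from the printed values, you can flag the active constraints programmatically by comparing the evaluated values to zero within a small tolerance. This is a sketch; the tolerance value is an arbitrary choice, not part of the example.

```
tol = 1e-6; % illustrative tolerance for deciding activity
c1 = evaluate(cons1,sol);
c2 = evaluate(cons2,sol);
activeCons1 = find(abs(c1) <= tol) % at this solution, entries 7 through 20
activeCons2 = find(abs(c2) <= tol) % at this solution, empty
```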

### Helper Function

This code creates the `forloopfcn` helper function.

```
function [cons1,cons2] = forloopfcn(x,u)
% forloopfcn returns the constraint expressions
% x(i) - u - i + 1 and x(i) + u + i - 1 for i = 1 through length(x).
N = length(x);
cons1 = zeros(1,N); % Preallocate for speed
cons2 = zeros(1,N);
for i = 1:N
    cons1(i) = x(i) - u - i + 1;
    cons2(i) = x(i) + u + i - 1;
end
end
```