*Terminal weights* are the quadratic weights *Wy* on
*y*(*k* + *p*) and *Wu* on
*u*(*k* + *p* – 1), where
*p* is the prediction horizon. You apply the quadratic weights at time
*k* + *p* only, that is, at the prediction horizon's final
step. Using terminal weights, you can achieve infinite-horizon control that guarantees
closed-loop stability. However, before using terminal weights, you must distinguish between
problems with and without constraints.

*Terminal constraints* are the constraints on
*y*(*k* + *p*) and
*u*(*k* + *p* – 1), where
*p* is the prediction horizon. You can use terminal constraints as an
alternative way to achieve closed-loop stability by defining a terminal region.

You can use terminal weights and constraints only at the command line. See `setterminal`.

For the relatively simple unconstrained case, a terminal weight can make the finite-horizon model predictive controller behave as if its prediction horizon were infinite. In that case, the MPC controller behavior is identical to that of a linear-quadratic regulator (LQR). The standard LQR derives from the cost function:

$$J(u)=\sum _{i=1}^{\infty}\left[x{(k+i)}^{T}Qx(k+i)+u{(k+i-1)}^{T}Ru(k+i-1)\right]$$ | (1) |

where *x* is the vector of plant states in the standard state-space
form:

$$x\left(k+1\right)=Ax\left(k\right)+Bu\left(k\right)$$ | (2) |

The LQR provides nominal stability provided the matrices *Q* and *R* meet certain conditions. You can convert the LQR to a finite-horizon form as follows:

$$J(u)=\sum _{i=1}^{p-1}\left[x{(k+i)}^{T}Qx(k+i)+u{(k+i-1)}^{T}Ru(k+i-1)\right]+x{(k+p)}^{T}{Q}_{p}x(k+p)$$ | (3) |

where *Q*_{p}, the terminal penalty matrix, is the solution of the Riccati equation:

$${Q}_{p}={A}^{T}{Q}_{p}A-{A}^{T}{Q}_{p}B{\left({B}^{T}{Q}_{p}B+R\right)}^{-1}{B}^{T}{Q}_{p}A+Q$$ | (4) |

You can obtain this solution using the `lqr` command in Control System Toolbox™ software.
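Outside MATLAB, the same fixed point can be checked numerically. The following is a minimal sketch in Python using SciPy's `solve_discrete_are`, with a hypothetical two-state plant (the matrices are illustrative, not from the text), verifying that the returned matrix satisfies Equation 4:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time plant x(k+1) = A x(k) + B u(k)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)           # state weight
R = np.array([[0.1]])   # input weight

# Solve the discrete algebraic Riccati equation (Equation 4)
Qp = solve_discrete_are(A, B, Q, R)

# Check Equation 4 term by term:
# Qp = A'Qp A - A'Qp B (B'Qp B + R)^{-1} B'Qp A + Q
rhs = (A.T @ Qp @ A
       - A.T @ Qp @ B @ np.linalg.inv(B.T @ Qp @ B + R) @ B.T @ Qp @ A
       + Q)
assert np.allclose(Qp, rhs)
```

The solution is symmetric, as Equation 4 requires; any standard discrete-Riccati solver yields the same stabilizing *Q*_{p}.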

In general, *Q*_{p} is a full (symmetric) matrix, which you cannot represent with the standard MPC cost function. To implement the LQR cost function instead:

1. Augment the model (Equation 2) to include the weighted terminal states as auxiliary outputs:

   *y*_{aug}(*k*) = *Q*_{c}*x*(*k*)

   where *Q*_{c} is the Cholesky factor of *Q*_{p} such that *Q*_{p} = *Q*_{c}^{T}*Q*_{c}.

2. Define the auxiliary outputs *y*_{aug} as unmeasured, and specify zero weight for them.

3. Specify unity weight on *y*_{aug} at the last step in the prediction horizon using `setterminal`.
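The role of the Cholesky factor can be verified numerically. The Python sketch below, using a made-up positive-definite *Q*_{p}, shows that unity weight on *y*_{aug} = *Q*_{c}*x* reproduces the terminal penalty *x*^{T}*Q*_{p}*x*:

```python
import numpy as np

# Hypothetical terminal penalty matrix Qp (symmetric positive definite)
Qp = np.array([[2.0, 0.5],
               [0.5, 1.0]])

# Cholesky factor Qc such that Qp = Qc' Qc.
# np.linalg.cholesky returns lower-triangular L with Qp = L L',
# so Qc = L' gives Qp = Qc' Qc.
L = np.linalg.cholesky(Qp)
Qc = L.T
assert np.allclose(Qc.T @ Qc, Qp)

# Augmented output: y_aug(k) = Qc x(k). With unity weight on y_aug at
# the final step only, the terminal cost becomes
#   ||y_aug(k+p)||^2 = x(k+p)' Qc' Qc x(k+p) = x(k+p)' Qp x(k+p)
x = np.array([1.0, -2.0])
y_aug = Qc @ x
assert np.isclose(y_aug @ y_aug, x @ Qp @ x)
```

This is why zero weight on *y*_{aug} at all earlier steps, plus unity weight at the last step, recovers exactly the terminal term of Equation 3.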

To make the model predictive controller entirely equivalent to the LQR, use a control horizon equal to the prediction horizon. In an unconstrained application, you can use a short horizon and still achieve nominal stability. Thus, the horizon is no longer a parameter to be tuned.
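As a numerical check of this equivalence, the sketch below (Python, with hypothetical plant data, and assuming the input penalty *R* applies to the current move) minimizes a one-step cost with the terminal weight *Q*_{p} and confirms that the optimal move matches the infinite-horizon LQR feedback:

```python
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

# Hypothetical plant and weights (illustration only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

Qp = solve_discrete_are(A, B, Q, R)                   # terminal penalty (Equation 4)
K = np.linalg.solve(B.T @ Qp @ B + R, B.T @ Qp @ A)   # infinite-horizon LQR gain

# One-step cost with terminal weight: u'Ru + x(k+1)' Qp x(k+1)
x0 = np.array([1.0, -0.5])
def cost(u):
    x1 = A @ x0 + B @ u
    return float(u @ R @ u + x1 @ Qp @ x1)

u_mpc = minimize(cost, [0.0]).x   # numerically minimize the short-horizon cost
u_lqr = -K @ x0                   # infinite-horizon state feedback

assert np.allclose(u_mpc, u_lqr, atol=1e-4)
```

The short-horizon optimum coincides with the LQR move because the terminal weight already accounts for the entire cost beyond the horizon.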

When the application includes constraints, the horizon selection becomes important. The constraints, which are usually softened, represent factors not considered in the LQR cost function. If a constraint becomes active, the control action deviates from the LQR (state feedback) behavior. If this behavior is not handled correctly in the controller design, the controller may destabilize the plant.

For an in-depth discussion of design issues for constrained systems, see [1]. Depending on the situation, you might need to include terminal constraints to force the
plant states into a defined region at the end of the horizon, after which the LQR can drive
the plant signals to their targets. Use `setterminal` to add such constraints to the controller definition.
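To illustrate the idea of a terminal region, here is a hedged Python sketch (hypothetical scalar plant and bounds, not MPC Toolbox code) that solves a one-step problem with a terminal constraint and confirms the terminal state lands inside the region:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar plant x(k+1) = a*x(k) + b*u(k)  (illustration only)
a, b = 1.2, 1.0
R, Qp = 0.1, 1.0
x0 = 1.0
x_max = 0.1   # terminal region: |x(k+1)| <= x_max

def cost(u):
    x1 = a * x0 + b * u[0]
    return R * u[0]**2 + Qp * x1**2

# Terminal constraint forces the state into the region at the horizon's end
cons = {"type": "ineq", "fun": lambda u: x_max - abs(a * x0 + b * u[0])}
res = minimize(cost, [0.0], method="SLSQP", constraints=[cons])

x1 = a * x0 + b * res.x[0]
assert abs(x1) <= x_max + 1e-6   # terminal state lies in the region
```

Here the unconstrained optimum would leave the terminal state just outside the region, so the constraint becomes active and pushes the move further than the pure LQR action.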

The standard (finite-horizon) model predictive controller provides comparable performance if the prediction horizon is sufficiently long. However, you must tune the other controller parameters (weights, constraint softening, and control horizon) to achieve this performance.

In applications, robustness to inaccurate model predictions is usually a more important factor than nominal performance.

[1] Rawlings, J. B., and D. Q. Mayne. *Model Predictive Control: Theory and Design*. Nob Hill Publishing, 2010.