Proximal Policy Optimization (PPO) Agent
Proximal policy optimization (PPO) is an on-policy, policy gradient reinforcement learning method for environments with a discrete or continuous action space. It directly estimates a stochastic policy and uses a value function critic to estimate the value of the policy. This algorithm alternates between sampling data through environmental interaction and optimizing a clipped surrogate objective function using stochastic gradient descent. The clipped surrogate objective function improves training stability by limiting the size of the policy change at each step [1]. For continuous action spaces, this agent does not enforce constraints set in the action specification; therefore, if you need to enforce action constraints, you must do so within the environment.
PPO is a simplified version of TRPO. Specifically, PPO has fewer hyperparameters, which makes it easier to tune, and it is less computationally expensive than TRPO. For more information on TRPO agents, see Trust Region Policy Optimization (TRPO) Agent. For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
In Reinforcement Learning Toolbox™, a proximal policy optimization agent is implemented by an rlPPOAgent object.
Proximal policy optimization agents can be trained in environments with the following observation and action spaces.
Observation Space | Action Space |
---|---|
Discrete or continuous | Discrete or continuous |
Proximal policy optimization agents use the following actor and critic.
Critic | Actor |
---|---|
Value function critic V(S), which you create using rlValueFunction | Stochastic policy actor π(S), which you create using rlDiscreteCategoricalActor or rlContinuousGaussianActor |
During training, a proximal policy optimization agent:
Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.
Interacts with the environment for multiple steps using the current policy before using mini-batches to update the actor and critic properties over multiple epochs.
If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.
Note
The UseExplorationPolicy option affects only simulation and deployment; it does not affect training. When you train an agent using train, the agent always uses its exploration policy independently of the value of this property.
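For example, assuming env and agent are an existing environment and PPO agent (placeholder workspace variables), you can toggle this behavior before simulation:

```matlab
% Placeholder variables: env (environment) and agent (rlPPOAgent object).
agent.UseExplorationPolicy = false;  % sim and generatePolicyFunction act greedily
expGreedy = sim(env,agent);          % deterministic behavior

agent.UseExplorationPolicy = true;   % actions are sampled from the policy
expStochastic = sim(env,agent);      % stochastic behavior, more exploration
```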
Actor and Critic Used by the PPO Agent
To estimate the policy and value function, a proximal policy optimization agent maintains two function approximators.
Actor π(A|S;θ) — The actor, with parameters θ, outputs the conditional probability of taking each action A when in state S as one of the following:
Discrete action space — The probability of taking each discrete action. The sum of these probabilities across all actions is 1.
Continuous action space — The mean and standard deviation of the Gaussian probability distribution for each continuous action.
Critic V(S;ϕ) — The critic, with parameters ϕ, takes observation S and returns the corresponding expectation of the discounted long-term reward.
During training, the actor tunes the parameter values in θ to improve the policy. Similarly, the critic tunes the parameter values in ϕ to improve its value function estimate. After training, the tuned parameter values remain stored in the actor and critic inside the trained agent.
For more information on actors and critics, see Create Policies and Value Functions.
PPO Agent Creation
You can create and train proximal policy optimization agents at the MATLAB® command line or using the Reinforcement Learning Designer app. For more information on creating agents using Reinforcement Learning Designer, see Create Agents Using Reinforcement Learning Designer.
At the command line, you can create a PPO agent with a default actor and critic based on the observation and action specifications from the environment. To do so, perform the following steps; a code sketch follows the list.
1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.
2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.
3. If needed, specify the number of neurons in each learnable layer of the default network or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.
4. Specify agent options using an rlPPOAgentOptions object. Alternatively, you can skip this step and modify the agent options later using dot notation.
5. Create the agent using rlPPOAgent.
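The sketch below walks through these steps for one of the toolbox's predefined environments. The environment name and numeric values are illustrative placeholders, not recommendations.

```matlab
% Create a default PPO agent from environment specifications.
env = rlPredefinedEnv("CartPole-Discrete");     % example predefined environment
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Optional: size the default actor and critic networks.
initOpts = rlAgentInitializationOptions("NumHiddenUnit",128);

% Optional: set agent options now, or change them later with dot notation.
agentOpts = rlPPOAgentOptions("ExperienceHorizon",512,"ClipFactor",0.2);

agent = rlPPOAgent(obsInfo,actInfo,initOpts,agentOpts);
```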
Alternatively, you can create an actor and a critic and use these objects to create your agent; a code sketch follows the steps below. In this case, ensure that the input and output dimensions of the actor and critic match the corresponding action and observation specifications of the environment.
1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.
2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.
3. Create an approximation model for your actor. For continuous action spaces, this model must be a neural network object. For discrete action spaces, you also have the option of using a custom basis function with initial parameter values.
4. Create an actor using rlDiscreteCategoricalActor (for discrete action spaces) or rlContinuousGaussianActor (for continuous action spaces). Use the model you created in the previous step as the first input argument.
5. Create an approximation model for your critic. For continuous action spaces, you must use either a custom basis function with initial parameter values or a neural network object. For discrete action spaces, you also have the option of using an rlTable object.
6. Create a critic using rlValueFunction. Use the model you created in the previous step as the first input argument.
7. If needed, specify agent options using an rlPPOAgentOptions object. Alternatively, you can skip this step and modify the agent options later using dot notation.
8. Create the agent using rlPPOAgent.
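The sketch below applies these steps to a discrete action space, assuming obsInfo and actInfo come from an existing environment with a vector observation; the network layer sizes are placeholders.

```matlab
% Minimal sketch: PPO agent from a custom actor and critic (discrete actions).
numObs = obsInfo.Dimension(1);          % length of the observation vector
numAct = numel(actInfo.Elements);       % number of discrete actions

% Critic network: observation in, scalar state value out.
criticNet = dlnetwork([
    featureInputLayer(numObs)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)]);
critic = rlValueFunction(criticNet,obsInfo);

% Actor network: observation in, one output per discrete action.
actorNet = dlnetwork([
    featureInputLayer(numObs)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numAct)]);
actor = rlDiscreteCategoricalActor(actorNet,obsInfo,actInfo);

agentOpts = rlPPOAgentOptions("MiniBatchSize",64);   % optional
agent = rlPPOAgent(actor,critic,agentOpts);
```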
PPO agents support actors and critics that use recurrent deep neural networks as function approximators.
For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.
PPO Training Algorithm
Proximal policy optimization agents use the following training algorithm. To configure the training algorithm, specify options using an rlPPOAgentOptions object.
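For reference, the sketch below gathers the options that appear in the steps that follow; all values are illustrative, not recommendations.

```matlab
% Illustrative rlPPOAgentOptions configuration; the comments name the symbols
% used in the algorithm description below.
agentOpts = rlPPOAgentOptions( ...
    "ExperienceHorizon",256, ...            % maximum sequence length N
    "DiscountFactor",0.99, ...              % discount factor gamma
    "AdvantageEstimateMethod","gae", ...    % "finite-horizon" or "gae"
    "GAEFactor",0.95, ...                   % smoothing factor lambda
    "NumEpoch",3, ...                       % learning epochs per update
    "MaxMiniBatchPerEpoch",100, ...         % maximum mini-batches B per epoch
    "MiniBatchSize",128, ...                % experiences M per mini-batch
    "ClipFactor",0.2, ...                   % clip factor epsilon
    "EntropyLossWeight",0.01, ...           % entropy loss weight w
    "NormalizedAdvantageMethod","moving");  % advantage normalization method
```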
Initialize the actor π(A|S;θ) with random parameter values θ.
Initialize the critic V(S;ϕ) with random parameter values ϕ.
Generate E experiences (possibly running multiple episodes) by following the current policy. Specifically:
At the beginning of each episode, get the initial observation from the environment.
For the current observation S, select the action A using the policy in π(A|S;θ).
Execute action A. Observe the reward R and the next observation S'.
Store the experience (S,A,R,S').
To specify E, use the LearningFrequency option.

Divide the E experiences into sequences, each having N experiences, numbered from t_s to t_s+N. For each experience sequence that does not contain a terminal state, N is equal to the ExperienceHorizon option value. Otherwise, N is less than ExperienceHorizon and S_N is the terminal state.

For each step t = t_s, t_s+1, …, t_s+N−1 of each sequence, compute the return and advantage function using the method specified by the AdvantageEstimateMethod option (the corresponding equations are reconstructed after the last step of this algorithm).

Finite Horizon (AdvantageEstimateMethod = "finite-horizon") — Compute the return G_t, which is the sum of the reward for that step and the discounted future reward [2]. Here, b is 0 if S_{t_s+N} is a terminal state and 1 otherwise. That is, if S_{t_s+N} is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic approximator V. Then compute the advantage function D_t.

Generalized Advantage Estimator (AdvantageEstimateMethod = "gae") — Compute the advantage function D_t, which is the discounted sum of temporal difference errors [3]. Here, b is 0 if S_{t_s+N} is a terminal state and 1 otherwise. λ is a smoothing factor specified using the GAEFactor option. Then compute the return G_t.

To specify the discount factor γ for either method, use the DiscountFactor option.

Perform the following two operations NumEpoch times:

Using all the collected experiences, create at most B different mini-batches. To specify B, use the MaxMiniBatchPerEpoch option. Each mini-batch contains M different (typically nonconsecutive) experiences (S_i,A_i,R_i,S'_i) that are randomly sampled from the experience buffer (each experience can be part of only one mini-batch). To specify M, use the MiniBatchSize option. If the agent contains recurrent neural networks, each mini-batch instead contains M different sequences of K consecutive experiences, each sequence starting from a randomly sampled experience. To specify K, use the SequenceLength option.

For each (randomly selected) mini-batch, perform the learning operations described in Mini-Batch Learning Operations.
Mini-Batch Learning Operations
The agent performs the following operations for each mini-batch.
Update the critic parameters by minimizing the loss Lcritic across all sampled mini-batch data.
Normalize the advantage values Di based on recent unnormalized advantage values.
If the NormalizedAdvantageMethod option is "none", do not normalize the advantage values.

If the NormalizedAdvantageMethod option is "current", normalize the advantage values based on the unnormalized advantages in the current mini-batch.

If the NormalizedAdvantageMethod option is "moving", normalize the advantage values based on the unnormalized advantages for the N most recent advantages, including the current advantage value. To specify the window size N, use the AdvantageNormalizingWindow option.
Update the actor parameters by minimizing the actor loss function Lactor across all sampled mini-batch data; a reconstruction of both loss functions appears after this list. In the actor loss function:
Di and Gi are the advantage function and return value for the ith element of the mini-batch, respectively.
π(Ai|Si;θ) is the probability of taking action Ai when in state Si, given the updated policy parameters θ.
π(Ai|Si;θold) is the probability of taking action Ai when in state Si, given the previous policy parameters θold from before the current learning epoch.
ε is the clip factor specified using the ClipFactor option.

ℋi(θ,Si) is the entropy loss and w is the entropy loss weight factor, specified using the EntropyLossWeight option. For more information on entropy loss, see Entropy Loss.
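The critic and actor loss equations are not reproduced in this text. A standard reconstruction, consistent with the quantities defined above and with the clipped surrogate objective of [1] (scaling conventions may differ from the toolbox implementation), is:

```latex
% Critic loss: mean squared error between returns and value estimates.
L_{\text{critic}}(\phi) = \frac{1}{M}\sum_{i=1}^{M}\bigl(G_i - V(S_i;\phi)\bigr)^{2}

% Actor loss: clipped surrogate objective with entropy bonus, where
% r_i(\theta) is the probability ratio and \epsilon is the clip factor.
r_i(\theta) = \frac{\pi(A_i \mid S_i;\theta)}{\pi(A_i \mid S_i;\theta_{\text{old}})},
\qquad
L_{\text{actor}}(\theta) = -\frac{1}{M}\sum_{i=1}^{M}
\Bigl[\min\bigl(r_i(\theta)\,D_i,\;
\operatorname{clip}\bigl(r_i(\theta),\,1-\epsilon,\,1+\epsilon\bigr)\,D_i\bigr)
+ w\,\mathcal{H}_i(\theta,S_i)\Bigr]
```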
Entropy Loss
To promote agent exploration, you can subtract an entropy loss term wℋi(θ,Si) from the actor loss function, where w is the entropy loss weight and ℋi(θ,Si) is the entropy.
The entropy value is higher when the agent is more uncertain about which action to take next. Therefore, maximizing the entropy loss term (minimizing the negative entropy loss) increases the agent uncertainty, thus encouraging exploration. To promote additional exploration, which can help the agent move out of local optima, you can specify a larger entropy loss weight.
For a discrete action space, the agent uses the following entropy value. In this case, the actor outputs the probability of taking each possible discrete action.
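The entropy equation itself is not reproduced in this text; a standard expression consistent with the symbols defined below is:

```latex
% Entropy of the categorical (discrete-action) policy at observation S_i.
\mathcal{H}_i(\theta,S_i) = -\sum_{k=1}^{P} \pi(A_k \mid S_i;\theta)\,\ln \pi(A_k \mid S_i;\theta)
```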
Here:
P is the number of possible discrete actions.
π(Ak|Si;θ) is the probability of taking action Ak when in state Si following the current policy.
For a continuous action space, the agent uses the following entropy value. In this case, the actor outputs the mean and standard deviation of the Gaussian distribution for each continuous action.
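The corresponding equation is not reproduced in this text; a standard expression for the entropy of a Gaussian policy with diagonal covariance, consistent with the symbols defined below, is (here π denotes the mathematical constant, not the policy):

```latex
% Differential entropy of a diagonal Gaussian policy at observation S_i.
\mathcal{H}_i(\theta,S_i) = \frac{1}{2}\sum_{k=1}^{C} \ln\!\left(2\pi e\,\sigma_{k,i}^{2}\right)
```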
Here:
C is the number of continuous actions output by the actor.
σk,i is the standard deviation for action k when in state Si following the current policy.
References
[1] Schulman, John, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. “Proximal Policy Optimization Algorithms.” ArXiv:1707.06347 [Cs], July 19, 2017. https://arxiv.org/abs/1707.06347.
[2] Mnih, Volodymyr, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. “Asynchronous Methods for Deep Reinforcement Learning.” ArXiv:1602.01783 [Cs], February 4, 2016. https://arxiv.org/abs/1602.01783.
[3] Schulman, John, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. “High-Dimensional Continuous Control Using Generalized Advantage Estimation.” ArXiv:1506.02438 [Cs], October 20, 2018. https://arxiv.org/abs/1506.02438.