
Soft Actor-Critic (SAC) Agent

The soft actor-critic (SAC) algorithm is an off-policy actor-critic method for environments with discrete, continuous, and hybrid action spaces. The SAC algorithm attempts to learn a stochastic policy that maximizes a combination of the policy value and its entropy. The policy entropy is a measure of policy uncertainty given the state; a higher entropy value promotes more exploration. Maximizing both the expected discounted cumulative long-term reward and the entropy balances exploration and exploitation of the environment. A soft actor-critic agent uses two critics to estimate the value of the optimal policy, along with target critics and an experience buffer. SAC agents support offline training (training from saved data, without an environment). For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

In Reinforcement Learning Toolbox™, a soft actor-critic agent is implemented by an rlSACAgent object. This implementation uses two Q-value function critics, which prevents overestimation of the value function. Other implementations of the soft actor-critic algorithm use an additional value function critic.

Soft actor-critic agents can be trained in environments with the following observation and action spaces.

Observation Space: Discrete or continuous
Action Space: Discrete, continuous, or hybrid

Note

Soft actor-critic agents with a hybrid action space do not support training with an evolutionary strategy. They also cannot be used to build model-based agents. Finally, while you can train any SAC agent offline (from existing data), only SAC agents with a continuous action space support batch data regularizer options.

Soft actor-critic agents use the following critics and actor. In the most general case, for hybrid action spaces, the action A has a discrete part Ad and a continuous part Ac.

Critics: Q-value function critics Q(S,A), which you create using rlQValueFunction (for continuous action spaces) or rlVectorQValueFunction (for discrete or hybrid action spaces)

Actor: Stochastic policy actor π(A|S), which you create using rlDiscreteCategoricalActor (for discrete action spaces), rlContinuousGaussianActor (for continuous action spaces), or rlHybridStochasticActor (for hybrid action spaces)

During training, a soft actor-critic agent:

  • Updates the actor and critic learnable parameters at regular intervals during learning.

  • Estimates the probability distribution of the action and randomly selects an action based on the distribution.

  • Updates an entropy weight term to reduce the difference between entropy and target entropy.

  • Stores past experience using a circular experience buffer. The agent updates the actor and critic using a mini-batch of experiences randomly sampled from the buffer.

If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling from its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.

This option affects only simulation and deployment; it does not affect training.
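
For example, assuming a trained agent and an environment already exist in the workspace (agent and env are hypothetical variable names used for illustration), you can toggle this behavior before simulation as shown in the following sketch.

    % "agent" is an existing rlSACAgent and "env" an existing environment
    % (hypothetical workspace variables used for illustration).

    % Deterministic behavior: always take the maximum-likelihood action.
    agent.UseExplorationPolicy = false;
    expDeterministic = sim(env,agent);

    % Stochastic behavior: sample actions from the learned distribution.
    agent.UseExplorationPolicy = true;
    expStochastic = sim(env,agent);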

Actor and Critic Function Approximators

To estimate the policy and value function, a soft actor-critic agent maintains the following function approximators.

  • Stochastic actor π(A|S;θ).

    For continuous-only action spaces, the actor outputs a vector containing the mean and standard deviation of the Gaussian distribution of the action. Note that the SAC algorithm bounds the continuous action selected from the actor.

    For discrete-only action spaces, the actor outputs a vector containing the probabilities of each possible discrete action.

    For hybrid action spaces, the actor outputs both these vectors.

    In all cases, the distributions are parameterized by θ and conditioned on the observation S.

  • One or two Q-value (or vector Q-value) critics Qk(S,Ac;ϕk) — The critics, each with parameters ϕk, take observation S and the continuous part of the action Ac (if present) as inputs and return the corresponding value function (for continuous action spaces), or the value of each possible discrete action Ad (for discrete or hybrid action spaces). The value function is calculated including the entropy of the policy as well as its expected discounted cumulative long-term reward.

  • One or two target critics Qtk(S,Ac;ϕtk) — To improve the stability of the optimization, the agent periodically sets the target critic parameters ϕtk to the latest corresponding critic parameter values. The number of target critics matches the number of critics.

When you use two critics, Q1(S,Ac;ϕ1) and Q2(S,Ac;ϕ2), the critics can have different structures. When the critics have the same structure, they must have different initial parameter values.

Each critic Qk(S,Ac;ϕk) and corresponding target critic Qtk(S,Ac;ϕtk) must have the same structure and parameterization.

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

During training, the agent tunes the parameter values in θ. After training, the parameters remain at their tuned values, and the trained actor function approximator is stored in π(A|S).

Continuous Action Generation

In a continuous action space soft actor-critic agent, the neural network in the actor takes the current observation and generates two outputs, one for the mean and the other for the standard deviation. To select an action, the actor randomly selects an unbounded action from this Gaussian distribution. If the soft actor-critic agent needs to generate bounded actions, the actor applies tanh and scaling operations to the action sampled from the Gaussian distribution.

During training, the agent uses the unbounded Gaussian distribution to calculate the entropy of the policy for the given observation.

Figure: Generation of a bounded action from an unbounded action randomly selected from the Gaussian distribution returned by the network.
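
The following minimal sketch illustrates this bounding operation for a scalar action. It is not toolbox code; the mean, standard deviation, and action limits are illustrative assumptions.

    % Illustrative values standing in for the actor network outputs and the
    % action specification (assumptions, not toolbox code).
    mu = 0.3;              % mean returned by the actor
    sigma = 0.8;           % standard deviation returned by the actor
    lowerLimit = -2;       % action lower bound
    upperLimit =  2;       % action upper bound

    % Sample an unbounded action from the Gaussian distribution.
    u = mu + sigma*randn;

    % Apply tanh to squash the action into (-1,1), then scale and shift it
    % into the action range.
    a = lowerLimit + (tanh(u) + 1)*(upperLimit - lowerLimit)/2;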

Discrete Action Generation

In a discrete action space soft actor-critic agent, the actor takes the current observation and generates a categorical distribution, in which each possible action is associated with a probability. Since each action that belongs to the finite set is already assumed feasible, no bounding is needed.

During training, the agent uses the categorical distribution to calculate the entropy of the policy for the given observation.
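
As a minimal sketch (not toolbox code), sampling a discrete action from a categorical distribution and computing the corresponding policy entropy can be written as follows; the probability vector is an illustrative assumption.

    % Action probabilities returned by the actor for the current observation
    % (illustrative values).
    p = [0.1 0.6 0.3];

    % Sample one action index from the categorical distribution (inverse CDF).
    actionIndex = find(rand <= cumsum(p),1);

    % Policy entropy for this observation.
    H = -sum(p.*log(p));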

Hybrid Action Generation

In a hybrid action space soft actor-critic agent, the actor takes the current observation and generates both a categorical and a Gaussian distribution, which are both used to calculate the entropy of the policy during training.

A discrete action is then sampled from the categorical distribution, and a continuous action is sampled from the Gaussian distribution. If needed, the continuous action is then also automatically bounded as for continuous action generation.

The discrete and continuous actions are then returned to the environment using two different action channels.

Agent Creation

You can create and train soft actor-critic agents at the MATLAB® command line or using the Reinforcement Learning Designer app. For more information on creating agents using Reinforcement Learning Designer, see Create Agents Using Reinforcement Learning Designer.

At the command line, you can create a soft actor-critic agent with a default actor and critics based on the observation and action specifications from the environment. To do so, perform the following steps; a code example follows the steps.

  1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer of the default network or whether to use a recurrent default network. To do so, create an agent initialization option object using rlAgentInitializationOptions.

  4. If needed, specify agent options using an rlSACAgentOptions object (alternatively, you can skip this step and then modify the agent options later using dot notation).

  5. Create the agent using an rlSACAgent object.
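
The following sketch walks through these steps for a predefined continuous-action environment; the environment choice and option values are illustrative assumptions, not recommendations.

    % Create a predefined environment with a continuous action space
    % (environment choice is illustrative).
    env = rlPredefinedEnv("DoubleIntegrator-Continuous");

    % Steps 1 and 2: obtain observation and action specifications.
    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);

    % Step 3 (optional): configure the default networks, for example the
    % number of neurons in each learnable layer.
    initOpts = rlAgentInitializationOptions(NumHiddenUnit=128);

    % Step 4 (optional): specify agent options.
    agentOpts = rlSACAgentOptions(MiniBatchSize=64);

    % Step 5: create the agent with default actor and critics.
    agent = rlSACAgent(obsInfo,actInfo,initOpts,agentOpts);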

Alternatively, you can create your own actor and critic objects and use them to create your agent. In this case, ensure that the input and output dimensions of the actor and critic match the corresponding action and observation specifications of the environment. To create an agent using your custom actor and critic objects, perform the following steps; a code example follows the steps.

  1. Create a stochastic actor using an rlContinuousGaussianActor object (for continuous action spaces), an rlDiscreteCategoricalActor object (for discrete action spaces), or an rlHybridStochasticActor object (for hybrid action spaces). For soft actor-critic agents with continuous or hybrid action spaces, the actor network must not contain a tanhLayer and scalingLayer as the last two layers in the output path for the mean values, since the scaling already occurs automatically. However, to ensure that the standard deviation values are not negative, the actor network must contain a reluLayer as the last layer in the output path for the standard deviation values.

  2. Create one or two critics using rlQValueFunction objects (for continuous action spaces) or using rlVectorQValueFunction objects (for hybrid or discrete action spaces). For hybrid action spaces, the critics must take as inputs both the observation and the continuous action. If the critics have the same structure, they must have different initial parameter values.

  3. Specify agent options using an rlSACAgentOptions object (alternatively, you can skip this step and then modify the agent options later using dot notation).

  4. Create the agent using an rlSACAgent object.
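
The following sketch applies these steps to a continuous action space, using small example specifications and networks; all dimensions, layer sizes, names, and option values are illustrative assumptions.

    % Example observation and action specifications (illustrative).
    obsInfo = rlNumericSpec([4 1]);
    actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);

    % Step 1: actor network with separate mean and standard deviation outputs.
    % No tanhLayer or scalingLayer in the mean path; a reluLayer keeps the
    % standard deviations nonnegative.
    comA = [featureInputLayer(4,Name="obsInA")
            fullyConnectedLayer(32)
            reluLayer(Name="comRelu")];
    meanPath = fullyConnectedLayer(1,Name="meanOut");
    stdPath  = [fullyConnectedLayer(1,Name="stdFC")
                reluLayer(Name="stdOut")];
    aNet = layerGraph(comA);
    aNet = addLayers(aNet,meanPath);
    aNet = addLayers(aNet,stdPath);
    aNet = connectLayers(aNet,"comRelu","meanOut");
    aNet = connectLayers(aNet,"comRelu","stdFC");

    actor = rlContinuousGaussianActor(dlnetwork(aNet),obsInfo,actInfo, ...
        ObservationInputNames="obsInA", ...
        ActionMeanOutputNames="meanOut", ...
        ActionStandardDeviationOutputNames="stdOut");

    % Step 2: critic network with observation and action paths merged into a
    % scalar Q-value.
    obsPath = [featureInputLayer(4,Name="obsIn")
               fullyConnectedLayer(32,Name="obsFC")];
    actPath = [featureInputLayer(1,Name="actIn")
               fullyConnectedLayer(32,Name="actFC")];
    comPath = [additionLayer(2,Name="add")
               reluLayer
               fullyConnectedLayer(1)];
    cNet = layerGraph(obsPath);
    cNet = addLayers(cNet,actPath);
    cNet = addLayers(cNet,comPath);
    cNet = connectLayers(cNet,"obsFC","add/in1");
    cNet = connectLayers(cNet,"actFC","add/in2");

    % Two critics with the same structure; each dlnetwork call uses a
    % different random initialization, so the initial parameter values differ.
    critic1 = rlQValueFunction(dlnetwork(cNet),obsInfo,actInfo, ...
        ObservationInputNames="obsIn",ActionInputNames="actIn");
    critic2 = rlQValueFunction(dlnetwork(cNet),obsInfo,actInfo, ...
        ObservationInputNames="obsIn",ActionInputNames="actIn");

    % Steps 3 and 4: specify agent options and create the agent.
    agentOpts = rlSACAgentOptions(DiscountFactor=0.99);
    agent = rlSACAgent(actor,[critic1 critic2],agentOpts);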

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

Training Algorithm

The soft actor-critic agent uses the following training algorithm, in which it periodically updates the actor, the critics, and the entropy weights. To configure the training algorithm, specify options using an rlSACAgentOptions object; an example showing how to set these options follows the algorithm description. Here, K is the number of critics (one or two) and k is the critic index.

  • Initialize each critic Qk(S,A;ϕk) with random parameter values ϕk, and initialize each target critic with the same random parameter values, ϕtk=ϕk.

  • Initialize the actor π(A|S;θ) with random parameter values θ.

  • Perform a warm start by taking a sequence of actions following the initial random policy in π(A|S). For each action, store the experience (S,A,R,S') in the experience buffer. To specify the size of the experience buffer, use the ExperienceBufferLength option of the rlSACAgentOptions object. To specify the number of warm-up actions, use the NumWarmStartSteps option.

  • For each training time step:

    1. For the current observation S, select the action A (with its continuous part bounded) using the policy in π(A|S;θ).

    2. Execute action A. Observe the reward R and next observation S'.

    3. Store the experience (S,A,R,S') in the experience buffer.

    4. Every DC time steps (to specify DC, use the LearningFrequency option), for each epoch (to specify the number of epochs, use the NumEpoch option), perform the following two operations:

      1. Create at most B different mini-batches. To specify B, use the MaxMiniBatchPerEpoch option. Each mini-batch contains M different (typically nonconsecutive) experiences (Si,Ai,Ri,S'i) that are randomly sampled from the experience buffer (each experience can be part of only one mini-batch). To specify M, use the MiniBatchSize option.

        If the agent contains recurrent neural networks, each mini-batch contains M different sequences. Each sequence contains L consecutive experiences (starting from a randomly sampled experience). To specify the sequence length L, use the SequenceLength option.

      2. For each mini-batch, perform the learning operations described in Mini-Batch Learning Operations.
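
For example, the sampling and update schedule described in these steps could be configured as follows; the numeric values are illustrative assumptions, not defaults.

    % "agent" is an existing rlSACAgent (hypothetical workspace variable).
    agentOpts = rlSACAgentOptions( ...
        ExperienceBufferLength=1e6, ...  % capacity of the circular experience buffer
        NumWarmStartSteps=1000, ...      % warm-up actions taken before learning starts
        LearningFrequency=4, ...         % DC: learn every 4 environment steps
        NumEpoch=2, ...                  % epochs per learning phase
        MaxMiniBatchPerEpoch=100, ...    % B: maximum number of mini-batches per epoch
        MiniBatchSize=256);              % M: experiences per mini-batch
    agent.AgentOptions = agentOpts;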

Mini-Batch Learning Operations

The agent performs the following operations for each mini-batch. A numerical sketch of the continuous-action case follows these operations.

  1. Update the parameters of each critic by minimizing the loss Lk across all sampled experiences.

    $$L_k = \frac{1}{2M}\sum_{i=1}^{M}\left(y_i - Q_k(S_i,A_i;\phi_k)\right)^2$$

    To specify the optimizer options used to minimize Lk, use the options contained in the CriticOptimizerOptions option (which in turn contains an rlOptimizerOptions object).

    If the agent contains recurrent neural networks, each element of the sum over the batch elements is itself a sum over the time (sequence) dimension.

    If S'i is a terminal state, the value function target yi is set equal to the experience reward Ri. Otherwise, the value function target is the sum of Ri, the minimum discounted future reward from the critics, and the weighted entropy. The following formulas show the value function target in discrete, continuous, and hybrid action spaces, respectively.

    $$y_i^d = R_i + \gamma \min_k \left( \sum_{j=1}^{N_d} \pi^d(A_j^{d\prime}|S_i';\theta)\, Q_{tk}(S_i',A_j^{d\prime};\phi_{tk}) \right) - \alpha^d \sum_{j=1}^{N_d} \pi^d(A_j^{d\prime}|S_i';\theta) \ln \pi^d(A_j^{d\prime}|S_i';\theta)$$

    $$y_i^c = R_i + \gamma \min_k \left( Q_{tk}(S_i',A_i^{c\prime};\phi_{tk}) \right) - \alpha^c \ln \pi^c(A_i^{c\prime}|S_i';\theta)$$

    $$y_i^h = R_i + \gamma \min_k \left( \sum_{j=1}^{N_d} \pi^d(A_j^{d\prime}|S_i';\theta)\, Q_{tk}(S_i',A_j^{d\prime},A_i^{c\prime};\phi_{tk}) \right) - \alpha^d \sum_{j=1}^{N_d} \pi^d(A_j^{d\prime}|S_i';\theta) \ln \pi^d(A_j^{d\prime}|S_i';\theta) - \alpha^c \ln \pi^c(A_i^{c\prime}|S_i';\theta)$$

    Here:

    • The superscripts d, c, and h indicate the quantity in the discrete, continuous, and hybrid cases, respectively. Nd is the number of possible discrete actions, and Ajd indicates the jth action in the discrete action set.

    • γ is the discount factor, which you specify in the DiscountFactor option.

    • The last two terms of the target equation for the hybrid case (or the last term in the other cases) represent the weighted policy entropy for the output of the actor when in state S. αd and αc are the entropy loss weights for the discrete and continuous action parts, which you specify by setting the EntropyWeight option of the respective EntropyWeightOptions property. To specify the other optimizer options used to tune each entropy weight, use the other properties of the corresponding EntropyWeightOptions property.

    If you specify a value of NumStepsToLookAhead equal to N, then the N-step return (which adds the rewards of the following N steps and the discounted estimated value of the state that caused the N-th reward) is used to calculate the target yi.

  2. At every critic update, update the target critics depending on the target update method. For more information, see Target Update Methods.

  3. Every DA critic updates (to set DA, use both the LearningFrequency and the PolicyUpdateFrequency options), perform the following two operations:

    1. Update the parameters of the actor by minimizing the following objective function across all sampled experiences. The following formulas show the objective function in discrete, continuous, and hybrid action spaces, respectively.

      $$J_\pi^d = \frac{1}{M}\sum_{i=1}^{M}\left( -\min_k \left( \sum_{j=1}^{N_d} \pi^d(A_j^d|S_i;\theta)\, Q_{tk}(S_i,A_j^d;\phi_{tk}) \right) + \alpha^d \sum_{j=1}^{N_d} \pi^d(A_j^d|S_i;\theta) \ln \pi^d(A_j^d|S_i;\theta) \right)$$

      $$J_\pi^c = \frac{1}{M}\sum_{i=1}^{M}\left( -\min_k \left( Q_k(S_i,A_i^c;\phi_k) \right) + \alpha^c \ln \pi^c(A_i^c|S_i;\theta) \right)$$

      $$J_\pi^h = \frac{1}{M}\sum_{i=1}^{M}\left( -\min_k \left( \sum_{j=1}^{N_d} \pi^d(A_j^d|S_i;\theta)\, Q_{tk}(S_i,A_j^d,A_i^c;\phi_{tk}) \right) + \alpha^d \sum_{j=1}^{N_d} \pi^d(A_j^d|S_i;\theta) \ln \pi^d(A_j^d|S_i;\theta) + \alpha^c \ln \pi^c(A_i^c|S_i;\theta) \right)$$

      To specify the optimizer options used to minimize Jπ, use the options contained in the ActorOptimizerOptions option (which in turn contains an rlOptimizerOptions object).

      If the agent contains recurrent neural networks, each element of the sum over the mini-batch elements is itself a sum over the time (sequence) dimension.

    2. Update the entropy weights by minimizing the following loss functions. When the action space is discrete or continuous, only the respective entropy weight is minimized. When the action space is hybrid, both weights are updated by minimizing both functions.

      $$L_{\alpha^d} = \frac{1}{M}\sum_{i=1}^{M}\left( -\alpha^d \sum_{j=1}^{N_d} \pi^d(A_j^d|S_i;\theta) \ln \pi^d(A_j^d|S_i;\theta) - \alpha^d \mathcal{H}^d \right)$$

      $$L_{\alpha^c} = \frac{1}{M}\sum_{i=1}^{M}\left( -\alpha^c \ln \pi^c(A_i^c|S_i;\theta) - \alpha^c \mathcal{H}^c \right)$$

      Here, ℋd and ℋc are the target entropies for the discrete and continuous cases, which you specify using the corresponding EntropyWeightOptions.TargetEntropy option.
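
The following numerical sketch illustrates the continuous-action computations above for one mini-batch, using randomly generated values in place of critic and actor outputs; it is not toolbox code, and all values are illustrative assumptions.

    % Randomly generated stand-ins for one mini-batch (illustrative only).
    M = 4;                        % mini-batch size
    gamma = 0.99;                 % discount factor
    alphaC = 0.2;                 % continuous entropy weight
    Htarget = -1;                 % target entropy

    R     = randn(M,1);           % rewards Ri
    Qt1   = randn(M,1);           % first target critic values Qt1(Si',Ai')
    Qt2   = randn(M,1);           % second target critic values Qt2(Si',Ai')
    logPi = -abs(randn(M,1));     % ln pi(Ai'|Si') of the sampled next actions

    % Value function target for nonterminal states: reward plus the discounted
    % minimum target critic value plus the weighted entropy term.
    y = R + gamma*min(Qt1,Qt2) - alphaC*logPi;

    % Entropy weight loss averaged over the mini-batch.
    LalphaC = mean(-alphaC*logPi - alphaC*Htarget);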

Target Update Methods

Soft actor-critic agents update their target critic parameters using one of the following target update methods.

  • Smoothing — Update the target critic parameters using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option.

    $$\phi_{tk} = \tau\phi_k + (1-\tau)\phi_{tk}$$

  • Periodic — Update the target critic parameters periodically without smoothing (TargetSmoothFactor = 1). To specify the update period, use the TargetUpdateFrequency parameter.

    $$\phi_{tk} = \phi_k$$

  • Periodic smoothing — Update the target parameters periodically with smoothing.

To configure the target update method, set the TargetUpdateFrequency and TargetSmoothFactor parameters as shown in the following table. A configuration example follows the table.

Update Method | TargetUpdateFrequency | TargetSmoothFactor
Smoothing (default) | 1 | Less than 1
Periodic | Greater than 1 | 1
Periodic smoothing | Greater than 1 | Less than 1
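
For example, a periodic smoothing update could be configured as follows; the numeric values are illustrative assumptions.

    % "agent" is an existing rlSACAgent (hypothetical workspace variable).
    agentOpts = rlSACAgentOptions;

    % Periodic smoothing: update the target critics every 4 critic updates
    % using a smoothing factor of 0.005.
    agentOpts.TargetUpdateFrequency = 4;
    agentOpts.TargetSmoothFactor = 0.005;

    agent.AgentOptions = agentOpts;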

