# rlPrioritizedReplayMemory

Replay memory experience buffer with prioritized sampling

## Description

An off-policy reinforcement learning agent stores experiences in a circular experience buffer. During training, the agent samples mini-batches of experiences from the buffer and uses these mini-batches to update its actor and critic function approximators.

By default, built-in off-policy agents (DQN, DDPG, TD3, SAC, MBPO) use an `rlReplayMemory` object as their experience buffer. Agents uniformly sample data from this buffer. To perform nonuniform prioritized sampling [1], which can improve sample efficiency when training your agent, use an `rlPrioritizedReplayMemory` object. For more information on prioritized sampling, see Algorithms.

## Creation

### Syntax

`buffer = rlPrioritizedReplayMemory(obsInfo,actInfo)`
`buffer = rlPrioritizedReplayMemory(obsInfo,actInfo,maxLength)`

### Description


`buffer = rlPrioritizedReplayMemory(obsInfo,actInfo)` creates a prioritized replay memory experience buffer that is compatible with the observation and action specifications in `obsInfo` and `actInfo`, respectively.

`buffer = rlPrioritizedReplayMemory(obsInfo,actInfo,maxLength)` sets the maximum length of the buffer by setting the `MaxLength` property.
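
For example, the following sketch creates a buffer from manually constructed specifications. The specification dimensions, action values, and maximum length here are illustrative assumptions.

```matlab
% Assumed specifications: a 4-dimensional continuous observation signal
% and a discrete action signal with three possible values.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);

% Buffer with the default maximum length
buffer = rlPrioritizedReplayMemory(obsInfo,actInfo);

% Buffer with a maximum length of 100,000 experiences
buffer = rlPrioritizedReplayMemory(obsInfo,actInfo,100000);
```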

### Input Arguments


#### `obsInfo`

Observation specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data types, and names of the observation signals.

You can extract the observation specifications from an existing environment or agent using `getObservationInfo`. You can also construct the specifications manually using `rlFiniteSetSpec` or `rlNumericSpec`.

#### `actInfo`

Action specifications, specified as a reinforcement learning specification object defining properties such as dimensions, data types, and names of the action signals.

You can extract the action specifications from an existing environment or agent using `getActionInfo`. You can also construct the specifications manually using `rlFiniteSetSpec` or `rlNumericSpec`.

## Properties


#### `MaxLength`

Maximum buffer length, specified as a positive integer.

#### `Length`

Number of experiences in the buffer, specified as a nonnegative integer.

#### `PriorityExponent`

Priority exponent to control the impact of prioritization during probability computation, specified as a nonnegative scalar less than or equal to 1.

If the priority exponent is zero, the agent uses uniform sampling.

#### `InitialImportanceSamplingExponent`

Initial value of the importance sampling exponent, specified as a nonnegative scalar less than or equal to 1.

#### `NumAnnealingSteps`

Number of annealing steps for updating the importance sampling exponent, specified as a positive integer.

#### `ImportanceSamplingExponent`

Current value of the importance sampling exponent, specified as a nonnegative scalar less than or equal to 1.

During training, `ImportanceSamplingExponent` is linearly increased from `InitialImportanceSamplingExponent` to 1 over `NumAnnealingSteps` steps.

## Object Functions

| Function | Description |
| --- | --- |
| `append` | Append experiences to replay memory buffer |
| `sample` | Sample experiences from replay memory buffer |
| `resize` | Resize replay memory experience buffer |
| `allExperiences` | Return all experiences in replay memory buffer |
| `getActionInfo` | Obtain action data specifications from reinforcement learning environment, agent, or experience buffer |
| `getObservationInfo` | Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer |
| `reset` | Reset environment, agent, experience buffer, or policy object |
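
As a rough illustration of the buffer workflow, the following sketch appends one experience and samples it back. The specification sizes and experience values are assumptions; the structure fields follow the experience convention used by replay memory buffers.

```matlab
% Assumed specifications and buffer length for illustration.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);
buffer = rlPrioritizedReplayMemory(obsInfo,actInfo,10000);

% Append a single experience, specified as a structure whose fields
% match the observation and action specifications.
experience.Observation = {rand(4,1)};
experience.Action = {0};            % one of the values in actInfo
experience.Reward = 1;
experience.NextObservation = {rand(4,1)};
experience.IsDone = 0;
append(buffer,experience);

% Sample a mini-batch (here, a single experience) and grow the buffer.
miniBatch = sample(buffer,1);
resize(buffer,20000);
```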

## Examples


Create an environment for training the agent. For this example, load a predefined environment.

`env = rlPredefinedEnv("SimplePendulumWithImage-Discrete");`

Extract the observation and action specifications from the environment.

```matlab
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
```

Create a DQN agent from the environment specifications.

`agent = rlDQNAgent(obsInfo,actInfo);`

By default, the agent uses a replay memory experience buffer with uniform sampling.

Replace the default experience buffer with a prioritized replay memory buffer.

`agent.ExperienceBuffer = rlPrioritizedReplayMemory(obsInfo,actInfo);`

Configure the prioritized replay memory options. For example, set the number of annealing steps to `1e4`, the priority exponent to `0.5`, and the initial importance sampling exponent to `0.5`.

```matlab
agent.ExperienceBuffer.NumAnnealingSteps = 1e4;
agent.ExperienceBuffer.PriorityExponent = 0.5;
agent.ExperienceBuffer.InitialImportanceSamplingExponent = 0.5;
```

## Limitations

• Prioritized experience replay does not support agents that use recurrent neural networks.

## Algorithms

Prioritized replay memory samples experiences according to experience priorities. For a given experience, the priority is defined as the absolute value of the associated temporal difference (TD) error. A larger TD error indicates that the critic network is not well trained for the corresponding experience. Sampling such experiences more often during critic updates therefore tends to improve the critic faster, which often improves the sample efficiency of agent training.

When using prioritized replay memory, agents use the following process when sampling a mini-batch of experiences and updating a critic. A numerical sketch of these steps appears after the list.

1. Compute the sampling probability P for each experience in the buffer based on the experience priority.

`$P(j)=\frac{p(j)^{\alpha}}{\sum_{i=1}^{N}p(i)^{\alpha}}$`

Here:

• N is the number of experiences in the replay memory buffer.

• p is the experience priority.

• α is a priority exponent. To set α, use the `PriorityExponent` parameter.

2. Sample a mini-batch of experiences according to the computed probabilities.

3. Compute the importance sampling weights (w) for the sampled experiences.

`$w'(j)=\left(N\cdot P(j)\right)^{-\beta},\qquad w(j)\leftarrow\frac{w'(j)}{\max_{i\in\text{mini-batch}}w'(i)}$`

Here, β is the importance sampling exponent. The `ImportanceSamplingExponent` parameter contains the current value of β. To control β, set the `InitialImportanceSamplingExponent` and `NumAnnealingSteps` parameters.

4. Compute the weighted loss using the importance sampling weights w and the TD error δ, and update the critic.

5. Update the priorities of the sampled experiences based on the TD error.

`$p(j)=|\delta(j)|$`

6. Update the importance sampling exponent β by linearly annealing the exponent value until it reaches 1.

`$\beta\leftarrow\beta+\frac{1-\beta_{0}}{N_{S}}$`

Here:

• β0 is the initial importance sampling exponent. To specify β0, use the `InitialImportanceSamplingExponent` parameter.

• NS is the number of annealing steps. To specify NS, use the `NumAnnealingSteps` parameter.
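
The following base MATLAB sketch walks through steps 1 through 3 and step 6 with assumed priority values and parameter settings. It is only a numerical illustration of the formulas above, not the toolbox implementation.

```matlab
% Assumed values: five stored experiences and a mini-batch of size two.
p = [0.5 2.0 0.1 1.2 0.8];  % priorities p(j) = |TD error| (assumed)
alpha = 0.6;                % priority exponent (PriorityExponent)
beta0 = 0.4;                % initial exponent (InitialImportanceSamplingExponent)
Ns = 1e4;                   % annealing steps (NumAnnealingSteps)
N = numel(p);

% Step 1: sampling probabilities from priorities.
P = p.^alpha ./ sum(p.^alpha);

% Step 2: draw two indices according to P using inverse-CDF sampling.
edges = [0 cumsum(P)];
edges(end) = 1;             % guard against floating-point roundoff
idx = discretize(rand(1,2),edges);

% Step 3: importance sampling weights, normalized by the mini-batch maximum.
beta = beta0;
w = (N .* P(idx)).^(-beta);
w = w ./ max(w);

% Step 6: linearly anneal beta toward 1.
beta = min(1, beta + (1 - beta0)/Ns);
```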

## References

[1] Schaul, Tom, John Quan, Ioannis Antonoglou, and David Silver. "Prioritized Experience Replay." arXiv:1511.05952 [cs], February 25, 2016. https://arxiv.org/abs/1511.05952.

## Version History

Introduced in R2022b