rlSARSAAgentOptions
Options for SARSA agent
Description
Use an rlSARSAAgentOptions object to specify options when creating a SARSA agent. To create a SARSA agent, use rlSARSAAgent.
For more information on SARSA agents, see SARSA Agent.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
Creation
Description
opt = rlSARSAAgentOptions creates an rlSARSAAgentOptions object for use as an argument when creating a SARSA agent using all default settings. You can modify the object properties using dot notation.
opt = rlSARSAAgentOptions(Name=Value) creates the options object opt and sets its properties using one or more name-value arguments. For example, rlSARSAAgentOptions(DiscountFactor=0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value arguments.
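For instance, here is a minimal sketch (using property names documented on this page; the values are illustrative) that creates an options object with name-value arguments and then changes a property afterward:
% Create the options object, then adjust a property using dot notation.
opt = rlSARSAAgentOptions(DiscountFactor=0.95, SampleTime=0.1);
opt.DiscountFactor = 0.99;   % properties remain editable after creation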
Properties
Sample time of the agent, specified as a positive scalar or as -1.
Within a MATLAB® environment, the agent is executed every time the environment advances, so SampleTime does not affect the timing of the agent execution. If SampleTime is set to -1, in MATLAB environments, the time interval between consecutive elements in the returned output experience is considered equal to 1.
Within a Simulink® environment, the RL Agent block that uses the agent object executes every SampleTime seconds of simulation time. If SampleTime is set to -1, the block inherits the sample time from its input signals. Set SampleTime to -1 when the block is a child of an event-driven subsystem.
Set SampleTime to a positive scalar when the block is not a child of an event-driven subsystem. Doing so ensures that the block executes at appropriate intervals when input signal sample times change due to model variations. If SampleTime is a positive scalar, this value is also the time interval between consecutive elements in the output experience returned by sim or train, regardless of the type of environment.
If SampleTime is set to -1, in Simulink environments, the time interval between consecutive elements in the returned output experience reflects the timing of the events that trigger the RL Agent block execution.
This property is shared between the agent and the agent options object within the agent. If you change this property in the agent options object, it also changes in the agent, and vice versa.
Example: SampleTime=-1
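As a brief sketch, you can set this property either at creation or later through dot notation (the values are illustrative):
opt = rlSARSAAgentOptions(SampleTime=-1);   % inherit the sample time, for example in an event-driven subsystem
opt.SampleTime = 0.1;                       % or execute the agent every 0.1 seconds of simulation time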
Discount factor applied to future rewards during training, specified as a nonnegative scalar less than or equal to 1.
Example: DiscountFactor=0.9
Options for epsilon-greedy exploration, specified as an EpsilonGreedyExploration object with these properties.
Property | Description | Default Value |
---|---|---|
Epsilon | Initial value of the probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger Epsilon value means that the agent randomly explores the action space at a higher rate. | 1 |
EpsilonMin | Minimum value of Epsilon | 0.01 |
EpsilonDecay | Decay rate | 0.0050 |
At each interaction with the environment (that is, at each training step), if Epsilon is greater than EpsilonMin, then it is updated using this formula.
Epsilon = Epsilon*(1-EpsilonDecay)
Epsilon is conserved between the end of an episode and the start of the next one. So, Epsilon decreases uniformly over multiple episodes until it reaches EpsilonMin.
If your agent converges on a local optimum too quickly, you can promote agent exploration by increasing the value of Epsilon.
To specify exploration options, use dot notation after creating the rlSARSAAgentOptions object opt. For example, set the initial epsilon value to 0.9.
opt.EpsilonGreedyExploration.Epsilon = 0.9;
Note
The Epsilon property of an EpsilonGreedyExploration object represents the initial value of Epsilon at the beginning of the first episode.
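To get a feel for the decay schedule, the following illustrative sketch (not part of the toolbox API) iterates the formula above with the default property values and counts the training steps needed for Epsilon to reach EpsilonMin:
% Iterate the documented decay rule using the default values.
epsilon = 1;            % EpsilonGreedyExploration.Epsilon
epsilonMin = 0.01;      % EpsilonGreedyExploration.EpsilonMin
epsilonDecay = 0.0050;  % EpsilonGreedyExploration.EpsilonDecay
numSteps = 0;
while epsilon > epsilonMin
    epsilon = epsilon*(1 - epsilonDecay);
    numSteps = numSteps + 1;
end
disp(numSteps)          % number of steps until Epsilon reaches EpsilonMin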
Critic optimizer options, specified as an rlOptimizerOptions object. It allows you to specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.
Example: CriticOptimizerOptions = rlOptimizerOptions(LearnRate=5e-3)
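As a minimal sketch, you can also set these options through the agent options object after creating it (GradientThreshold is a standard rlOptimizerOptions property; the values are placeholders):
opt = rlSARSAAgentOptions;
opt.CriticOptimizerOptions.LearnRate = 1e-3;        % learning rate of the critic optimizer
opt.CriticOptimizerOptions.GradientThreshold = 1;   % clip gradients above this threshold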
Options to save additional agent data, specified as a structure containing the following fields.
Optimizer
PolicyState
You can save an agent object using one of these methods:
- Use the save command.
- Specify saveAgentCriteria and saveAgentValue in an rlTrainingOptions object.
- Specify an appropriate logging function within a FileLogger object.
When you save an agent using any method, the fields in the InfoToSave structure determine whether the corresponding data saves with the agent. For example, if you set the PolicyState field to true, then the policy state saves along with the agent.
You can modify the InfoToSave property only after you create the agent options object.
Example: options.InfoToSave.Optimizer=true
Option to save the critic optimizer, specified as a logical value. For example, if you set the Optimizer field to false, then the critic optimizer (which is a hidden property of the agent and can contain internal states) is not saved along with the agent, thereby saving disk space and memory. However, when the optimizer contains internal states, the state of the saved agent is not identical to the state of the original agent.
Example: true
Option to save the state of the explorative policy, specified as a logical value. If you set the PolicyState field to false, then the state of the explorative policy (which is a hidden agent property) is not saved along with the agent. In this case, the state of the saved agent is not identical to the state of the original agent.
Example: true
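A short sketch of how these fields might be used together (the values are illustrative):
% Skip saving the optimizer and the exploration-policy state to reduce
% the size of the saved agent.
opt = rlSARSAAgentOptions;
opt.InfoToSave.Optimizer = false;
opt.InfoToSave.PolicyState = false;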
Object Functions
rlSARSAAgent | SARSA reinforcement learning agent |
Examples
Create an rlSARSAAgentOptions
object that specifies the agent sample time.
opt = rlSARSAAgentOptions(SampleTime=0.5)
opt =
  rlSARSAAgentOptions with properties:

                  SampleTime: 0.5000
              DiscountFactor: 0.9900
    EpsilonGreedyExploration: [1×1 rl.option.EpsilonGreedyExploration]
      CriticOptimizerOptions: [1×1 rl.option.rlOptimizerOptions]
                  InfoToSave: [1×1 struct]
You can modify options using dot notation. For example, set the agent discount factor to 0.95.
opt.DiscountFactor = 0.95;
Version History
Introduced in R2019a