rlSARSAAgentOptions
Options for SARSA agent
Description
Use an rlSARSAAgentOptions object to specify options for creating SARSA agents. To create a SARSA agent, use rlSARSAAgent.
For more information on SARSA agents, see SARSA Agent.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
Creation
Description
opt = rlSARSAAgentOptions creates an rlSARSAAgentOptions object for use as an argument when creating a SARSA agent using all default settings. You can modify the object properties using dot notation.
opt = rlSARSAAgentOptions(Name=Value) creates the options set opt and sets its properties using one or more name-value arguments. For example, rlSARSAAgentOptions(DiscountFactor=0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value arguments.
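As a minimal illustration, the following commands create a default options object, create one using a name-value argument, and then modify a property through dot notation (the values shown are only examples).
opt1 = rlSARSAAgentOptions;                       % all default settings
opt2 = rlSARSAAgentOptions(DiscountFactor=0.95);  % name-value argument
opt2.SampleTime = 1;                              % modify via dot notation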
Properties
SampleTime — Sample time of agent
1 (default) | positive scalar | -1
Sample time of the agent, specified as a positive scalar or as -1.
Within a MATLAB® environment, the agent is executed every time the environment advances, so SampleTime does not affect the timing of the agent execution.
Within a Simulink® environment, the RL Agent block that uses the agent object executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its input signals. Set SampleTime to -1 when the block is a child of an event-driven subsystem.
Note
Set SampleTime to a positive scalar when the block is not a child of an event-driven subsystem. Doing so ensures that the block executes at appropriate intervals when input signal sample times change due to model variations.
Regardless of the type of environment, the time interval between consecutive elements in the output experience returned by sim or train is always SampleTime.
If SampleTime is -1, for Simulink environments the time interval between consecutive elements in the returned output experience reflects the timing of the events that trigger the RL Agent block execution, while for MATLAB environments this time interval is considered equal to 1.
This property is shared between the agent and the agent options object within the agent. Therefore, if you change it in the agent options object, it gets changed in the agent, and vice versa.
Example: SampleTime=-1
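As a minimal sketch, the following commands set the sample time at creation and then change it to -1 through dot notation, as you might do before placing the agent in an event-driven Simulink subsystem (the values are illustrative).
opt = rlSARSAAgentOptions(SampleTime=0.1);
opt.SampleTime = -1;   % inherit the sample time, for example in an event-driven subsystem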
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
Example: DiscountFactor=0.9
EpsilonGreedyExploration — Options for epsilon-greedy exploration
EpsilonGreedyExploration object
Options for epsilon-greedy exploration, specified as an EpsilonGreedyExploration object with the following properties.
Property | Description | Default Value
---|---|---
Epsilon | Probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger value of Epsilon means that the agent randomly explores the action space at a higher rate. | 1
EpsilonMin | Minimum value of Epsilon | 0.01
EpsilonDecay | Decay rate | 0.005
At each interaction with the environment (that is, at each training step), if Epsilon is greater than EpsilonMin, then it is updated using the following formula.
Epsilon = Epsilon*(1-EpsilonDecay)
Note that Epsilon is conserved between the end of an episode and the start of the next one. Therefore, it keeps decreasing uniformly over multiple episodes until it reaches EpsilonMin.
If your agent converges on local optima too quickly, you can promote agent exploration by increasing Epsilon.
To specify exploration options, use dot notation after creating the rlSARSAAgentOptions object opt. For example, set the epsilon value to 0.9.
opt.EpsilonGreedyExploration.Epsilon = 0.9;
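To see how fast the exploration probability decays under this schedule, you can replicate the update outside the agent. The following minimal sketch uses the default values (Epsilon = 1, EpsilonMin = 0.01, EpsilonDecay = 0.005) and counts the training steps needed to reach EpsilonMin; it is an illustration only, not part of the agent code.
epsilon = 1;            % initial Epsilon (default)
epsilonMin = 0.01;      % EpsilonMin (default)
epsilonDecay = 0.005;   % EpsilonDecay (default)
nSteps = 0;
while epsilon > epsilonMin
    epsilon = epsilon*(1 - epsilonDecay);   % same update the agent applies each step
    nSteps = nSteps + 1;
end
fprintf('Epsilon reaches %.4f after %d steps.\n', epsilon, nSteps);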
CriticOptimizerOptions — Critic optimizer options
rlOptimizerOptions object
Critic optimizer options, specified as an rlOptimizerOptions object. Using this object, you can specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.
Example: CriticOptimizerOptions = rlOptimizerOptions(LearnRate=5e-3)
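For instance, the following minimal sketch configures the critic optimizer at creation and then adjusts the learning rate through dot notation; the specific values are illustrative.
criticOpts = rlOptimizerOptions(LearnRate=1e-3,GradientThreshold=1);
opt = rlSARSAAgentOptions(CriticOptimizerOptions=criticOpts);
opt.CriticOptimizerOptions.LearnRate = 5e-3;   % adjust after creation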
InfoToSave — Options to save additional agent data
structure (default)
Options to save additional agent data, specified as a structure containing the following fields.
- Optimizer
- PolicyState
You can save an agent object in one of the following ways:
- Using the save command
- Specifying saveAgentCriteria and saveAgentValue in an rlTrainingOptions object
- Specifying an appropriate logging function within a FileLogger object
When you save an agent using any method, the fields in the InfoToSave structure determine whether the corresponding data is saved with the agent. For example, if you set the Optimizer field to true, then the critic optimizer is saved along with the agent.
You can modify the InfoToSave property only after the agent options object is created.
Example: options.InfoToSave.Optimizer=true
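As a minimal sketch, the following commands enable saving of both the critic optimizer and the explorative policy state, then save the options object with the save command (saving a full agent works the same way once the agent is created; the file name is illustrative).
opt = rlSARSAAgentOptions;
opt.InfoToSave.Optimizer = true;     % save the critic optimizer with the agent
opt.InfoToSave.PolicyState = true;   % save the explorative policy state
save('sarsaAgentOptions.mat','opt')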
Optimizer — Option to save critic optimizer
false (default) | true
Option to save the critic optimizer, specified as a logical value. If you set the Optimizer field to false, then the critic optimizer (which is a hidden property of the agent and can contain internal states) is not saved along with the agent, which saves disk space and memory. However, when the optimizer contains internal states, the state of the saved agent is not identical to the state of the original agent.
Example: true
PolicyState — Option to save state of explorative policy
false (default) | true
Option to save the state of the explorative policy, specified as a logical value. If you set the PolicyState field to false, then the state of the explorative policy (which is a hidden agent property) is not saved along with the agent. In this case, the state of the saved agent is not identical to the state of the original agent.
Example: true
Object Functions
rlSARSAAgent | SARSA reinforcement learning agent
Examples
Create a SARSA Agent Options Object
Create an rlSARSAAgentOptions object that specifies the agent sample time.
opt = rlSARSAAgentOptions(SampleTime=0.5)
opt = 
  rlSARSAAgentOptions with properties:

                  SampleTime: 0.5000
              DiscountFactor: 0.9900
    EpsilonGreedyExploration: [1x1 rl.option.EpsilonGreedyExploration]
      CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                  InfoToSave: [1x1 struct]
You can modify options using dot notation. For example, set the agent discount factor to 0.95.
opt.DiscountFactor = 0.95;
Version History
Introduced in R2019a