rlRepresentation
(Not recommended) Model representation for reinforcement learning agents
Since R2019a
rlRepresentation is not recommended. Use rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, or rlStochasticActorRepresentation instead. For more information, see Compatibility Considerations.
Syntax
rep = rlRepresentation(net,obsInfo,'Observation',obsNames)
rep = rlRepresentation(net,obsInfo,actInfo,'Observation',obsNames,'Action',actNames)
tableCritic = rlRepresentation(tab)
critic = rlRepresentation(basisFcn,W0,obsInfo)
critic = rlRepresentation(basisFcn,W0,oaInfo)
actor = rlRepresentation(basisFcn,W0,obsInfo,actInfo)
rep = rlRepresentation(___,repOpts)
Description
Use rlRepresentation
to create a function approximator
representation for the actor or critic of a reinforcement learning agent. To do so, you
specify the observation and action signals for the training environment and options that
affect the training of an agent that uses the representation. For more information on creating
representations, see Create Policies and Value Functions.
rep = rlRepresentation(net,obsInfo,'Observation',obsNames) creates a representation for the deep neural network net. The observation names obsNames are the network input layer names. obsInfo contains the corresponding observation specifications for the training environment. Use this syntax to create a representation for a critic that does not require action inputs, such as a critic for an rlACAgent or rlPGAgent agent.
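A minimal sketch of this syntax, assuming a four-element observation and a network whose input layer is named 'state' (the names and dimensions here are illustrative, not part of the function definition):

obsInfo = rlNumericSpec([4 1]);
net = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(1,'Name','value')];
rep = rlRepresentation(net,obsInfo,'Observation',{'state'});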
rep = rlRepresentation(net,obsInfo,actInfo,'Observation',obsNames,'Action',actNames) creates a representation with action signals specified by the names actNames and specification actInfo. Use this syntax to create a representation for any actor, or for a critic that takes both observation and action as input, such as a critic for an rlDQNAgent or rlDDPGAgent agent.
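A sketch of a two-input critic built with this syntax, assuming a four-element observation, a scalar action, and network input layers named 'state' and 'action' (all names and sizes are illustrative):

obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);
statePath = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(24,'Name','fcState')];
actionPath = [
    imageInputLayer([1 1 1],'Normalization','none','Name','action')
    fullyConnectedLayer(24,'Name','fcAction')];
commonPath = [
    additionLayer(2,'Name','add')
    reluLayer('Name','relu')
    fullyConnectedLayer(1,'Name','QValue')];
net = layerGraph(statePath);
net = addLayers(net,actionPath);
net = addLayers(net,commonPath);
net = connectLayers(net,'fcState','add/in1');
net = connectLayers(net,'fcAction','add/in2');
critic = rlRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'});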
tableCritic = rlRepresentation(tab) creates a critic representation for the value table or Q table tab. When you create a table representation, you specify the observation and action specifications when you create tab.
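For example, assuming discrete observation and action sets (the set values are chosen only for illustration):

obsInfo = rlFiniteSetSpec([1 2 3 4]);
actInfo = rlFiniteSetSpec([1 2]);
tab = rlTable(obsInfo,actInfo);       % Q table sized from the specifications
tableCritic = rlRepresentation(tab);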
critic = rlRepresentation(basisFcn,W0,obsInfo) creates a linear basis function representation using the handle to a custom basis function basisFcn and initial weight vector W0. obsInfo contains the corresponding observation specifications for the training environment. Use this syntax to create a representation for a critic that does not require action inputs, such as a critic for an rlACAgent or rlPGAgent agent.
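A sketch of this syntax, assuming a two-element observation and a hypothetical basis function; W0 needs one weight per basis feature:

obsInfo = rlNumericSpec([2 1]);
basisFcn = @(obs) [obs(1); obs(2); obs(1)*obs(2); 1];  % illustrative basis
W0 = zeros(4,1);                                       % one weight per feature
critic = rlRepresentation(basisFcn,W0,obsInfo);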
critic = rlRepresentation(basisFcn,W0,oaInfo) creates a linear basis function representation using the specification cell array oaInfo, where oaInfo = {obsInfo,actInfo}. Use this syntax to create a representation for a critic that takes both observations and actions as inputs, such as a critic for an rlDQNAgent or rlDDPGAgent agent.
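A sketch of this syntax, under the assumption that the basis function receives both the observation and the action as inputs (the specifications, basis, and weights below are illustrative):

obsInfo = rlNumericSpec([2 1]);
actInfo = rlNumericSpec([1 1]);
oaInfo = {obsInfo,actInfo};
basisFcn = @(obs,act) [obs(1); obs(2); act(1); obs(1)*act(1); 1];  % illustrative basis
W0 = zeros(5,1);
critic = rlRepresentation(basisFcn,W0,oaInfo);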
actor = rlRepresentation(basisFcn,W0,obsInfo,actInfo) creates a linear basis function representation using the specified observation and action specifications, obsInfo and actInfo, respectively. Use this syntax to create a representation for an actor that takes observations as inputs and generates actions.
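A sketch of an actor built with this syntax, assuming the actor output is formed from the basis features so that W0 has one column per action element (the specifications and basis are illustrative):

obsInfo = rlNumericSpec([3 1]);
actInfo = rlNumericSpec([2 1]);
basisFcn = @(obs) [obs(1); obs(2); obs(3); 1];  % illustrative basis
W0 = zeros(4,2);                                % features-by-actions weights
actor = rlRepresentation(basisFcn,W0,obsInfo,actInfo);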
rep = rlRepresentation(___,repOpts) creates a representation using additional options that specify learning parameters for the representation when you train an agent. Available options include the optimizer used for training and the learning rate. Use rlRepresentationOptions to create the options set repOpts. You can use this syntax with any of the previous input-argument combinations.
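For example, assuming the network and observation specification from the first sketch above, you might set a learning rate and optimizer and pass the options as the final argument:

repOpts = rlRepresentationOptions('LearnRate',1e-3,'Optimizer','adam');
rep = rlRepresentation(net,obsInfo,'Observation',{'state'},repOpts);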