getLearnableParameters
Obtain learnable parameter values from agent, function approximator, or policy object
Syntax
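Based on the input and output arguments described below, the call forms are:

pars = getLearnableParameters(agent)
pars = getLearnableParameters(fcnAppx)
pars = getLearnableParameters(policy)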
Description
Agent
pars = getLearnableParameters(agent) returns the learnable parameter values from the reinforcement learning agent agent.
Actor or Critic
pars = getLearnableParameters(fcnAppx) returns the learnable parameter values from the actor or critic function object fcnAppx.
Examples
Modify Critic Parameter Values
Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.
load('DoubleIntegDDPG.mat','agent')
Obtain the critic function approximator from the agent.
critic = getCritic(agent);
Obtain the learnable parameters from the critic.
params = getLearnableParameters(critic)
params=2×1 cell array
{[-4.9889 -1.5548 -0.3434 -0.1111 -0.0500 -0.0035]}
{[ 0]}
Modify the parameter values. For this example, simply multiply all of the parameters by 2.
modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);
Set the parameter values of the critic to the new modified values.
critic = setLearnableParameters(critic,modifiedParams);
Set the critic in the agent to the new modified critic.
setCritic(agent,critic);
Display the new parameter values.
getLearnableParameters(getCritic(agent))
ans=2×1 cell array
{[-9.9778 -3.1095 -0.6867 -0.2223 -0.1000 -0.0069]}
{[ 0]}
Modify Actor Parameter Values
Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.
load('DoubleIntegDDPG.mat','agent')
Obtain the actor function approximator from the agent.
actor = getActor(agent);
Obtain the learnable parameters from the actor.
params = getLearnableParameters(actor)
params=2×1 cell array
{[-15.4622 -7.2252]}
{[ 0]}
Modify the parameter values. For this example, simply multiply all of the parameters by 2.
modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);
Set the parameter values of the actor to the new modified values.
actor = setLearnableParameters(actor,modifiedParams);
Set the actor in the agent to the new modified actor.
setActor(agent,actor);
Display the new parameter values.
getLearnableParameters(getActor(agent))
ans=2×1 cell array
{[-30.9244 -14.4504]}
{[ 0]}
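As a quick check (a minimal sketch; the observation value is illustrative), you can compare the actions returned for the same observation before and after the modification. If the actor is a single fully connected layer, doubling its parameters doubles the action.
% Rebuild an actor with the original parameter values for comparison.
originalActor = setLearnableParameters(actor,params);
obs = {[0.1;0.2]};                        % illustrative observation
aOriginal = getAction(originalActor,obs);
aModified = getAction(getActor(agent),obs);
% For a purely linear actor, aModified{1} equals 2*aOriginal{1}.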
Input Arguments
agent — Reinforcement learning agent
reinforcement learning agent object
Reinforcement learning agent, specified as one of the following objects:
Custom agent — For more information, see Create Custom Reinforcement Learning Agents.
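For example (a minimal sketch reusing the DDPG agent loaded in the examples above), you can pass the agent object itself:
load('DoubleIntegDDPG.mat','agent')
agentParams = getLearnableParameters(agent);   % query the agent directly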
fcnAppx — Actor or critic function object
rlValueFunction object | rlQValueFunction object | rlVectorQValueFunction object | rlContinuousDeterministicActor object | rlDiscreteCategoricalActor object | rlContinuousGaussianActor object
Actor or critic function object, specified as one of the following:
rlValueFunction object — Value function critic
rlQValueFunction object — Q-value function critic
rlVectorQValueFunction object — Multi-output Q-value function critic with a discrete action space
rlContinuousDeterministicActor object — Deterministic policy actor with a continuous action space
rlDiscreteCategoricalActor object — Stochastic policy actor with a discrete action space
rlContinuousGaussianActor object — Stochastic policy actor with a continuous action space
To create an actor or critic function object, you can construct one directly or extract it from an existing agent using getActor or getCritic, as in the examples above; a direct-construction sketch follows.
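For instance, here is a minimal sketch of constructing a value-function critic directly (the network architecture and observation specification are hypothetical):
% Hypothetical observation spec and network for illustration only.
obsInfo = rlNumericSpec([2 1]);
net = dlnetwork([featureInputLayer(2) fullyConnectedLayer(1)]);
vfCritic = rlValueFunction(net,obsInfo);
getLearnableParameters(vfCritic)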
policy — Reinforcement learning policy
rlMaxQPolicy | rlEpsilonGreedyPolicy | rlDeterministicActorPolicy | rlAdditiveNoisePolicy | rlStochasticActorPolicy
Reinforcement learning policy, specified as an rlMaxQPolicy, rlEpsilonGreedyPolicy, rlDeterministicActorPolicy, rlAdditiveNoisePolicy, or rlStochasticActorPolicy object.
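For example, a minimal sketch of querying a policy object (assumes R2022a or later, which provides getGreedyPolicy, and the DDPG agent loaded in the examples above):
policy = getGreedyPolicy(agent);          % typically an rlDeterministicActorPolicy for a DDPG agent
policyParams = getLearnableParameters(policy);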
Output Arguments
pars — Learnable parameters
cell array
Learnable parameter values for the function object, returned as a cell array. You can modify these parameter values and set them in the original agent or a different agent using the setLearnableParameters function.
Version History
Introduced in R2019a

R2022a: getLearnableParameters now uses approximator objects instead of representation objects
Using representation objects to create actors and critics for reinforcement learning agents is no longer recommended. Therefore, getLearnableParameters now uses function approximator objects instead.
R2020a: getLearnableParameterValues is now getLearnableParameters
getLearnableParameterValues is now getLearnableParameters. To update your code, change the function name from getLearnableParameterValues to getLearnableParameters. The syntaxes are equivalent.
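For example:
% Before R2020a
params = getLearnableParameterValues(critic);
% R2020a and later
params = getLearnableParameters(critic);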