Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.
load("DoubleIntegDDPG.mat","agent")
Obtain the critic function approximator from the agent.
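A minimal sketch of this step, using getCritic and getLearnableParameters (the variable name criticParams is illustrative):
% Extract the critic function approximator from the agent
critic = getCritic(agent);
% Obtain the learnable parameters of the critic as a cell array
criticParams = getLearnableParameters(critic)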
Similarly, obtain the actor function approximator from the agent.
actor = getActor(agent);
Obtain the learnable parameters from the actor.
params = getLearnableParameters(actor)
params=2×1 cell array
{[-15.4601 -7.2076]}
{[ 0]}
Modify the parameter values. For this example, simply multiply all of the parameters by 2.
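A minimal sketch of this step, using cellfun to scale the values and the setLearnableParameters and setActor functions to write them back (the variable name modifiedParams is illustrative):
% Multiply each learnable parameter array by 2
modifiedParams = cellfun(@(x) x*2,params,"UniformOutput",false);
% Set the modified parameter values in the actor
actor = setLearnableParameters(actor,modifiedParams);
% Set the updated actor in the agent, then verify the new values
setActor(agent,actor);
getLearnableParameters(getActor(agent))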
Learnable parameter values for the function approximator object, returned as a cell array. You can modify these parameter values and set them in the original agent or a different agent by using the setLearnableParameters function.
R2022a: getLearnableParameters now uses approximator objects instead of representation objects
Using representation objects to create actors and critics for reinforcement learning agents is no longer recommended. Therefore, getLearnableParameters now uses function approximator objects instead.
R2020a: getLearnableParameterValues is now getLearnableParameters
To update your code, change the function name from getLearnableParameterValues to getLearnableParameters. The syntaxes are equivalent.
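For example (a sketch; critic stands for any actor or critic object):
% Before R2020a
% params = getLearnableParameterValues(critic);
% R2020a and later
params = getLearnableParameters(critic);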