Get critic representation from reinforcement learning agent



critic = getCritic(agent) returns the critic representation object for the specified reinforcement learning agent.


Examples

Assume that you have an existing trained reinforcement learning agent. For this example, load the agent trained in the Train DDPG Agent to Control Double Integrator System example.
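If the trained agent was saved to a MAT-file, you can load it into the workspace as follows (the file name here is an assumption for illustration; substitute the name of your own saved agent file):

```matlab
% Load a previously trained agent from a MAT-file.
% 'DoubleIntegDDPG.mat' is a placeholder name; use your own saved file.
load('DoubleIntegDDPG.mat','agent')
```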


Obtain the critic representation from the agent.

critic = getCritic(agent);

Obtain the learnable parameters from the critic.

params = getLearnableParameters(critic);

Modify the parameter values. For this example, multiply each of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameters(critic,modifiedParams);

Set the critic of the agent to the modified critic.

agent = setCritic(agent,critic);
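To confirm that the update took effect, you can retrieve the critic from the agent again and compare its parameters with the original values. This sketch reuses only the functions shown above:

```matlab
% Retrieve the critic and its parameters after the update.
newCritic = getCritic(agent);
newParams = getLearnableParameters(newCritic);

% Each new parameter array should be twice the original.
isequal(newParams{1},2*params{1})
```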

Input Arguments


Reinforcement learning agent that contains a critic representation, specified as one of the following objects:

  • rlQAgent

  • rlSARSAAgent

  • rlDQNAgent

  • rlDDPGAgent

  • rlTD3Agent

  • rlACAgent

  • rlPGAgent

  • rlPPOAgent

Output Arguments


Critic representation object, returned as one of the following:

  • rlValueRepresentation object — Returned when agent is an rlACAgent, rlPGAgent, or rlPPOAgent object

  • rlQValueRepresentation object — Returned when agent is an rlQAgent, rlSARSAAgent, rlDQNAgent, rlDDPGAgent, or rlTD3Agent object with a single critic

  • Two-element row vector of rlQValueRepresentation objects — Returned when agent is an rlTD3Agent object with two critics

Introduced in R2019a