How to extract the trained actor network from a trained agent in a MATLAB environment? (Reinforcement Learning Toolbox)
wujianfa93 on 2 Jun 2020
Answered: Anh Tran on 5 Jun 2020
Once the agent has been successfully trained with DDPG in a MATLAB environment, the MathWorks tutorial says to run the following code to verify it:
% Simulate the trained agent in the environment for at most 50 steps
simOptions = rlSimulationOptions('MaxSteps',50);
experience = sim(env,agent,simOptions);
Unfortunately, this is not flexible enough for my program. I would like to extract the trained actor network from the trained agent so that, at each sampling step of my robot program, I can obtain an action by feeding the observation vector directly into the actor network, which would allow more complex tasks. However, I can't find the trained actor network among the following variables in the workspace:
![Screenshot of workspace variables](https://www.mathworks.com/matlabcentral/answers/uploaded_files/308127/image.png)
Is there a way to extract the trained actor network? If so, how do I call the extracted network (e.g., what are its input/output formats)?
Accepted Answer
Anh Tran on 5 Jun 2020
You can extract the actor (i.e., the policy) from the trained agent with getActor. Then you can use the actor to predict the best action for a given observation with getAction.
% Get the actor representation from the trained agent
actor = getActor(agent);
% The actor predicts an action given an observation
% (observations are passed to the representation as a cell array)
action = getAction(actor, {observation})
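For reference, here is a minimal sketch of how the extracted actor might drive a custom control loop at each sampling step. It assumes a single observation channel whose vector matches the agent's observation spec; myRobotObservation and applyToRobot are hypothetical placeholders for your own robot code, and the action is unwrapped in case getAction returns it as a cell array:

% Sketch: step the extracted actor inside your own robot loop
actor = getActor(agent);
for k = 1:50
    obs = myRobotObservation();        % hypothetical: column vector matching obsInfo
    action = getAction(actor, {obs});  % observations are supplied as a cell array
    if iscell(action)
        action = action{1};            % unwrap if returned as a cell array
    end
    applyToRobot(action);              % hypothetical: apply the numeric action
end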