How do I properly substitute rlRepresentation with rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, and rlStochasticActorRepresentation?
I am using MATLAB R2020a, where rlRepresentation is "not recommended." As a result, I am forced to substitute it with the critics or actors listed in the compatibility guide (https://www.mathworks.com/help/reinforcement-learning/ref/rlrepresentation.html#mw_a6277225-fecf-4d97-9549-1fc4799bf5b6). I tried replacing rlRepresentation with rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, and rlStochasticActorRepresentation (though I left rlRepresentationOptions as is where it came up). They all resulted in errors; rlValueRepresentation and rlStochasticActorRepresentation produced the fewest (and the same) errors:
Error using rlStochasticActorRepresentation (line 93)
Too many input arguments.
Error in createDDPGNetworks (line 51)
critic = rlStochasticActorRepresentation (criticNetwork,criticOptions, ...
Since both this critic and actor have the same error, I think it might have something to do with rlRepresentationOptions since it gives properties to the actors or critics (as far as I understand).
For reference, I am trying to emulate this project (https://www.youtube.com/watch?v=6DL5M9b2j6I) in MATLAB r2020a.
Any help is appreciated.
2 comments
Giampiero Campa
4 Sep 2020
The table here might help too:
Salma Khaled
5 Aug 2021
Have you reached a solution?
Answers (4)
ali
14 Nov 2023
2 votes
Hi, I have the same problem. You must use "rlQValueRepresentation" for the critic and "rlDeterministicActorRepresentation" for the actor. Also, the options object for each network must be the last argument of the function.
If you are using the RL biped robot example, replace line 51 with the code below:
critic = rlQValueRepresentation(criticNetwork,env.getObservationInfo,...
env.getActionInfo,'Observation',{'observation'},...
'Action',{'action'},criticOptions);
and replace line 88 for the actor:
actor = rlDeterministicActorRepresentation(actorNetwork,env.getObservationInfo,...
env.getActionInfo,'Observation',{'observation'},...
'Action',{'ActorTanh1'},actorOptions);
1 comment
RuiFan
18 Dec 2024
Thanks! Problem solved
Emmanouil Tzorakoleftherakis
17 Jul 2020
0 votes
It would be helpful if you pasted the exact MATLAB code you are typing so we can see what the problem is. I suspect you simply changed the method name, which is why you get the error you are seeing. Have a look at the documentation page for the respective method you want to use (rlValueRepresentation, etc.) and make sure the order and number of arguments match the doc.
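To illustrate the point about argument order, here is a minimal sketch of constructing a value-function critic with the newer signature. The network, layer names, and observation spec below are placeholder assumptions, not taken from the original project; the key detail is that the rlRepresentationOptions object goes last, after the 'Observation' name-value pair:

```matlab
% Placeholder observation spec: a 4-element column vector
obsInfo = rlNumericSpec([4 1]);

% Minimal critic network (hypothetical layer names)
criticNetwork = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(16,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(1,'Name','value')];

criticOpts = rlRepresentationOptions('LearnRate',1e-3);

% Note the order: network, specs, name-value pairs, THEN options
critic = rlValueRepresentation(criticNetwork,obsInfo, ...
    'Observation',{'state'},criticOpts);
```

Passing criticOpts before the 'Observation' pair (as the old rlRepresentation call allowed) triggers the "Too many input arguments" error the question describes.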
Salma Khaled
5 Aug 2021
0 votes
- Create an actor using an rlDeterministicActorRepresentation object.
- Create a critic using an rlQValueRepresentation object.
https://www.mathworks.com/help/reinforcement-learning/ug/ddpg-agents.html
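Following the two bullet points above, here is a minimal sketch of a DDPG actor/critic pair. The layer sizes and observation/action specs are assumptions for illustration only; the layer names mirror the ones used in the accepted answer ('observation', 'action', 'ActorTanh1'):

```matlab
% Placeholder specs (assumed sizes)
obsInfo = rlNumericSpec([3 1]);   % 3-element observation
actInfo = rlNumericSpec([1 1]);   % scalar action

% Critic: Q(s,a) network with separate state and action input paths
statePath = [
    imageInputLayer([3 1 1],'Normalization','none','Name','observation')
    fullyConnectedLayer(16,'Name','fcObs')];
actionPath = [
    imageInputLayer([1 1 1],'Normalization','none','Name','action')
    fullyConnectedLayer(16,'Name','fcAct')];
commonPath = [
    additionLayer(2,'Name','add')
    reluLayer('Name','relu')
    fullyConnectedLayer(1,'Name','QValue')];

criticNet = layerGraph(statePath);
criticNet = addLayers(criticNet,actionPath);
criticNet = addLayers(criticNet,commonPath);
criticNet = connectLayers(criticNet,'fcObs','add/in1');
criticNet = connectLayers(criticNet,'fcAct','add/in2');

critic = rlQValueRepresentation(criticNet,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'action'});

% Actor: deterministic policy, output bounded by tanh
actorNet = [
    imageInputLayer([3 1 1],'Normalization','none','Name','observation')
    fullyConnectedLayer(1,'Name','fcOut')
    tanhLayer('Name','ActorTanh1')];

actor = rlDeterministicActorRepresentation(actorNet,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'ActorTanh1'});
```

The 'Action' name-value pair on the actor names the output layer of the network, while on the critic it names the action input layer, which is why the two calls pass different layer names.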
Giampiero Campa
6 Aug 2021
Edited: Giampiero Campa, 6 Aug 2021
0 votes