Plotting a custom reinforcement learning environment template
Hi there,
I am trying to create a custom reinforcement learning environment similar to the CartPole-Discrete environment in the Reinforcement Learning Toolbox, except for a few tweaks to parameters such as the maximum force, threshold angle, etc.
I typed
rlCreateEnvTemplate("CartPole_Environment")
and changed the value of MaxForce to 20 (supposedly). I then saved the file under the same name, CartPole_Environment.
Now when I type the following commands in the Command Window, I do not get a plot of the environment as I did with CartPole-Discrete.
env = CartPole_Environment;
validateEnvironment(env);
plot(env);
I would really be grateful if someone could help me with this!
Answers (1)
Akshat
on 12 Jan 2024
Hi Arjun,
I understand you want to plot the "CartPole_Environment" that you have modified at your end.
When I generate the template file using the following command,
rlCreateEnvTemplate("CartPole_Environment");
the "plot" method in the generated file looks like this:
% (optional) Visualization method
function plot(this)
    % Initiate the visualization

    % Update the visualization
    envUpdatedCallback(this)
end
As you can see, the method is only a stub: it expects you to specify what exactly you want to plot. I would suggest filling in this plot method with whatever you want to visualise. If the problem still persists, feel free to reach out here with the error or issue you are facing.
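For example, here is a minimal sketch of one way to fill in plot (together with an envUpdatedCallback method that redraws the scene) so that it shows the cart and pole. It assumes your class has a Figure property and a State vector ordered as [x; xdot; theta; thetadot], similar to MathWorks' documented cart-pole environment example; the drawn pole length, axis limits, and colors are placeholder values, so adjust the names and numbers to match your template. Both methods go inside the methods block of CartPole_Environment.m.
% (optional) Visualization method -- sketch, assumes a Figure property exists
function plot(this)
    % Initiate the visualization: open a figure the environment can draw into
    this.Figure = figure('Visible','on','HandleVisibility','off');
    ha = gca(this.Figure);
    ha.XLim = [-3 3];
    ha.YLim = [-1 2];
    hold(ha,'on');
    % Update the visualization
    envUpdatedCallback(this)
end
% (optional) Redraw the cart and pole whenever the environment state changes
function envUpdatedCallback(this)
    if ~isempty(this.Figure) && isvalid(this.Figure)
        ha = gca(this.Figure);
        % Clear the previous frame
        delete(findobj(ha,'Tag','CartPoleGraphics'));
        x     = this.State(1);   % cart position (assumed state ordering)
        theta = this.State(3);   % pole angle from vertical, in radians
        L     = 1;               % drawn pole length (placeholder value)
        % Cart body
        rectangle(ha,'Position',[x-0.25 -0.125 0.5 0.25], ...
            'FaceColor',[0.3 0.3 0.6],'Tag','CartPoleGraphics');
        % Pole drawn as a line from the cart pivot
        line(ha,[x x+L*sin(theta)],[0 L*cos(theta)], ...
            'LineWidth',3,'Color',[0.8 0.2 0.2],'Tag','CartPoleGraphics');
        drawnow limitrate
    end
end
Once the methods are filled in this way, the sequence env = CartPole_Environment; validateEnvironment(env); plot(env); should open the figure, and calling envUpdatedCallback from your step/reset code keeps it in sync with the state.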
Hope this helps!