Tuning the ExperienceHorizon hyperparameter for a PPO agent (Reinforcement Learning)
Nicolas CRETIN on 18 Jul 2024
Commented: Nicolas CRETIN on 2 Aug 2024
Hello everyone,
I'm trying to train a PPO agent, and I would like to change the value of the ExperienceHorizon hyperparameter (Options for PPO agent - MATLAB - MathWorks Switzerland).
When I try a value other than the default, the agent waits for the end of the episode to update its policy. For example, ExperienceHorizon=1024 doesn't work for me, despite the episodes being longer than 1024 steps. I'm not using parallel training either.
I get the same issue if I change MiniBatchSize from its default value.
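For reference, here is a minimal sketch of how I'm creating the agent (the predefined cart-pole environment stands in for my actual one, and the values are just the ones I'm testing):
env = rlPredefinedEnv("CartPole-Discrete");  % placeholder environment
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
% Non-default ExperienceHorizon (default 512) and MiniBatchSize (default 128)
agentOpts = rlPPOAgentOptions( ...
    ExperienceHorizon=1024, ...
    MiniBatchSize=128);
agent = rlPPOAgent(obsInfo, actInfo, agentOpts);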
Is there anything I've missed about this parameter?
More info on PPO algorithms: Proximal Policy Optimization (PPO) Agents - MATLAB & Simulink - MathWorks Switzerland
If anyone could help, that would be very nice!
Thanks a lot in advance,
Nicolas
0 comments
Accepted Answer
Alan
on 1 Aug 2024
Edited: Alan on 1 Aug 2024
Hi Nicolas,
I could not figure out how to record the episode or step index at which the agent's policy is updated, so I could not verify the behaviour of various combinations of options.
From my understanding, the following could cause the policy to be updated later than expected:
- During training, none of the episodes reached two ExperienceHorizons' worth of steps because they hit a termination condition early, so each policy update might be happening on a combination of steps from different episodes.
- The MaxStepsPerEpisode parameter in the training options could be less than ExperienceHorizon, causing the episode to terminate before the horizon is reached. By training options I mean the ones passed to the train() function via an argument list or via an rlTrainingOptions object (https://www.mathworks.com/help/releases/R2023b/reinforcement-learning/ref/rl.option.rltrainingoptions.html).
- The MiniBatchSize parameter defines the size of the chunks the experience buffer is divided into before running an epoch of training on the policy network. If ExperienceHorizon is less than MiniBatchSize, that could cause issues, so make sure ExperienceHorizon is a multiple of MiniBatchSize (see the sketch after this list).
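As a rough, untested sketch (the numbers are illustrative, not recommendations), here is a combination of options that avoids both pitfalls above:
% Keep the three settings consistent with each other
agentOpts = rlPPOAgentOptions( ...
    ExperienceHorizon=1024, ...  % a multiple of MiniBatchSize below
    MiniBatchSize=256);          % 1024/256 = 4 mini-batches per learning epoch
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=500, ...
    MaxStepsPerEpisode=2048);    % at least ExperienceHorizon, so episodes are not cut short
% trainingStats = train(agent, env, trainOpts);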
I hope this helped.
-Alan
More Answers (0)