
Tuning the ExperienceHorizon hyperparameter for a PPO agent (Reinforcement Learning)

Nicolas CRETIN on 18 Jul 2024 at 14:39
Hello everyone,
I'm trying to train a PPO agent, and I would like to change the value of the ExperienceHorizon hyperparameter (Options for PPO agent - MATLAB - MathWorks Switzerland).
When I set any value other than the default, the agent waits until the end of the episode to update its policy. For example, ExperienceHorizon=1024 doesn't work for me, despite the episode length being more than 1024 steps. I'm also not using parallel training.
I also get the same issue if I change the MiniBatchSize from its default value.
Is there anything I've missed about this parameter?
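For illustration, here is a minimal version of the option setup (the observation/action specs and the default actor/critic below are simplified placeholders, not my actual environment or networks; the values are just examples):

obsInfo = rlNumericSpec([4 1]);   % placeholder observation spec
actInfo = rlNumericSpec([1 1]);   % placeholder action spec

% PPO agent options with a non-default horizon and mini-batch size
agentOpts = rlPPOAgentOptions( ...
    "ExperienceHorizon", 1024, ...
    "MiniBatchSize", 64);

% Default actor/critic built from the specs, with the options attached
agent = rlPPOAgent(obsInfo, actInfo, agentOpts);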
If anyone could help, that would be very nice!
Thanks a lot in advance,
Nicolas

Answers (0)

Products

Version: R2023b

