
Understanding Entropy Loss for PPO Agents Exploration

Mike Jadwin on 10 Oct 2023
Hello,
I have been experimenting with a PPO agent training on a continuous action space. I am a little confused about how exploration works when using entropy loss. I have mostly used epsilon-greedy exploration in the past, which is easier to understand in terms of how the agent explores: it takes random actions with probability epsilon, and the decay of epsilon is easy to calculate from the decay rate. That means I know exactly after how many training iterations the agent should start relying on the trained policy instead of exploring. I am not able to understand how the entropy term controls exploration in the same sense.

Answers (1)

Emmanouil Tzorakoleftherakis
Hi,
In PPO, the goal of training is to strike a balance between the entropy term and fine-tuning the probabilities of the available actions. This happens throughout training: unlike the epsilon-greedy approach, exploration in PPO does not diminish over time. This page and the references therein should be helpful.
Also, don't forget that PPO is stochastic, so there is always some exploration happening when the action is sampled from the policy distribution. If after training you just want to use the action mean (i.e., not sample to get the policy output), you can set this option to 0.
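For concreteness, here is a minimal MATLAB sketch of where these two knobs live. The property names are assumptions on my part (EntropyLossWeight in rlPPOAgentOptions, and the agent's UseExplorationPolicy flag as the likely meaning of "this option"); obsInfo, actInfo, and the numeric values are placeholders, so check them against your Reinforcement Learning Toolbox release.
```matlab
% Minimal sketch (assumed property names; obsInfo/actInfo come from your environment)
opt = rlPPOAgentOptions("EntropyLossWeight", 0.01);  % larger weight -> stronger push toward a more exploratory policy
agent = rlPPOAgent(obsInfo, actInfo, opt);           % agent with default actor/critic networks

% ... train(agent, env, trainOpts) ...

% After training, return the mean of the Gaussian policy instead of sampling it
% (assuming "set this option to 0" refers to the UseExplorationPolicy agent property):
agent.UseExplorationPolicy = false;
```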
Hope this helps
  4 comments
Mike Jadwin on 29 Feb 2024
Yeah, I don't think what I tried was ideal, but here's what I did: set a specific number of training epochs you want to complete for each entropy setting. For example, you can start with a high entropy loss weight for maybe 1000 epochs, then take that trained agent, initialize a new agent's training parameters from it, and train with a lower entropy term. It's not ideal because every time you kick off a new training session it opens a new training history window, so depending on how many times you do this it can get pretty cluttered, especially if you want a linear decay where the entropy changes frequently; I had to turn off the plotter so it doesn't refresh every time. It might be worth finding another agent, since there is no built-in way to anneal the exploration in the default PPO agent. It seems designed to explore throughout the entire training time, which in my case led to unstable or suboptimal results.
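A rough sketch of that staged workaround, as I understand it. This is not a built-in annealing feature; env, obsInfo, and actInfo are placeholders, and the schedule and episode counts are arbitrary examples.
```matlab
% Staged entropy annealing: train, then rebuild the agent from its trained
% actor/critic with a lower entropy loss weight, and continue training.
entropySchedule = [0.05 0.02 0.01 0.001];   % decreasing entropy loss weights

agent = rlPPOAgent(obsInfo, actInfo, ...
    rlPPOAgentOptions("EntropyLossWeight", entropySchedule(1)));

trainOpts = rlTrainingOptions( ...
    "MaxEpisodes", 1000, ...
    "Plots", "none");                        % avoid opening a new Episode Manager per stage

for k = 1:numel(entropySchedule)
    if k > 1
        % Carry the learned parameters over, but with the next (lower) entropy weight.
        agent = rlPPOAgent(getActor(agent), getCritic(agent), ...
            rlPPOAgentOptions("EntropyLossWeight", entropySchedule(k)));
    end
    train(agent, env, trainOpts);            % one training stage at this entropy weight
end
```
Each call to train continues from the already trained networks, so the agent effectively sees a piecewise-constant, decreasing entropy schedule.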
Mohammed Mohiuddin on 15 Apr 2024
Thank you for your suggestion. I tried this approach and it seemed to work, but as you said, it is not a very efficient approach.


Version: R2022a
