How can I scale the action of a DDPG agent in Reinforcement Learning?
Hello everyone,
I have an environment in Simulink whose action should vary between 0 and 1. Although I am using a sigmoidLayer as the final layer of the actor, in some episodes the action exceeds the 0-1 boundary during training.
How can I fix this?
Maybe a scalingLayer would help, but I don't know all the values the action takes over the whole training process, so the Scale and Bias values for the scalingLayer are unknown.
Is there any solution?
Thanks for any help.
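(Since the question states the desired action range is [0, 1], one common workaround, sketched here rather than taken from the thread, is to bound the actor output by construction: end the actor with a tanhLayer followed by a scalingLayer that maps [-1, 1] onto [0, 1]. The layer names are standard Reinforcement Learning Toolbox / Deep Learning Toolbox layers; `obsDim` is an assumed observation size.)

```matlab
% Sketch: bound the actor output to [0, 1] by construction.
% tanhLayer outputs values in [-1, 1]; scalingLayer applies
% y = Scale .* x + Bias, so Scale = 0.5 and Bias = 0.5 map onto [0, 1].
actorLayers = [
    featureInputLayer(obsDim)          % obsDim: observation size (assumed)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)             % one continuous action
    tanhLayer
    scalingLayer(Scale=0.5, Bias=0.5)  % [-1, 1] -> [0, 1]
    ];
```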
Answers (2)
Sam Chak
1 August 2023
Hi @awcii
Sounds like a constraint to me. This example shows how to train an RL agent for Lane Keeping Assist, where the front steering angle (the agent's action) is constrained to the range -15° to 15°.
Hope it helps!
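(For a symmetric range such as ±15°, the same bounding idea generalizes: for an arbitrary action range [lo, hi], use Scale = (hi - lo)/2 and Bias = (hi + lo)/2 after a tanhLayer. A sketch, assuming the steering angle is expressed in radians:)

```matlab
% Sketch: map tanh output in [-1, 1] to a symmetric +/-15 degree range.
% General rule for [lo, hi]: Scale = (hi - lo)/2, Bias = (hi + lo)/2.
maxSteer = 15*pi/180;                     % 15 degrees in radians (assumed units)
steerOutputLayers = [
    tanhLayer
    scalingLayer(Scale=maxSteer, Bias=0)  % [-1, 1] -> [-maxSteer, maxSteer]
    ];
```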
Emmanouil Tzorakoleftherakis
9 August 2023
DDPG training works by adding noise on top of the actor output to promote exploration, so you may see constraint violations even if the actor output itself is bounded. You can adjust the noise options in the DDPG agent options (specifically the mean and variance), or handle the violation on the environment side by adding saturation blocks in Simulink.
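(A sketch of the noise-tuning suggestion above, not code from the answer. The exact property names under NoiseOptions vary by MATLAB release; Mean/Variance shown here follow the wording in the answer, and `actor`/`critic` are assumed to be defined elsewhere.)

```matlab
% Sketch: shrink the exploration noise so noisy actions stay closer
% to the bounded actor output, and decay it over training.
agentOpts = rlDDPGAgentOptions;
agentOpts.NoiseOptions.Mean = 0;
agentOpts.NoiseOptions.Variance = 0.05;          % smaller variance, less overshoot
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5; % reduce exploration over time
agent = rlDDPGAgent(actor, critic, agentOpts);   % actor/critic defined elsewhere
```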