reinforcement learning and DDPG agent problem
beni hadi
on 18 Sep 2020
Commented: beni hadi on 19 Sep 2020
I used the Reinforcement Learning Toolbox for path planning of a robot with the DDPG algorithm. In my scenario, the robot starts from a random position and has to reach a random goal location. After training, the result is a fixed path, and when I change the goal position the path does not change. It is as if the network has learned only one path. A dropout layer is used in the network structure.
Does anyone have any idea what went wrong?
Accepted Answer
Emmanouil Tzorakoleftherakis
on 18 Sep 2020
It looks like training was not successful. Many things could be at fault here - some suggestions:
1) Make sure you are randomizing the target location at the beginning of each episode. It would also help to add visualization so you can verify that the targets actually move and debug the agent's behavior during training (see the environment sketch after this list).
2) The agent may not have enough information available to make decisions. Make sure the observations provide enough information - for example, the goal position or the vector from the robot to the goal - otherwise the agent cannot produce goal-dependent paths.
3) What does the Episode Manager plot look like when training stops? You may need to train the agent for longer.
4) Why are you using a dropout layer? Unless your observations are images, this layer is likely not required (at least I don't think I have seen it in any shipping examples in Reinforcement Learning Toolbox), so your neural network architecture may also have something to do with this behavior (see the actor-network sketch below).
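To illustrate points 1) and 2), here is a minimal sketch of a custom environment built with rlFunctionEnv. The reset function randomizes both the start and the goal every episode, and the observation includes the vector to the goal so the agent can learn goal-dependent paths. The workspace size, state layout, reward, and helper names (myStepFunction, myResetFunction) are illustrative assumptions, not code from your setup.

% Observation: [robotX; robotY; goal dx; goal dy], action: 2-D velocity command
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1],'LowerLimit',-1,'UpperLimit',1);
env = rlFunctionEnv(obsInfo,actInfo,@myStepFunction,@myResetFunction);

% Reset: randomize BOTH the start and the goal at the beginning of each episode
function [initialObs,loggedSignals] = myResetFunction()
    robotPos = 10*rand(2,1);                 % random start in an assumed 10x10 workspace
    goalPos  = 10*rand(2,1);                 % random goal in the same workspace
    loggedSignals.RobotPos = robotPos;       % carried over to the step function
    loggedSignals.GoalPos  = goalPos;
    initialObs = [robotPos; goalPos - robotPos];   % the agent must see where the goal is
end

% Step: simple kinematic model with a dense "get closer to the goal" reward
function [nextObs,reward,isDone,loggedSignals] = myStepFunction(action,loggedSignals)
    dt = 0.1;                                % assumed sample time
    loggedSignals.RobotPos = loggedSignals.RobotPos + dt*action;
    toGoal  = loggedSignals.GoalPos - loggedSignals.RobotPos;
    nextObs = [loggedSignals.RobotPos; toGoal];
    reward  = -norm(toGoal);
    isDone  = norm(toGoal) < 0.2;            % episode ends near the goal
end

If the goal never appears in the observation (directly or as a relative vector), the agent has no way to tell episodes apart and will converge to a single average path, which matches the behavior you describe.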
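On point 4), here is a minimal sketch of a dropout-free actor for a DDPG agent with low-dimensional observations, assuming the 4-element observation and 2-element action specs from the sketch above; the layer sizes and names are arbitrary choices for illustration, not taken from a shipping example.

% Actor: plain fully connected layers, no dropout
actorNet = [
    featureInputLayer(4,'Name','obs')        % 4-element observation
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(2)                   % 2-element action
    tanhLayer('Name','action')               % keeps the action within [-1,1]
    ];

actor = rlDeterministicActorRepresentation(actorNet,obsInfo,actInfo, ...
    'Observation',{'obs'},'Action',{'action'});

A critic can be built in the same spirit (observation and action paths joined before the output, again without dropout) before passing both to rlDDPGAgent.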