Create and Train DQN Agent with just a State Path and Not Action Path
Huzaifah Shamim
5 Jul 2020
Commented: Huzaifah Shamim
6 Jul 2020
Every example I have seen of a DQN in MATLAB uses two input paths: the state and the action. However, DQN can also be done with just one input, the state, but there are no examples for that case. How can that be done in MATLAB? My input would be a binary vector, and my output would be a choice between two actions.
Basically I am trying to recreate this: http://cwnlab.eecs.ucf.edu/wp-content/uploads/2019/12/2019_MLSP_ANCS_NAZMUL.pdf
Accepted Answer
Emmanouil Tzorakoleftherakis
6 Jul 2020
Hello,
This page shows how this can be done in R2020a. We will have examples showing this workflow in the next release.
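As a rough sketch of the state-only (multi-output) critic that workflow uses, with the R2020a Reinforcement Learning Toolbox API; the state length, layer sizes, and layer names below are illustrative assumptions, not details from the question:

```matlab
% Sketch of a DQN critic with only a state path (no action input).
% Assumed: binary observation vector of length 8, two discrete actions.
obsInfo = rlNumericSpec([8 1]);          % state: binary vector (length assumed)
actInfo = rlFiniteSetSpec([1 2]);        % two possible actions

% Network with a single observation input; the final layer has one
% output per action, i.e. it returns Q(s,a) for every action at once.
net = [
    imageInputLayer([8 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(64,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(2,'Name','qvals')];

% Omitting the 'Action' name-value pair makes this a multi-output
% (state-only) Q-value representation.
critic = rlQValueRepresentation(net,obsInfo,actInfo,'Observation',{'state'});

agent = rlDQNAgent(critic,rlDQNAgentOptions('UseDoubleDQN',true));
```

From there, `train(agent,env,trainOpts)` works the same way as in the two-input examples.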
Hope that helps.
9 comments
Emmanouil Tzorakoleftherakis
6 Jul 2020
This sounds doable. You may even be able to do this without custom loops, using built-in agents (something like centralized multi-agent RL). You can use a single agent and, at each step, extract the appropriate action and apply it to the appropriate part of the environment. The tricky part, as is typical of multi-agent RL, is choosing the right set of observations to make sure your process is Markov; this will likely require observations from each 'subagent'.
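As a sketch of the "extract the appropriate action" step: a hypothetical setup with two subagents, each choosing between two actions, encoded as one joint discrete action with values 1 to 4 that the environment decodes each step. The encoding is an assumption for illustration:

```matlab
% Hypothetical joint action space: two subagents, two choices each,
% exposed to the single agent as one discrete action in 1..4.
jointActInfo = rlFiniteSetSpec(1:4);

% Inside the environment's step function, decode the joint action:
jointAction = 3;                       % example value produced by the agent
a1 = mod(jointAction-1,2) + 1;         % subagent 1's action: 1 or 2
a2 = floor((jointAction-1)/2) + 1;     % subagent 2's action: 1 or 2
% ...apply a1 and a2 to their respective parts of the environment...
```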
More Answers (0)