Multi-Step (D)DQN using Parallelization
I have noticed that the current DQN implementation does not allow combining multi-step returns (NumStepsToLookAhead > 1) with parallelization; the conflicting settings are sketched below. Multi-step returns are essential for my application, but I would still like to make use of all of my CPU cores.
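For concreteness, this is the combination I mean. This is only a sketch: env stands for my own environment object, and everything else is left at its defaults.

% Agent with multi-step returns enabled (assumes an environment "env" exists).
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

agentOpts = rlDQNAgentOptions("UseDoubleDQN", true, ...
    "NumStepsToLookAhead", 3);          % n-step returns

agent = rlDQNAgent(obsInfo, actInfo);   % default critic for the given specs
agent.AgentOptions = agentOpts;

% Training options requesting parallel workers.
trainOpts = rlTrainingOptions("UseParallel", true, ...
    "MaxEpisodes", 1000);

% train(agent, env, trainOpts);         % this combination is what is not supported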
I am therefore wondering whether it is possible to implement a custom DQN agent that allows for this. My goal is an implementation in which multiple workers generate experience samples, learning is performed centrally, and the updated policy is sent back to the workers regularly.
Is this a reasonable idea? If so, does anybody have an idea how I can implement this without duplicating too much of the default DQN implementation?
Thank you very much.
Answers (1)
Ayush Modi
20 Oct 2023
Hi David,
As per my understanding, you would like the worker nodes to generate experience samples and train the model on their local data, and then send the resulting model parameters to a central server to update the central model.
You can achieve this by using the concept of Federated Learning.
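As a rough illustration of the central aggregation step only: the function below is not a toolbox API, and its name and the assumed cell-array layout of the parameters are just one possible convention.

function avgParams = averageWorkerParameters(workerParams)
% averageWorkerParameters  Unweighted federated averaging of parameters.
%   workerParams is assumed to be a 1-by-W cell array in which each cell
%   holds one worker's learnable parameters as a cell array of arrays
%   with identical sizes and ordering.
numWorkers = numel(workerParams);
avgParams  = workerParams{1};
for w = 2:numWorkers
    for p = 1:numel(avgParams)
        avgParams{p} = avgParams{p} + workerParams{w}{p};
    end
end
for p = 1:numel(avgParams)
    avgParams{p} = avgParams{p} / numWorkers;   % simple unweighted average
end
end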
Please refer to the MathWorks documentation on Federated Learning for more information.
You can also implement a custom DQN/DDQN agent with your own training loop; a sketch of one possible worker/central-learner loop follows.
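This is only a hedged sketch of your idea: each worker runs an episode with a copy of the current parameters and returns raw experiences, the client performs the multi-step update centrally, and the refreshed parameters are broadcast on the next iteration. Here runWorkerEpisode and multiStepDQNUpdate are hypothetical helper functions you would write yourself; only gcp, parpool, parfeval, and fetchNext are Parallel Computing Toolbox functions.

nSteps        = 3;      % multi-step return length
numIterations = 100;    % outer training iterations

pool = gcp("nocreate");
if isempty(pool)
    pool = parpool;     % one worker per core by default
end

params = [];            % e.g. critic learnable parameters (getLearnableParameters)
buffer = {};            % central experience store

for iter = 1:numIterations
    % Launch one rollout per worker with a copy of the current parameters.
    futures(1:pool.NumWorkers) = parallel.FevalFuture;
    for w = 1:pool.NumWorkers
        futures(w) = parfeval(pool, @runWorkerEpisode, 1, params);  % hypothetical helper
    end

    % Collect experiences as workers finish, then update centrally so the
    % n-step targets are always computed with the latest parameters.
    for w = 1:pool.NumWorkers
        [~, experiences] = fetchNext(futures);
        buffer{end+1}    = experiences;                        %#ok<AGROW>
        params = multiStepDQNUpdate(params, buffer, nSteps);   % hypothetical helper
    end
end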
I hope this resolves the issue you were facing.