How can I optimize GPU usage while training multiple RL PPO Agents using multiple GPUs?
14 views (last 30 days)
MathWorks Support Team on 6 Mar 2024
Answered: MathWorks Support Team on 18 Mar 2024
I wish to train multiple PPO agents asynchronously using multiple GPUs. What is the best way to allocate GPU and CPU resources to achieve this?
Accepted Answer
MathWorks Support Team on 6 Mar 2024
If the network sizes are small, the best approach is to train on the CPU in a parallel pool with an appropriate number of workers, rather than on a GPU. This is often the most effective option: PPO tends to perform better with larger amounts of training data, and small networks may not see a meaningful speedup from training on a GPU.
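This CPU-based setup can be sketched as follows (the agent, environment, and worker count are placeholders for your own configuration):

```matlab
% Sketch: parallel PPO training on CPU workers (agent and env assumed to exist).
parpool(8);                              % worker count is illustrative

opts = rlTrainingOptions( ...
    MaxEpisodes=1000, ...                % illustrative stopping criterion
    UseParallel=true);                   % distribute training across workers

% Leave the actor and critic on the CPU (the default) for small networks.
trainingStats = train(agent, env, opts);
```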
If training on GPUs, restrict the parallel pool worker count to the number of available GPUs, so that each worker has exclusive access to one GPU during training. For more information on training with multiple GPUs, please refer to the following page:
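As an illustration, one way to match the pool size to the GPU count and place the agent's networks on the GPU might look like this (a sketch only; the agent and environment variables are assumed to exist):

```matlab
% Sketch: one parallel worker per available GPU.
numGPUs = gpuDeviceCount("available");
parpool(numGPUs);

% Move the actor and critic computations onto the GPU.
critic = getCritic(agent);
critic.UseDevice = "gpu";
agent  = setCritic(agent, critic);

actor = getActor(agent);
actor.UseDevice = "gpu";
agent = setActor(agent, actor);

opts = rlTrainingOptions(UseParallel=true);
trainingStats = train(agent, env, opts);
```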
With reference to the information in the above link, please keep the following additional points in mind:
- In your "rlTrainingOptions" object, if "UseParallel" is set to true and the actor and critic are set to use the GPU, then MATLAB automatically uses multiple GPUs for training. In this case, calling "train" inside a "parfor" loop or "spmd" block is not supported.
- If "UseParallel" is set to false and the actor and critic are set to use the GPU, you may call "train" inside a "parfor" loop.
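For the second case (training each agent on its own GPU with "UseParallel" set to false), a minimal sketch, assuming cell arrays of agents and environments already exist, might look like:

```matlab
% Sketch: train several PPO agents asynchronously in a parfor loop,
% pinning each iteration to one GPU (agents/envs are assumed cell arrays).
opts = rlTrainingOptions(UseParallel=false);   % required inside parfor

parfor k = 1:numel(agents)
    gpuDevice(mod(k-1, gpuDeviceCount) + 1);   % select this worker's GPU
    stats{k} = train(agents{k}, envs{k}, opts);
end
```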