Volatile GPU-Util is 0% during neural network training
Hello.
I would like to train my neural network with 4 GPUs (on a remote server).
To utilize the GPUs, I set ExecutionEnvironment to 'multi-gpu' in the training options.
However, Volatile GPU-Util remains at 0% during training.
The data does appear to load into GPU memory, though.
I would appreciate your help.
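For reference, here is a minimal sketch of what I am doing (imds and layers are placeholder names, not my actual variables, and the solver is just for illustration):
opts = trainingOptions('sgdm', ...            % solver assumed for illustration
    'ExecutionEnvironment', 'multi-gpu');     % train across all local GPUs
net = trainNetwork(imds, layers, opts);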
8 comments
Joss Knight
on 12 Sep 2023
Edited: Joss Knight on 12 Sep 2023
Right, so the parfor is opening a pool with a lot of workers (presumably you have a large number of CPU cores); but unfortunately these are then not used for your preprocessing during training. You need to enable DispatchInBackground as well. Try that. You should have received a warning on the first run telling you that most of your workers were not going to be used for training.
It does look as though the general problem is that your data preprocessing is dominating the training time, meaning only a small proportion of each second is spent computing gradients, and that is what the utilization is measuring. If DispatchInBackground doesn't help, we can explore further how to vectorize your transform functions; you might also consider using augmentedImageDatastore, which provides most of what you need. Or you could preprocess the data on the GPU.
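For illustration, a sketch of the suggested change; trainDs (the transformed datastore), layers, and the solver are assumed names, not taken from your code:
% Sketch: DispatchInBackground lets the parallel pool workers run the
% datastore preprocessing while the GPUs compute gradients, instead of
% leaving those workers idle.
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'DispatchInBackground', true);
net = trainNetwork(trainDs, layers, opts);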
Accepted Answer
aditi bagora
on 25 Sep 2023
The error message indicates that there is an issue with dispatching the data in parallel in the background. To fix it, the class "CustomImageDatastore" must additionally inherit from the mixin class "matlab.io.datastore.Subsettable", which adds the subset support needed for parallel and multi-GPU environments; a skeleton is sketched below.
For further details, refer to the documentation for matlab.io.datastore.Subsettable.
Hope this helps you in solving the error.
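As a rough skeleton (the required method names follow the Subsettable documentation in recent releases, so verify them against your MATLAB version; the file-reading details are hypothetical and should be adapted to your preprocessing):
% Hypothetical skeleton of a subsettable custom image datastore.
% Inheriting from matlab.io.datastore.Subsettable, in addition to
% matlab.io.Datastore, is what allows training to partition the
% datastore across parallel workers and GPUs.
classdef CustomImageDatastore < matlab.io.Datastore & ...
                                matlab.io.datastore.Subsettable
    properties
        Files        % cell array of image file paths (assumed layout)
        CurrentIndex % index of the next file to read
    end
    methods
        function ds = CustomImageDatastore(files)
            ds.Files = files;
            ds.CurrentIndex = 1;
        end
        function tf = hasdata(ds)
            tf = ds.CurrentIndex <= numel(ds.Files);
        end
        function [data, info] = read(ds)
            info.Filename = ds.Files{ds.CurrentIndex};
            data = imread(info.Filename);   % your preprocessing goes here
            ds.CurrentIndex = ds.CurrentIndex + 1;
        end
        function reset(ds)
            ds.CurrentIndex = 1;
        end
        function frac = progress(ds)
            frac = (ds.CurrentIndex - 1) / numel(ds.Files);
        end
    end
    methods (Access = protected)
        % Required by the Subsettable mixin: return a datastore that
        % reads only the requested indices.
        function subds = subsetByReadIndices(ds, indices)
            subds = CustomImageDatastore(ds.Files(indices));
        end
        % Required via the mixin's partitioning support: the finest
        % granularity for splitting the data, here one file per part.
        function n = maxpartitions(ds)
            n = numel(ds.Files);
        end
    end
end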
More Answers (0)