
Why does matlab.io.datastore.MiniBatchable not support parallel processing (multi-GPU training)?

1 view (last 30 days)
For my application, I need to feed data into the CNN in a specific order during training. To do so, I implemented a custom mini-batchable datastore called CmapDatastore that inherits from the matlab.io.datastore.MiniBatchable class. It works fine when training a CNN model on a single GPU. However, when I try to train on multiple GPUs by setting trainingOptions as:
trainingOptions( ...
    'ExecutionEnvironment', 'multi-gpu', ...)
The error message is:
The MiniBatchable Datastore CmapDatastore does not support parallel operations.
The code of CmapDatastore.m is attached to this message.
I would appreciate your help greatly.
Many thanks,
Yong

Accepted Answer

Yoann Roth
Yoann Roth on 14 Mar 2022
Hello Yong,
To support parallel training with your custom datastore, you need to choose one of the following options:
  • Implement MiniBatchable + PartitionableByIndex (see here)
  • Implement Partitionable. This is what is documented here.
Unfortunately, you implemented MiniBatchable + Partitionable and this is not the correct combination.
Usually, the recommendation is just to stick to datastores that we ship (e.g. fileDatastore), and to use the transform function to modify it appropriately.
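As a minimal illustration of that recommendation (the folder, the file pattern, and the variable names X and Y inside the MAT-files are all hypothetical), a shipped datastore combined with transform might look like:

```matlab
% Hypothetical: .mat files, each containing predictors X and response Y
fds = fileDatastore('data/*.mat', 'ReadFcn', @load);
% Wrap the shipped datastore so each read returns a {predictor, response} row
tds = transform(fds, @(s) {s.X, s.Y});
```

The transformed datastore can then be passed directly to trainNetwork, and partitioning for parallel training is handled by the underlying fileDatastore.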
In your case, it seems that the choice of a custom datastore is justified because the data seems to have a specific structure and shuffle and partition behave in a specific way.
To support parallel training, you could
  • Implement PartitionableByIndex if it is not too much effort. Given the structure of your data, though, it might not be possible to index it directly.
  • Otherwise, remove the MiniBatchable interface from your datastore, and modify read so that it returns a row of data rather than a table, like so:
function [data,info] = read(ds)
info = struct;
% One observation per read: {predictor, response}
data = {read(ds.Datastore), ds.Labels(ds.CurrentFileIndex)};
ds.CurrentFileIndex = ds.CurrentFileIndex + 1;
end
This makes your datastore Partitionable only, which should support the 'multi-gpu' option.
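Put together, a minimal sketch of such a Partitionable-only datastore could look like the following (property names mirror the read snippet above; the label bookkeeping inside partition is deliberately elided, since it depends on how CmapDatastore stores its files and labels):

```matlab
classdef CmapDatastore < matlab.io.Datastore & ...
        matlab.io.datastore.Partitionable
    properties
        Datastore         % underlying fileDatastore over the data files
        Labels            % one response per file
        CurrentFileIndex  % index of the next file to read
    end
    methods
        function tf = hasdata(ds)
            tf = hasdata(ds.Datastore);
        end
        function [data, info] = read(ds)
            info = struct;
            % One observation per read: {predictor, response}
            data = {read(ds.Datastore), ds.Labels(ds.CurrentFileIndex)};
            ds.CurrentFileIndex = ds.CurrentFileIndex + 1;
        end
        function reset(ds)
            reset(ds.Datastore);
            ds.CurrentFileIndex = 1;
        end
        function subds = partition(ds, n, ii)
            subds = copy(ds);
            subds.Datastore = partition(ds.Datastore, n, ii);
            % NOTE: Labels must be split the same way as the files;
            % that bookkeeping is omitted here because it depends on
            % how the file list and labels are stored.
            reset(subds);
        end
    end
    methods (Access = protected)
        function n = maxpartitions(ds)
            n = maxpartitions(ds.Datastore);
        end
    end
end
```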
  4 comments
Yoann Roth
Yoann Roth on 15 Mar 2022
Hi Yong,
Your custom datastore does not have to be "MiniBatchable" to be supported.
Having a custom datastore that is only "Partitionable" will work, and support Parallel training.
Yoann
Yong
Yong on 16 Mar 2022
Hi Yoann,
Thank you very much for your response. Yes, you are correct that a Partitionable datastore works with trainNetwork.
I have another question: because the read function in a Partitionable datastore reads only one file at a time, it seems that when multiple GPUs are used in training, the datastore might be partitioned into segments that don't follow my specific requirement, i.e. the data files must be read in groups of at least 4 files, in the order x, x_rotated_90, x_rotated_180, and x_rotated_270. I arrange the files in that order in the datastore. Any suggestion for a workaround?
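One possible workaround (a sketch only; the Files and Labels properties and the fixed group size of 4 are assumptions about how CmapDatastore stores its data) is to partition at group granularity, so each part always receives whole groups of 4 consecutive files in the original order:

```matlab
function subds = partition(ds, n, ii)
    % Hand out whole groups of 4 consecutive files
    % (x, x_rotated_90, x_rotated_180, x_rotated_270) to part ii of n.
    groupSize = 4;
    numGroups = numel(ds.Files) / groupSize;
    edges   = round(linspace(0, numGroups, n + 1));     % group boundaries
    groups  = (edges(ii) + 1 : edges(ii + 1)).';        % groups for part ii
    fileIdx = (groups - 1) * groupSize + (1:groupSize); % one row per group
    fileIdx = reshape(fileIdx.', 1, []);                % flatten, keep order
    subds = copy(ds);
    subds.Files  = ds.Files(fileIdx);
    subds.Labels = ds.Labels(fileIdx);
    reset(subds);
end
```

With this approach, maxpartitions should return the number of groups rather than the number of files, so that the datastore is never split finer than one group.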
Many thanks,
Yong


More Answers (0)
