parpool memory allocation per worker
Hello, I'm learning to submit batch jobs on SLURM. Realistically, I can request at most
#SBATCH -n 32
#SBATCH --mem-per-cpu=4G
In other words, I can request 32 cores and 128G of memory.
Now, I want to run a global optimization (MultiStart) in parallel. Currently, I set the number of parpool workers to 32 (equal to the number of cores), but I constantly run into out-of-memory errors.
I'm curious whether reducing the number of parpool workers to, say, 16 can resolve this issue. If I'm not mistaken, with 32 workers each worker has at most 4 GB of memory to use, whereas with 16 workers each worker has at most 8 GB.
I'd be grateful if you could correct or confirm what I wrote. Obviously, I could just try, but it takes a long time to get out of the queue, and the optimization itself takes days, so I want to make sure what I try makes sense before submitting.
Next, assuming the above makes sense, what happens if I set the number of workers to, say, 20, so that 32/20 = 1.6 cores per worker is not an integer?
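For concreteness, this is a sketch of the kind of submission I have in mind; the module name and the script name `run_multistart` are placeholders for whatever your cluster and code actually use:

```shell
#!/bin/bash
#SBATCH -n 32
#SBATCH --mem-per-cpu=4G
# Total allocation: 32 cores and 32 x 4 GB = 128 GB for the whole job.
# The parpool size is chosen inside MATLAB, so starting fewer workers
# (e.g. 16) lets them share the same 128 GB, roughly 8 GB each.

module load matlab                 # module name is cluster-specific
matlab -batch "run_multistart"     # placeholder script name
```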
Thank you for your guidance.
3 Answers
Vinayak
on 17 Jul 2024
Hi Jeong,
You are correct in assuming that reducing the number of workers will provide each worker with more memory. It's important to understand the memory requirements of your optimization function before adjusting the number of workers.
Regarding your second question, a worker count that does not evenly divide the number of cores should not cause any issues. SLURM and `parpool` will manage the resource allocation without problems.
Milan Bansal
on 17 Jul 2024
Hi Jeong Ho,
Each worker in a parpool has its own independent memory space. They do not share memory directly with each other or with the MATLAB client session. This means that each worker has its own MATLAB process and its own memory allocation.
The amount of memory each worker uses depends on the tasks assigned to it. If a task requires loading large datasets or performing memory-intensive computations, each worker will consume more memory.
The total available system memory is divided among the workers. For example, if your machine has 128 GB of RAM and you start a pool with 32 workers, in theory, each worker could use up to 4 GB of RAM (128 GB / 32 workers). However, in practice, memory usage might not be evenly distributed, and the actual memory available to each worker will depend on the specific tasks and data being processed.
If the memory usage exceeds the available system memory, you may encounter out-of-memory errors. To avoid this, you can:
- Reduce the number of workers in your parpool.
- Optimize your code to use less memory.
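A minimal sketch of the first option, assuming the MultiStart workflow from the question; the objective function, bounds, and number of start points are placeholders:

```matlab
% Start a smaller pool so each worker gets a larger share of the
% job's memory (e.g. 16 workers sharing 128 GB is roughly 8 GB each).
pool = parpool("Processes", 16);

% Placeholder problem setup: replace @myObjective, x0, lb, ub with
% your own objective and bounds.
problem = createOptimProblem("fmincon", ...
    "objective", @myObjective, ...
    "x0", x0, "lb", lb, "ub", ub);

% Run MultiStart across the pool; 200 start points is an example value.
ms = MultiStart("UseParallel", true);
[x, fval] = run(ms, problem, 200);

delete(pool);
```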
You can learn more in the Parallel Computing Toolbox and parpool documentation:
- https://www.mathworks.com/help/parallel-computing/index.html
- https://www.mathworks.com/help/parallel-computing/parallel-preferences.html
Hope this helps!
Edric Ellis
on 17 Jul 2024
Further to the other suggestions here, using parpool("Threads") will use less memory than parpool("Processes"), but not everything is supported there. Read more in the doc here: https://www.mathworks.com/help/parallel-computing/parallel.threadpool.html
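A minimal sketch of the thread-pool variant; whether a given workflow (including MultiStart) is supported on a thread pool depends on your MATLAB release, so check the linked documentation first:

```matlab
% Thread-based workers run inside the client MATLAB process and share
% its memory, so large read-only data is not copied once per worker.
pool = parpool("Threads");

% ... run your parallel workload here, e.g. a parfor loop or, if
% supported in your release, run(ms, problem, nStarts) ...

delete(pool);
```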