The number of workers in PARPOOL is limited to 6 on Linux Cluster

BenC on 4 Nov 2019
I'm currently working through some stereoscopic video processing on a Linux cluster with 11 physical processors and 126 GiB RAM, running R2019a. Each physical processor has 8 cores (Opteron 6300 series). For some reason, if I try to create a parpool larger than 6 workers, it fails at the verification step. I'm currently running an analysis, but I will post the specific error message once it's complete. I was originally restricted to fewer than 3 workers, but I increased the size of my Java heap memory to 8 GB (using a java.opts file in the /bin/glnxa64/ directory). Memory usage at 6 workers does not come near system limits. How can I open this up to take advantage of the other physical processors on this machine? Should I increase the Java heap memory again?
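For reference, the java.opts change described above amounts to a one-line file (the exact value is whatever heap size you want; -Xmx8g matches the 8 GB mentioned):

```
# java.opts -- placed in matlabroot/bin/glnxa64/
# One JVM option per line; -Xmx sets the maximum Java heap size.
-Xmx8g
```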

Accepted Answer

Jason Ross on 5 Nov 2019
It will be very useful to see the error message.
It's also not clear what scheduler you are using -- is this a local scheduler, MJS, etc? I'm also assuming that when you say "11 physical processors" you mean 11 nodes in the cluster with 126 GiB each?
My initial hunch is that you are hitting some limit set in the user environment -- something like file handles, RAM, vmem size, etc. In addition to the actual error message it might be useful to see the output of the shell command "ulimit -a" or "limit", depending on what your system uses.
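For example, on a bash system the current limits can be listed like this (illustrative; the exact item names vary slightly by distribution):

```shell
# Show all per-process resource limits for the current shell (bash).
ulimit -a

# The limits most likely to bite parallel workers:
ulimit -n   # open files (file descriptors per process)
ulimit -u   # max user processes
ulimit -v   # virtual memory, in KiB (or "unlimited")
```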
In my experience it's usually the file-descriptor ("open files") limit that is set too low and needs to be increased.
You could also be running out of communication ports, or hitting a communications error if you have firewalls set up (and are using MATLAB Parallel Server).
  3 comments
Jason Ross on 5 Nov 2019 (edited 5 Nov 2019)
I suggest upping the "open files" setting. I suspect that you are hitting file handle limits. FWIW mine is set to 4096 on my workstation and I up it to 65535 for some servers.
The exact procedure is slightly different for each OS, but in general you edit /etc/security/limits.conf and might also need to set something in your shell initialization files.
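On most Linux distributions, the persistent setting lives in /etc/security/limits.conf (or a drop-in file under /etc/security/limits.d/). The entries look roughly like this; the username and values are placeholders you would adjust:

```
# /etc/security/limits.conf -- illustrative entries
# <domain>   <type>   <item>    <value>
youruser     soft     nofile    65535
youruser     hard     nofile    65535
```

The change takes effect for new login sessions (applied via pam_limits), so log out and back in before relaunching MATLAB.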
If you want to try a one-off experiment, you can raise the limit in a single shell (something like "ulimit -n 4096" in bash, or "limit descriptors 4096" in csh) and launch MATLAB from that shell. The spawned worker processes should inherit the changed limit.
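In bash syntax, that one-off experiment looks like this (4096 is illustrative; the value must not exceed the hard limit reported by `ulimit -Hn`):

```shell
# Raise the soft "open files" limit for this shell only (bash).
ulimit -Sn 4096
ulimit -Sn   # verify the new value took effect

# Then launch MATLAB from this same shell so the workers inherit it:
# matlab &
```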
BenC on 5 Nov 2019
That was the ticket. Thanks so much. Adjusting the "open files" limit fixed the issue.


More Answers (2)

zawye aung on 19 Jun 2020
Any suggestions? I have this same problem; please help.

zawye aung on 19 Jun 2020
I'm trying to use MDCS with an MJS cluster profile. In my test, I've already passed Admin Center validation, but when I use MATLAB with the MJS cluster profile, validation fails: the pool job test does not pass (the error is shown in the attached figure). I would appreciate any suggestions. Thank you!
