Setting up parallel computations for a single dataset, as opposed to spmd

I am working with a program that needs to enter and exit a parfor loop many times, and the data I am working on is the same for all iterations and for all workers. The code, stripped down and commented, is below. The key point is that the data is being sent to the workers over and over again but never changes. SPMD or distributed arrays won't help (I believe!) because it's the same dataset each time, i.e. I do not want to carve it up into sections. It's the models I need to change, which are much smaller than the data (allWfs).
Is there a way to, say, pre-distribute an array (allWfs in this case) to each worker and keep it on the workers for the whole calculation?
Code:
% TD_parameters is predefined.
% allWfs is the data I am distributing around. It is the same for all models
% and is never modified.
[allWfs, ~] = load_data_syn(TD_parameters, t);

for iter = 1:TD_parameters.n_iter
    % This is the parallel loop, where I do one iteration of the Monte Carlo on each model.
    parfor i = 1:TD_parameters.n_chains
        mset(i) = TD_inversion_function_PT(mset(i), t, TD_parameters, allWfs);
    end
    % This is the parallel tempering step that needs to happen after each parfor
    % statement, which is why I am entering and exiting the parallel loop so many times.
    inds = randperm(length(mset));
    for m1 = 1:length(inds)
        for m2 = 1:length(inds)
            if mset(inds(m1)).T == mset(inds(m2)).T || m2 >= m1
                continue
            end
            a = (mset(inds(m2)).llh - mset(inds(m1)).llh)*mset(inds(m1)).T;
            a = a + (mset(inds(m1)).llh - mset(inds(m2)).llh)*mset(inds(m2)).T;
            thresh = log(rand());
            if a > thresh
                T1 = mset(inds(m1)).T;
                mset(inds(m1)).T = mset(inds(m2)).T;
                mset(inds(m2)).T = T1;
            end
        end
    end
    % Models are saved here; removed for conciseness.
end

Accepted Answer

Edric Ellis on 22 Apr 2022
This is precisely the sort of thing that parallel.pool.Constant was designed for. You build a Constant once on the client, the data is transferred to the workers once, and then you can access it in multiple parfor loops (or spmd blocks...). In your case, you'd use it a bit like this:
[allWfs, ~] = load_data_syn(TD_parameters, t);
allWfsConstant = parallel.pool.Constant(allWfs);
for iter = 1:TD_parameters.n_iter
    parfor i = 1:TD_parameters.n_chains
        mset(i) = TD_inversion_function_PT(mset(i), t, TD_parameters, allWfsConstant.Value);
    end
    % etc...
end
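As an aside, if loading the data on the client is itself slow, parallel.pool.Constant also accepts a function handle; the function then runs once on each worker, so the large array never crosses the client-worker connection at all. A sketch, reusing the names from the original post (and assuming load_data_syn and its inputs are available on the workers' path):

```matlab
% Build the Constant from a function handle: load_data_syn executes on each
% worker, and its first output becomes that worker's Constant value.
allWfsConstant = parallel.pool.Constant(@() load_data_syn(TD_parameters, t));
```

Inside parfor, access is the same as before: allWfsConstant.Value.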

More Answers (1)

Joseph Byrnes on 22 Apr 2022
This certainly looks promising! But I'm not sure I am using it correctly.
If I modify my code to this format:
%%%%%%%%
[allWfs, ~] = load_data_syn(TD_parameters, t);
p = parpool('local');
disp('Distributing data to each worker before running the program')
allWfs_const = parallel.pool.Constant(allWfs);
TD_const = parallel.pool.Constant(TD_parameters);
for iter = 1:TD_parameters.n_iter
    ticBytes(p)
    parfor i = 1:TD_parameters.n_chains
        mset(i) = TD_inversion_function_PT(mset(i), TD_const.Value, allWfs_const.Value);
        %mset(i) = TD_inversion_function_PT(mset(i), TD_parameters, allWfs);
    end
    tocBytes(p)
    % etc...
end
%%%%%%%%%%%%%%%
then for two workers on my laptop, I get exactly the same ticBytes/tocBytes output whether I use the lines with the _const variables or switch the comments and use the non-const variables. Should I see a difference with ticBytes/tocBytes?
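For reference, the difference should appear in the BytesSentToWorkers column printed by tocBytes: a Constant is transferred once at construction, whereas a plain broadcast variable is re-sent on every parfor entry. A minimal self-contained sketch of that comparison (the array size and the simple sum inside the loop are hypothetical stand-ins, not part of the original code):

```matlab
p = gcp();                            % existing or new parallel pool
bigData = rand(1e7, 1);               % ~80 MB of data that never changes
c = parallel.pool.Constant(bigData);  % sent to each worker once, right here

ticBytes(p);
parfor i = 1:4
    s1(i) = sum(c.Value) + i;         % reads the copy already on the worker
end
tocBytes(p)   % BytesSentToWorkers should stay small for this loop

ticBytes(p);
parfor i = 1:4
    s2(i) = sum(bigData) + i;         % broadcast variable: re-sent this loop
end
tocBytes(p)   % BytesSentToWorkers should now include the full array
```

If the dataset is small relative to the other loop traffic, the two numbers can look nearly identical, which is consistent with the follow-up comment below.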
  2 Comments
Joseph Byrnes on 25 Apr 2022
I can reproduce these numbers to the third decimal (R2022a, macOS Big Sur)! I think the test case I was trying was not large enough for the difference to show up in ticBytes.
Thank you.


Release: R2021b
