Speeding up: cellfun and arrayfun versus for-loop

38 views (in the last 30 days)
gujax
gujax on 6 Jul 2021
Commented: Jan on 9 Jul 2021
I find that cellfun and arrayfun are slower by 1.6x.
I am trying to understand why that is.
I have a data matrix DM of size 4 x 2e8 with uint8 data.
This matrix needs to be split into submatrices of different types scattered all over it.
I need to evaluate a different function on each of these types.
So I just compared timings on type 1.
First I used arrayfun to retrieve the type-1 submatrices needed. The matrices had different sizes, so I converted each matrix to row form and padded them to a uniform size with NaNs.
%submatrix indexed by NFr_D
M=arrayfun(@(x) DM(NFr_D(x)+64: NFr_D(x)+64 + 4 * Header_length(x)-1), 1:length(NFr_D),'UniformOutput',false);
l=cellfun(@length,M); % return length of each cell in cell array M (DM is a numeric matrix, not a cell array)
L=max(l);
n=arrayfun(@(l) nan(1,L-l),l,'uni',0); % create vector of NaN to make each equal length
for i=1:length(M) % and make the array uniform
dM(i)={[M{i} n{i}]};
end
The result is dM: 1.6e6 cells, each holding 1x512 uint8 data.
Then I used cellfun to work on these cells.
Out = cellfun(@BuildData, dM,num2cell(LSB_Cnt'),'UniformOutput',false);
The steps taken inside BuildData:
  1. Reshape the 1x512 vector into a 4x[] matrix
  2. Remove the all-zero (and NaN-padded) columns from each matrix
  3. Combine rows 1 and 2, convert to binary (16 bits), and take the last 11 bits
  4. Take the relevant 4 bits, compare them to another vector whose length equals the number of cells (1.6e6), and take some action
  5. Read out the next two rows
  6. Output each result
function Dat_M = BuildData(DMat, LSB_FrCnt)
dM = reshape(DMat, 4, []);                 % use the input argument (was: reshape(dM,...))
nozero = all(dM ~= 0 & ~isnan(dM), 1);     % keep columns that are neither zero nor NaN padding
dM = dM(:, nozero);                        % (was the undefined name lnozero)
D_t = [dec2bin(dM(2,:), 8) dec2bin(dM(1,:), 8)];
tm = double(bin2dec(D_t(:, 6:16)));
fr = (bin2dec(D_t(:, 2:5)) ~= LSB_FrCnt);  % use the input argument (was: LSB_Cnt)
wx = dM(3,:);
wy = dM(4,:);
%Dat_M = {tm, fr, wx, wy}; % Hmm, don't know how to output cells of different sizes, so let me test just one output type
Dat_M = tm;
The arrayfun call to break DM into smaller subunits took ~0.68 min.
The cellfun call to produce the output took about 6 min.
When I use a for-loop, in which I do not have to convert to a uniform size or extract the submatrices, but just jump to the right rows and perform steps 1-6 above, it takes ~4.2 min.
I use the same expressions in both cases and make sure I am using logical indexing for speed.
I wonder if there is a good way to speed up the calculations.
  2 comments
Matt J
Matt J on 6 Jul 2021
Edited: Matt J on 6 Jul 2021
Well, there is no expectation that arrayfun and cellfun will be faster than a for-loop. They are just a way to abbreviate code.
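As a minimal illustration (with made-up data): both forms below make one function call per element, so arrayfun only shortens the notation.

```matlab
v = 1:5;

% arrayfun version
s1 = arrayfun(@(x) x^2, v);

% equivalent explicit loop, with pre-allocation
s2 = zeros(size(v));
for k = 1:numel(v)
    s2(k) = v(k)^2;
end

isequal(s1, s2)   % true: same result, same per-element work
```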
As for how to accelerate your best time so far, you would need to clarify the first part of the computation a bit more. You say that DM has 4 rows and 2e8 columns, but in your call to arrayfun you index DM with a single subscript.
DM(NFr_D(x)+64: NFr_D(x)+64 + 4 * Header_length(x)-1)
Is this deliberate?
gujax
gujax on 6 Jul 2021
Yes, this is deliberate, because I have linear indices that tell me where to start extracting each submatrix. I could generate subscript indices, but I realized it wasn't necessary (though if subscript indices are faster, I should try them).
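For context, a sketch (with hypothetical sizes) of why a single linear range works on a 4-row matrix: linear indices traverse MATLAB arrays column-major, so a range whose offset and length are multiples of the row count picks out whole consecutive columns.

```matlab
DM = reshape(uint8(1:20), 4, 5);   % hypothetical 4 x 5 matrix

% linear range covering columns 2..3 (elements 5..12, column-major)
sub1 = DM(5:12);                   % 1 x 8 row vector

% equivalent subscript form
sub2 = reshape(DM(:, 2:3), 1, []);

isequal(sub1, sub2)                % true
```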

Sign in to comment.

Accepted Answer

Jan
Jan on 6 Jul 2021
The question is not clear. Please post some code that reproduces your measurements.
There are several details in your code that influence the speed, so a comparison is impeded:
  • cellfun(@length, M) is slower than the char form: cellfun('length', M);
  • is dM pre-allocated?
  • This is not efficient for padding with NaNs:
l = cellfun(@length, M);
L = max(l);
n = arrayfun(@(l) nan(1,L-l),l,'uni',0);
for i=1:length(M)
dM(i) = {[M{i} n{i}]};
end
Move the curly braces to the left side of the assignment. If they are on the right, you waste time creating a temporary cell array that is then copied, instead of writing into the existing cell:
c = cell(1, 10);
% Slow:
for k = 1:10
c(k) = {k}; % 10 cell arrays are created
end
% Faster:
for k = 1:10
c{k} = k; % Just the value is copied into existing cell
end
With some other ideas:
len = cellfun('prodofsize', M);
L = max(len);
dM = cell(1, numel(M)); % Pre-allocate!!!
for i = 1:numel(M)      % numel() is safer than length()
    dM{i} = [M{i}, nan(1, L - len(i))];
end
dec2bin() and bin2dec() are not efficient. Stay with numbers if you have them as input and want them as output. Use bitget() and bitand() to extract specific bits.
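Concretely, assuming the byte order implied by the question's D_t = [dec2bin(dM(2,:),8) dec2bin(dM(1,:),8)] (row 2 = high byte, row 1 = low byte) and that the NaN padding has already been removed, the string round-trip can be replaced by plain integer arithmetic that works on whole vectors at once; a sketch with hypothetical data:

```matlab
dM = uint8([17 200; 163 9; 5 6; 7 8]);       % hypothetical 4 x 2 data

% Same 16-bit value as the dec2bin concatenation:
v  = double(dM(2,:))*256 + double(dM(1,:));

tm  = bitand(v, 2047);                        % last 11 bits, was bin2dec(D_t(:,6:16))
fr4 = bitand(bitshift(v, -11), 15);           % bits 12..15,  was bin2dec(D_t(:,2:5))
```

No char matrices are built, so this avoids both the dec2bin/bin2dec overhead and the per-element function-handle calls.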
Your code seems to have great potential for speed improvements, so a comparison of cellfun, arrayfun, and loops is not really meaningful yet. But it is expected that loops are faster: cellfun and arrayfun are mex functions that have to call back to the MATLAB level for each element. An exception is the commands provided as char vectors to cellfun(): 'isempty', 'islogical', 'isreal', 'length', 'ndims', 'prodofsize', 'size', 'isclass'. These are processed at the mex level and are faster.
cellfun and arrayfun are cool and allow a compact notation, but they are not designed to be faster than loops.
  3 comments
gujax
gujax on 7 Jul 2021
Speed improved a bit after following Matt J's suggestion of not using linear indexing.
However, bitset did not work at all; bitset does not work on arrays.
I have no idea why, e.g., bitset([98 88 78],1:4) returns the error "inputs must have same size". The documentation offered no help, so I gave up...
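For what it is worth, that error is the size rule, not a limitation to scalars: bitget/bitset work elementwise when both inputs have the same size, or when one is scalar (here a 1x3 array meets a 1x4 array). Also, bitset writes a bit, while bitget reads one. A small sketch:

```matlab
% Scalar value, vector of bit positions: bits 1..4 of one number.
bitget(98, 1:4)         % -> [0 1 0 0]   (98 = 1100010 in binary)

% Vector of values, scalar bit position: bit 2 of each element.
bitget([98 88 78], 2)   % -> [1 0 1]

% bitset([98 88 78], 1:4) fails because 1x3 and 1x4 don't match.
```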
Also, I eventually have to convert bits to decimal, so I have to use bin2dec again. Moreover, I thought of pulling this conversion out of the for-loop (when using the for-loop), but that does not help either, because the cells have different sizes: I have to go through the same NaN-padding to make them uniform, and bin2dec fails on NaN.
I am not sure why bitget or bitand would help me, since they are extra functions I need to call anyway.
Am I missing something?
Jan
Jan on 9 Jul 2021
The code you have posted uses a lot of variables without explanation or comments. If you post a minimal working example, it would be easier to reproduce the outputs at a higher speed. Currently, readers have to guess some details, and that is not a safe way to optimize code.

Sign in to comment.

More Answers (0)

Categories

Learn more about Matrix Indexing in Help Center and File Exchange

Products


Version

R2018b
