Read inconsistent ASCII file to matrix

I'd like to obtain maximum performance when reading a file that contains both numeric and non-numeric lines. The files typically look like this:
% comment
text 1.49
1.52 -5.3 8.9710
3.629 -5.77 9
another text and numbers
% comment again
1 2 3
and so on
The file can easily contain 1 million lines.
I would like to obtain two cell arrays:
  1. One that contains all rows matching %f %f %f, i.e. a numeric triplet, already parsed as numeric doubles. Invalid lines should show up as empty entries or NaN.
  2. Another that contains all rows that did not match cell array 1, still as a cellstr, preferably with trimmed whitespace.
Obtaining cell array 2 is fairly simple if you already have 1: simply issue textscan and keep only the rows that did not match 1. However, I struggle to obtain cell array #1: textscan stops reading once it encounters an invalid line.
In a working example I used sscanf and parsed everything line by line. This took about 15 s for 1 million lines. Since textscan can read the whole file in less than a second, I am confident that there is room for improvement...
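For reference, the line-by-line pattern I benchmarked looks roughly like this (a sketch from memory; fname and the growing cell arrays are illustrative, and the growing itself is part of what makes it slow):
fid = fopen(fname, 'r');
numRows  = {};
textRows = {};
while true
    line = fgetl(fid);
    if ~ischar(line), break; end          % end of file
    [vals, n] = sscanf(line, '%f %f %f');
    if n == 3
        numRows{end+1} = vals.';          % triplet parsed as doubles
    else
        textRows{end+1} = strtrim(line);  % keep trimmed text rows
    end
end
fclose(fid);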

4 comments

Jan
Jan on 27 Mar 2019
Please post the code of your working example. Maybe there is an obvious point to improve the performance, perhaps a pre-allocation.
After trying many things, I ended up loading the whole file into memory using:
data = textscan(fid, '%s', 'delimiter', '\n', 'commentstyle', '%');
data = data{1};  % textscan wraps the lines in a 1x1 cell
Next, I used contains with all vowels to find all string occurrences (I know a priori that every text-containing row has at least one vowel, otherwise a regular expression would do the job in a similar time).
C=contains(data,["a" "e" "i" "o" "u" "A" "E" "I" "O" "U"]);
Finally, I can sort out all text-based rows and convert the remaining string with
data(C) = [];
nData = sscanf(sprintf('%s ',data{:}),'%f');
However, I do not like this as a solution because it uses a lot of memory...
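As an aside, the flat column vector returned by sscanf above can be turned into an N-by-3 matrix with a reshape (this assumes every remaining row really parsed into exactly three numbers; otherwise the reshape will error):
nData3 = reshape(nData, 3, []).';   % one triplet per row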
Jan
Jan on 1 Apr 2019
Edited: Jan on 1 Apr 2019
What is the meaning of searching for ["a" "e" "i" "o" "u" "A" "E" "I" "O" "U"]? What do you call "a lot of memory"? Can you provide an example file?
Tom DeLonge
Tom DeLonge on 9 Apr 2019
Sorry, I was on vacation last week.
I found it to be the fastest way to find all rows that also contain non-numeric data. As said above, it is faster than a regular expression since, in my case, all text-containing rows contain a vowel.
By a lot of memory I mean that the data array occupies about 100 MByte of RAM for a 10 MByte text file (a factor-of-10 overhead). While 100 MByte is not so dramatic yet, for even larger files this gets worse.
The file is proprietary, which means I cannot provide an example file. But the few lines I've shown above should come pretty close...


 Accepted Answer

Jan
Jan on 27 Mar 2019
Edited: Jan on 9 Apr 2019
Data = fileread(FileName);
C = strsplit(Data, char(10));
% [EDITED] Remove comments:
C(strncmp(C, '%', 1)) = [];
match = true(size(C));
NumC  = cell(size(C));
for iC = 1:numel(C)
    % [EDITED2] Small shortcut: skip lines that cannot start a number
    aC = C{iC};
    if ~isempty(aC) && any(aC(1) == '1234567890-.')
        [Num, n] = sscanf(aC, '%g %g %g');
        if n == 3
            NumC{iC}  = Num;
            match(iC) = false;
        end
    end
end
TextC = C(match);
Is this your current version using a loop? How long does it take?

5 comments

Tom DeLonge
Tom DeLonge on 27 Mar 2019
Edited: Tom DeLonge on 27 Mar 2019
Yes, this is pretty much the logic I used (`%f` instead of %g). This code takes ~15 s on my machine. I have changed the 30 s in the question above accordingly; there was additional processing in the loop which I forgot to remove before benchmarking...
Jan
Jan on 9 Apr 2019
I've added a test of whether the line starts with a numeric character before SSCANF is called. This might save some time, too.
Now use the profiler to find out which part needs the most time. If it is fileread, it is not worth struggling with the loop for parsing the data. If it is the parsing, try parfor.
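A parfor version of the loop from my answer could look like this (an untested sketch; it requires the Parallel Computing Toolbox, and NumC and match are sliced output variables):
parfor iC = 1:numel(C)
    aC = C{iC};
    if ~isempty(aC) && any(aC(1) == '1234567890-.')
        [Num, n] = sscanf(aC, '%g %g %g');
        if n == 3
            NumC{iC}  = Num;
            match(iC) = false;
        end
    end
end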
Tom DeLonge
Tom DeLonge on 10 Apr 2019
Neat idea to parallelize the second part, thank you!
Indeed it is part two that takes most of the time, and with parfor on a 4-core CPU I was able to cut the processing time in half. Yet the textscan approach is still at least another factor of 2 faster on my machine than the parallelized version. This is somewhat impressive, since sscanf is, afaik, a pretty low-level function. Memory usage is similar in both cases.
Jan
Jan on 10 Apr 2019
@Tom: textscan is fast for valid inputs; there I would expect fscanf to be even faster. But as soon as the input cannot be handled by a simple format specifier, the processing gets much slower.
Some C code would be faster too, but it is very tedious to write. It must import the file line by line, and you have to create a buffer large enough to hold the longest line. Unfortunately you do not know that length in advance, and the same goes for the number of outputs. Re-allocating the output array dynamically is a mess in C. So maybe the code runs a few seconds faster, but you need many hours of writing and testing. That is why I like MATLAB.
Tom DeLonge
Tom DeLonge on 10 Apr 2019
Yes, I do agree and understand the limitations of textscan. Thank you for the insights!


More Answers (1)

Guillaume
Guillaume on 27 Mar 2019
Edited: Guillaume on 27 Mar 2019
Unfortunately, textscan has no option to ignore invalid lines, so you're going to have to parse the file line by line, or implement the parsing in a MEX file.
The following takes about 10s on my machine for a million lines. It's probably similar to what you've done already:
function [num, text] = parsefile(path)
    lines = strsplit(fileread(path), '\n');
    num   = cellfun(@(l) sscanf(l, '%f %f %f')', lines, 'UniformOutput', false);
    text  = lines(cellfun(@isempty, num));  % could use cellfun('isempty', num) for a marginal speed gain
end

1 comment

Tom DeLonge
Tom DeLonge on 27 Mar 2019
Thanks, this version takes 20 s on my computer and is a bit slower than Jan's.


Products

Version

R2019a
