http://www.perlmonks.org?node_id=1014517

ibm1620 has asked for the wisdom of the Perl Monks concerning the following question:

I have a file of about 1B records, each containing 50 tab-separated fields. I'm writing a small app to make passes over this file on an ad-hoc basis, selecting and concatenating various of these fields.

For example: if the input file contains

FIRST\tMIDDLE\tLAST\tSTRNO\tSTRNAME\tCITY\tSTATE\tZIP\t...\n

I might wish to extract fields 3, 1, and 6:

LAST\tFIRST\tCITY\n

My current approach begins by splitting each record into a 50-element array and then accessing each by index. It's very slow. Can someone suggest a superfast approach to doing this, given only the set of indices I wish to extract?

Replies are listed 'Best First'.
Re: selecting columns from a tab-separated-values file
by Kenosis (Priest) on Jan 21, 2013 at 23:27 UTC

    Consider slicing a split to get only the elements you need, instead of splitting the entire line, as it benchmarks significantly faster:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $line = "FIRST\tMIDDLE\tLAST\tSTRNO\tSTRNAME\tCITY\tSTATE\tZIP" . "\tFOO" x 42;

    sub trySplit {
        my @capture = split /\t/, $line;
    }

    sub trySplitSlice {
        my @capture = ( split /\t/, $line )[ 0, 2, 5 ];
    }

    sub trySplitSliceLimit {
        my @capture = ( split /\t/, $line, 7 )[ 0, 2, 5 ];
    }

    cmpthese(
        -5,
        {
            trySplit           => sub { trySplit() },
            trySplitSlice      => sub { trySplitSlice() },
            trySplitSliceLimit => sub { trySplitSliceLimit() }
        }
    );

    Results:

                             Rate  trySplit  trySplitSlice  trySplitSliceLimit
    trySplit             110337/s        --           -46%                 -84%
    trySplitSlice        204730/s       86%             --                 -71%
    trySplitSliceLimit   708158/s      542%           246%                   --

    Update: Have added choroba's trySplitSliceLimit() option to the benchmarking.

    Update II: Thanks to AnomalousMonk, have appended "\tFOO" x 42 to the original string to create a string with 50 tab-delimited fields. This effectively shows the speed increase using trySplitSliceLimit().

    Update III: Changed splitting on ' ' to \t. Thanks CountZero.

      Limiting the split makes it even faster:
      sub trySplitSliceLimit { my @capture = (split ' ', $line, 7)[ 0, 2, 5 ]; }
      لսႽ† ᥲᥒ⚪⟊Ⴙᘓᖇ Ꮅᘓᖇ⎱ Ⴙᥲ𝇋ƙᘓᖇ

        Yes--excellent! Will modify my posting to include this option.

        This turns out to make the biggest difference, far and away. Of course, it depends on what fields are chosen - in particular, where the rightmost one is.

      How about using a 'record' string with more fields to better show the effect of a split limit? Just append something like  "\tFOO" x 42 to the existing eight-field string.

        Yes--impressive! Doing so makes a huge difference in the benchmarking. Will update the update...

      This is nice. Couple of questions:

      What's the first arg to split()? It appears to be a single blank char. How does that work to split upon tab chars?

      In trySplitSliceLimit, wouldn't it be better to set LIMIT to 3, or in general to the number of fields you expect to extract?

      And one observation for the record: the indices can appear in any order. To extract LAST, FIRST, and CITY, you'd write [2, 0, 5]

        What's the first arg to split()? It appears to be a single blank char. How does that work to split upon tab chars?

        It's a space enclosed within single quotes. It tells split to split on whitespace, e.g., \t, \n, space.
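
        For instance (made-up string, just to illustrate):

        my @fields = split ' ', "  FIRST\tMIDDLE  LAST\n";
        # @fields is ('FIRST', 'MIDDLE', 'LAST'): runs of any whitespace act
        # as one separator, and leading whitespace is ignored.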

        In trySplitSliceLimit, wouldn't it be better to set LIMIT to 3, or in general to the number of fields you expect to extract?

        It should be set to one more than the number of fields you need in order to reach the last field you want. For example, using your original string:

        "FIRST\tMIDDLE\tLAST\tSTRNO\tSTRNAME\tCITY\tSTATE\tZIP"
            1      2      3      4       5      6    \___ 7 ___/

        ---->  my @capture = ( split /\t/, $line, 7 )[ 2, 0, 5 ];

        You want LAST FIRST CITY and CITY is the sixth field. Setting the LIMIT to seven will return the first six fields and the remainder of the string is the seventh. The slice is then used on those seven to get only the three you want.

        And one observation for the record: the indices can appear in any order. To extract LAST, FIRST, and CITY, you'd write [2, 0, 5]

        You're correct!

        Update: Changed splitting on ' ' to \t. Thanks CountZero.

Re: selecting columns from a tab-separated-values file
by johngg (Canon) on Jan 21, 2013 at 22:58 UTC

    You might be better off tolerating the expense of loading your data into a database of some sort once, perhaps SQLite, and thereafter take advantage of the speed of the database which is optimised for data look-ups, using SQL queries to access your columns. DBI and DBD::SQLite would facilitate this.
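
    For what it's worth, here is a rough, untested sketch of that one-time load plus SQL extraction. Everything named here (people.db, a table people with columns f1..f50, input file in.tsv) is a placeholder, not something from your data:

    #!/usr/bin/perl
    # Untested sketch: load the TSV into SQLite once, then pull columns with SQL.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=people.db', '', '',
        { RaiseError => 1, AutoCommit => 0 } );

    my @cols = map { "f$_" } 1 .. 50;
    $dbh->do( 'CREATE TABLE IF NOT EXISTS people (' . join( ',', @cols ) . ')' );

    my $ins = $dbh->prepare(
        'INSERT INTO people VALUES (' . join( ',', ('?') x 50 ) . ')' );

    open my $in, '<', 'in.tsv' or die $!;
    while ( <$in> ) {
        chomp;
        $ins->execute( ( split /\t/, $_, -1 )[ 0 .. 49 ] );
        $dbh->commit unless $. % 100_000;    # commit in batches for speed
    }
    $dbh->commit;

    # Thereafter, any ad-hoc extract (e.g. fields 3, 1 and 6) is just SQL:
    my $sel = $dbh->prepare('SELECT f3, f1, f6 FROM people');
    $sel->execute;
    while ( my @r = $sel->fetchrow_array ) {
        print join( "\t", map { $_ // '' } @r ), "\n";
    }

    The one-time load won't be quick at 10^9 rows, but after that each extraction is handled by SQLite rather than by a fresh scan-and-split of the whole file.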

    I hope this is helpful.

    Cheers,

    JohnGG

      Interesting idea. Can SQLite handle that capacity? 10^9 rows and about 80GB of data total?
        Give it a go and you'll find out ;) Or you can look here: http://sqlite.org/limits.html
Re: selecting columns from a tab-separated-values file
by hippo (Bishop) on Jan 21, 2013 at 22:59 UTC

    If by "1B" you mean 10^9 and if your fields have mean length 9 chars, then including tabs you have roughly 500GB in one file, correct? I'm not surprised it is very slow. How fast to just cat the file? How much slower is your script?

    Best advice is buy the fastest disk you can afford. And maybe think about preprocessing.

      Yes, 10^9 records. The entire file is more like 80GB, since there are frequent empty fields. I estimated the time for my Perl program to make a pass over the entire file to be around 5 hours. Don't know how much faster cat is, but it's GOT to be faster than that (I'm not at work at the moment).
Re: selecting columns from a tab-separated-values file
by BrowserUk (Patriarch) on Jan 22, 2013 at 09:59 UTC

    What most of the responses so far seem to have missed is that the cost of the in-RAM splitting and joining of records pales into insignificance when compared to the time required for the read-a-record, write-a-record flip-flopping of the read head back and forth across the disk.

    There are two possible ways to speed up your processing:

    1. Overlap the splitting & joining with either the reading or the writing (or both).

      This can be achieved by using two or more threads or processes.

    2. Read and write larger chunks of the file to minimise the number of seeks the read heads need to make.

      Interleaving 80GB/4k = 20 million reads and ~200,000 writes means (at least) 400,000 track-to-track seeks.

      If you increase the read & write chunk sizes to 1MB each, that can be reduced to ~1,500 track-to-track seeks.

    Try these:

    • ibuf.pl:
      #! perl -sw
      use strict;

      our $B //= 64;
      $B *= 1024;

      my $ibuf = '';
      while( sysread( *STDIN, $ibuf, $B, length $ibuf ) ) {
          my $p   = 1 + rindex( $ibuf, "\n" );
          my $rem = substr( $ibuf, $p );
          substr( $ibuf, $p ) = '';

          open my $RAM, '<', \$ibuf;
          print while <$RAM>;

          $ibuf = $rem;
      }
    • obuf.pl:
      #! perl -sw
      use strict;

      our $B //= 64;
      $B *= 1024;

      my $obuf;
      while( <> ) {
          my @f = split chr(9);
          $obuf .= join( chr(9), @f[2,0,5] ) . "\n";
          if( length( $obuf ) > $B ) {
              print $obuf;
              $obuf = '';
          }
      }
      print $obuf;    ## Corrected C&P error. See [lotus1]'s post below.

      Usage: ibuf.pl -B=1024 < in.tsv | obuf.pl -B=1024 > out.tsv.

      Based on my experiments, it might be possible to more than halve your processing time, though YMMV.

      Experiment with adjusting the -B=nnn (in kbs) parameters up & down in unison and independently to find the sweet spot on your system.

      Be aware, bigger is not always better. 1024 for both seems to work quite well on my system, anything larger slows it down.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      I just tried your solution with an input file of 100,000 lines but I only got 65537 lines of output. Here is the command line I used to run the test:

      perl ibuf.pl -B=1024 < testdata.csv | perl obuf.pl -B=1024 > out.csv

        Damn! I missed/dropped the last line of obuf.pl when I C&P'd the code:

        print $obuf;

        Now corrected above. Thank you.


      My Perl program was consuming 100% of one CPU core, leading me to think it was unlikely to be I/O-bound.

        The construction of the arrays from the split lines consumes a fair amount of CPU. As does the allocation of the buffers for reading into and writing from. As does the searching of the 4K buffers to locate the line ends. And the search of the lines to find the tabs.

        But still, the minimum elapsed time is dominated by the time it takes to read 80 GB from disk whilst simultaneously writing ~5 GB back to disk. On my system (2.6 GHz 4-core Q6600 + 7200 RPM SATA2 disk with 16MB ram), that reading and writing process -- with no other processing; searching; memory allocations etc. -- requires a minimum of 56 minutes.

        The difference between that and your 5 hours comes in two parts.

        1. The internal processing of split & join.
        2. The waiting for IO.

        If you can overlap the IO and the internal processing, you can claw back some of that difference.

        Have you tried my two-process solution?



      Read and write larger chunks of the file to minimise the number of seeks the read heads need to make.

      Sometimes there's the luxury of having two hard drive spindles. Use one for reading, the other for writing, and find your I/O-bound application much sped up. (It works even when the other spindle is over a network share.) Or the other kind of luxury of having an SSD: costless seeks.

Re: selecting columns from a tab-separated-values file
by mildside (Friar) on Jan 22, 2013 at 00:00 UTC

    What OS are you using? This answer might seem a bit odd given this is a Perl forum, but sometimes the best solution isn't Perl. (Blasphemy?)

    If you are in a unix environment, and all you need to do is extract the fields as described above, then you can use the unix command-line program 'cut'. This does exactly what you want without needing to write any code.

    cut -f3,1,6 inputfile.txt > outputfile.txt

      Very cool command, thanks for posting it.

      I just tried it and noticed that it prints out the fields 3,1,6 but in the order 1,3,6. It went through a file with 10^6 lines in about 7 seconds and 2*10^6 lines in 10 seconds. So 10^9 lines should take about an hour. My test data only had 9 fields not 50.

      I'm running on Windows XP, by the way, with 8 cores. I installed GNU textutils a long time ago and am always surprised when I find out about these things I have but don't know how to use.

        You can't reorder the output fields with 'cut', but if you have 'sed' you can do this:
        $ cat t
        FIRST   MIDDLE  LAST    STRNO   STRNAME CITY    STATE   ZIP
        $ sed -n 's/\(.*\t\).*\t\(.*\t\).*\t.*\t\(.*\t\).*\t.*/\2\1\3/p' t
        LAST    FIRST   CITY
      Forgot to mention, it should be fast too!
      I'm on CentOS, and (as pointed out), cut won't, er, cut it because I can't reorder the fields. (Also I need to do a little bit of processing.)

        Yes sorry, I'd forgotten about the limitation of cut keeping the original field order (it's fast though!).

        Given you need to do a bit of processing as well I think the post by Kenosis should get you well on the way.

Re: selecting columns from a tab-separated-values file
by Tux (Canon) on Jan 22, 2013 at 07:37 UTC

    Nobody has suggested Text::CSV_XS yet. If your data is straightforward (no quotation, no embedded tabs or other hidden disasters), it will be slower than a plain split on TAB, but it is very versatile when dealing with the data.

    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new ({ binary => 1, sep_char => "\t", auto_diag => 1 });
    my @row = ("") x 50;
    $csv->bind_columns (\(@row));
    while ($csv->getline ($fh)) {
        my @capture = @row[0, 2, 5];
    }

    Enjoy, Have FUN! H.Merijn
      Good point; I have used CSV_XS to great advantage before! (However, nothing complicated about this input.)
Re: selecting columns from a tab-separated-values file
by andal (Hermit) on Jan 22, 2013 at 08:49 UTC

    Maybe it makes sense to work with the file directly. When reading it line by line and then splitting lines, your program scans through the data twice: once to find '\n' and a second time to find '\t'. If you write a program that reads the file in blocks of fixed size and then scans them for both '\t' and '\n', you'll be scanning that amount of data only once, which could give a speed-up.
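
    An untested sketch of that idea, pulling fields 3, 1 and 6 (printed as LAST, FIRST, CITY) with a single regex pass that recognises both tab and newline. The 1MB block size and the field positions are only examples, and well-formed lines with at least seven fields are assumed:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $buf = '';
    while ( sysread( *STDIN, $buf, 1024 * 1024, length $buf ) ) {
        # keep any trailing partial line for the next block
        my $cut = 1 + rindex $buf, "\n";
        my $rem = substr $buf, $cut;
        substr( $buf, $cut ) = '';

        # one left-to-right scan of the block: capture fields 1, 3 and 6,
        # then skip the rest of each line up to its newline
        while ( $buf =~ /\G ([^\t\n]*) \t [^\t\n]* \t ([^\t\n]*) \t
                            [^\t\n]* \t [^\t\n]* \t ([^\t\n]*) \t
                            [^\n]* \n /gcx )
        {
            print "$2\t$1\t$3\n";    # LAST, FIRST, CITY
        }
        $buf = $rem;
    }
    # (a final line without a trailing newline is ignored in this sketch)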

Re: selecting columns from a tab-separated-values file
by topher (Scribe) on Jan 24, 2013 at 20:33 UTC

    Here are a few suggestions based on experience dealing with processing lots of very large log files (not a perfect match, but a lot of which would apply here).

    • First suggestion, which I think I saw you've now tested, is to try splitting the logic into separate programs. With multiple CPUs, if your processing is CPU-bound, splitting the work into multiple piped programs can allow you to take advantage of those CPUs, and will frequently outperform a single-app solution.

    • I also saw mention that it was a large file, 80GB or something like that? If it doesn't get updated frequently, you might want to try compressing the file. Using a fast compression algorithm (gzip, lzo, etc.), you can often read and decompress data faster than you could read the uncompressed data from disk. With multiple CPUs, this can result in a net improvement in performance. This is worth testing, of course, as it will depend heavily on your specific circumstances (disk speed, CPU speed, RAM and disk caching, etc.).

    • Another possible suggestion, depending on many hardware/system factors, would be to split it up, along the lines of Map-Reduce. Split your file (physically, or logically), and process the chunks with separate programs, then combine the results. A naive example might be a program that gets the file size, breaks it into 10GB chunks, then forks into a corresponding number of child processes, each of which does work on its chunk of the file, while returning the results to be aggregated at the end (a rough sketch of this appears after this list).

    • If you don't need to return a result set for every single record in the file every time, then the suggestion to try a real database is an excellent one. I love SQLite, and it can handle quite a bit (although 80GB might be pushing it), but if you only wanted to return results for a smaller matching subset of the data, you're almost certainly going to win big with a database.

    • If you really wanted to squeeze every bit of performance out of this, and optimize it to the extreme, you'd do well to read about the Wide Finder projects/experiments kicked off by Tim Bray:

      It's worth googling for "Wide Finder" and checking out some of the other write-ups discussing it, too.
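
    A rough, untested sketch of that fork-per-chunk idea follows. The file name (in.tsv), the number of children, and the extracted fields (3, 1, 6) are all placeholders; it assumes lines are far shorter than a chunk and writes one part file per child, to be concatenated afterwards:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(ceil);

    my $file   = 'in.tsv';
    my $nprocs = 8;
    my $size   = -s $file;
    my $chunk  = ceil( $size / $nprocs );

    my @pids;
    for my $i ( 0 .. $nprocs - 1 ) {
        my $pid = fork;
        die "fork: $!" unless defined $pid;
        if ( $pid ) { push @pids, $pid; next }

        # --- child: handle the byte range [$start, $end) ---
        my ( $start, $end ) = ( $i * $chunk, ( $i + 1 ) * $chunk );
        open my $in,  '<', $file     or die $!;
        open my $out, '>', "part.$i" or die $!;
        seek $in, $start, 0;
        <$in> if $start;    # a line straddling the boundary belongs to the previous child
        while ( <$in> ) {
            print {$out} join( "\t", ( split /\t/ )[ 2, 0, 5 ] ), "\n";
            last if tell( $in ) > $end;    # the first line crossing our boundary is still ours
        }
        exit 0;
    }
    waitpid $_, 0 for @pids;
    # afterwards:  cat part.* > out.tsv  (or concatenate the parts in Perl)

    Whether this actually wins depends on the disk: several children hammering one spindle may just add seeks, so it is more interesting on an SSD or a striped array.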

    Much as I love Perl, I probably would have done something like this as a first shot for the described processing:

    $ zcat datafile.gz | awk -F'\t' -v OFS='\t' '{print $3,$1,$6}' | gzip -c > output.gz

Re: selecting columns from a tab-separated-values file
by stonecolddevin (Parson) on Jan 23, 2013 at 18:08 UTC

    I've found great success using good ol' Text::CSV_XS and reading in, say, 10k lines at a time.

    UPDATE: I can attest to the speed as I've been dealing with moving several terabytes of image data off of or around on AWS S3, which usually involves dumping an enormous number (40 million) of rows of csv data and having to run through it one way or another.
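
    A minimal sketch of that batched reading, with a made-up file name and batch size and the OP's field positions; each batch of 10k rows is flushed before the next is collected:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new( { binary => 1, sep_char => "\t", auto_diag => 1 } );
    open my $fh, '<', 'in.tsv' or die $!;

    my @batch;
    while ( my $row = $csv->getline( $fh ) ) {
        push @batch, join "\t", @{$row}[ 2, 0, 5 ];
        if ( @batch == 10_000 ) {
            print "$_\n" for @batch;    # or hand the batch to whatever consumes it
            @batch = ();
        }
    }
    print "$_\n" for @batch;            # flush the final partial batch
    $csv->eof or $csv->error_diag;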

    Three thousand years of beautiful tradition, from Moses to Sandy Koufax, you're god damn right I'm living in the fucking past

Re: selecting columns from a tab-separated-values file
by Anonymous Monk on Jan 22, 2013 at 12:53 UTC
    1B == 1 Byte???
      Yes. Yes, one byte. A very LARGE byte, however.
Re: selecting columns from a tab-separated-values file
by sundialsvc4 (Abbot) on Jan 22, 2013 at 22:05 UTC

    I would footnote the previous discussion with BrowserUK by making one other important point ...

    While it’s easy to assume that a file-I/O request by a userland program corresponds 1-to-1 with a disk I/O operation, in fact it does not. The request might be satisfied directly through a pre-read buffer. If I/O does occur, it might involve the reading of multiple directory or file-allocation structures in order to locate the point where the data may be found. Further, many operating systems provide various means by which it is possible to “cue” the operating system that this particular program will be making sequential reads ... or, the OS might discover such things for itself.

    All of these things are centered around the notion that a computer program ought to be able to just be the simplest expression of what it has been built to do. If it is “reading from one stream and writing to another,” then ... so be it. Why, indeed, should a program have to “put more thought into” such a simple requirement?

    In short ... maybe my entire previously-argued point is specious, if not wrong. Maybe the operating system does know best. Maybe it’s best that it does.

      I probably should have mentioned this before: my Perl program was consuming 100% of one CPU core, leading me to think it was unlikely to be I/O-bound.