Re: selecting columns from a tab-separated-values file

by BrowserUk (Pope)
on Jan 22, 2013 at 09:59 UTC


in reply to selecting columns from a tab-separated-values file

What most of the responses so far seem to have missed is that the cost of the in-RAM splitting and joining of records pales into insignificance when compared to the time required for the read-a-record, write-a-record flip-flopping of the read head back and forth across the disk.

There are two possible ways to speed up your processing:

  1. Overlap the splitting & joining with either the reading or the writing (or both).

    This can be achieved by using two or more threads or processes. (A minimal threads-based sketch appears after the usage notes below.)

  2. Read and write larger chunks of the file to minimise the number of seeks the read heads need to make.

    Interleaving 80 GB / 4 KB = 20 million reads and ~200,000 writes means (at least) 400,000 track-to-track seeks.

    If you increase the read & write chunk sizes to 1 MB each, that can be reduced to ~1,500 track-to-track seeks.

Try these:

  • ibuf.pl:
    #! perl -sw
    use strict;

    our $B //= 64;
    $B *= 1024;

    my $ibuf = '';
    while( sysread( *STDIN, $ibuf, $B, length $ibuf ) ) {
        my $p   = 1 + rindex( $ibuf, "\n" );
        my $rem = substr( $ibuf, $p );
        substr( $ibuf, $p ) = '';
        open my $RAM, '<', \$ibuf;
        print while <$RAM>;
        $ibuf = $rem;
    }
  • obuf.pl:
    #! perl -sw
    use strict;

    our $B //= 64;
    $B *= 1024;

    my $obuf;
    while( <> ) {
        my @f = split chr(9);
        $obuf .= join( chr(9), @f[2,0,5] ) . "\n";
        if( length( $obuf ) > $B ) {
            print $obuf;
            $obuf = '';
        }
    }
    print $obuf;    ## Corrected C&P error. See [lotus1]'s post below.

    Usage: ibuf.pl -B=1024 < in.tsv | obuf.pl -B=1024 > out.tsv

    Based on my experiments, it might be possible to more than halve your processing time, though YMMV.

    Experiment with adjusting the -B=nnn (in KB) parameters up & down, in unison and independently, to find the sweet spot on your system.

    Be aware that bigger is not always better: 1024 for both seems to work quite well on my system; anything larger slows it down.
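A minimal sketch of the threads-based variant mentioned in point 1 (not the benchmarked two-script version above; the 1 MB chunk size and the @f[2,0,5] column order are assumptions carried over from obuf.pl): one thread reads large chunks and queues them, while a worker thread does the split & join and writes.

    #! perl -sw
    # Hedged sketch only: overlap reading with the split/join work using two
    # threads and a Thread::Queue. Chunk size and column order are assumptions.
    use strict;
    use threads;
    use Thread::Queue;

    my $Q = Thread::Queue->new();

    # Worker thread: split/join each chunk of complete lines, write to STDOUT.
    my $worker = threads->create( sub {
        while( defined( my $chunk = $Q->dequeue() ) ) {
            my $obuf = '';
            for my $line ( split "\n", $chunk ) {
                my @f = split chr(9), $line;
                $obuf .= join( chr(9), @f[2,0,5] ) . "\n";
            }
            print $obuf;
        }
    } );

    # Main thread: read 1 MB chunks, trim to the last complete line, enqueue.
    my $ibuf = '';
    while( sysread( *STDIN, $ibuf, 1024 * 1024, length $ibuf ) ) {
        my $p = 1 + rindex( $ibuf, "\n" );
        $Q->enqueue( substr( $ibuf, 0, $p ) );
        substr( $ibuf, 0, $p ) = '';
    }
    $Q->enqueue( undef );    # end-of-input marker
    $worker->join();

The same trimming-to-the-last-newline trick as in ibuf.pl keeps each queued chunk line-aligned, so the worker never sees a partial record.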


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.


Re^2: selecting columns from a tab-separated-values file
by sundialsvc4 (Monsignor) on Jan 22, 2013 at 16:04 UTC

    Take hammer.   See nail.   Hit nail on the head.

    Physical latencies – seek-time, basically – are really the only thing that matters here. Although the operating-system’s buffer pools will mitigate the seeking somewhat, you still want to hold that to a minimum ... taking care that I/O doesn’t sneak up on you from the backside in the form of virtual-memory paging.

    I’m personally not sure that threads would help here ... although I do yield to your greater expertise on this matter ... but I think that simply buffering a few hundred thousand extracted records, e.g. in an array of hashrefs, just might hit pay dirt. Instead of writing each record out immediately, push it onto an array (of hashrefs). When the array reaches some agreed-upon and easily-tweakable threshold, pause to shift them all off and write them to the output file, then continue reading. (The read/write head moves from over here to over there, stays there a while to write the stuff out, then moves back.) Set the threshold to some point where you can be fairly sure that there will be enough physical RAM available to hold everything without paging. I suspect that you will be astonished at what just this does for the program.

      taking care that I/O doesn’t sneak up on you from the backside in the form of virtual-memory paging.

      You and your hobby horses. Virtual memory limits don't enter into it.

      When running, those two programs I posted use 2.7 MB and 3.2 MB respectively with the 256-times-larger-than-standard read size that I suggest. Even if I increase that 10-fold to 10 MB each -- which slows the processing down -- they use a whole 12 MB & 16 MB. The processor that runs my heating system could handle that.

      I’m personally not sure that threads would help here

      I'm not sure that standing on one leg whilst drinking blood from a freshly decapitated chicken would help; so I don't mention it.

      And your instinctive, prejudiced guessing is wrong!

      Approximately 2/3rds of the throughput gains from my posted solution come exactly because the CPU-intensive part of the processing -- the splitting and joining of the records -- can run flat out (100% utilisation) on one CPU, whilst the first thread doing the input is instantly ready to respond to the completion of each read, because it isn't having to perform any CPU-intensive processing on each record.

      push it onto an array (of hashrefs).

      The input is a stream; the output is a stream. What on earth do you need an array of hashrefs for?

      I suspect that you will be astonished at what just-this does for the program.

      No. I can pretty much call it without trying it. It will run 3 to 5 times slower than a simple:

      perl -F"\t" -anle"print join chr(9), @F[2,0,5]" in.tsv > out.tsv

      That is, instead of taking the OP's 5+ hours to run, it will take closer to 24 hours.

      Why? You'll never know unless you try it. And you won't do that.

      (And even if you get someone else to do it for you, you won't post the results, because it will show your 'advice' to be nothing more than groundless guessing.)



        Bah, humbug ...   ;-)

        There’s one million-pound elephant in this room, if there is one at all, and that elephant certainly won’t consist of the mere microseconds that a CPU will require to split up a chunk of data that came in from a disk drive. We’re not reading graphic images here and resizing them ... the task to be performed against each unit of data is quite inconsequential. The advantage comes from being able to gulp large amounts of data at a time from a disk device that (probably ...) does not have to spend too much time doing it. (Although, depending on the peculiarities of the physical distribution of the file in question across the disk real estate, that might not turn out to be so.)

        “What is the array of hashrefs for?” Why, you yourself said it! Even though “the input is a stream, and the output is a stream,” there is probably only one device. A physical operation consists of seek + read or seek + write, and we want to reduce the seeks to the point that almost none are required. We want: seek($$) + read + read + read + ... + read + seek($$) + write + write + ... + write + seek($$).

        The specific reason why I questioned the validity of threads in this scenario, was that I judged the amount of CPU-intensive processing required to be pragmatically inconsequential relative to the amount of time that would be spent in an I/O-wait.   Also, it was because I judged that the physical latency cost of the disk device would be the “above all, ruling constraint” no matter what.   Pragmatically, we can afford to spend a few microseconds crunching numbers because the disk platter won’t have rotated too far during that time anyway.

        But there is only so far that the point can be argued before it becomes merely an argument. If the per-record CPU load is indeed slight to the point of being inconsequential, then memory is a big fat available buffer that costs nothing ... if, indeed, paging is not occurring, so that it really does cost “nothing.” But if a page fault that must be resolved to disk does take place, then you just moved that read/write head after all. Overlapping I/O with CPU processing might prove to be beneficial on the OP’s system, not yours ... or not. If it were me, I would start by using the RAM as a buffer, see how much bang that buck got me, then pragmatically move forward from there.

        It doesn’t advance the technical validity of your arguments to belittle the opinions of others, you know ... no matter how personally self-assured you might be.

      It turns out that BrowserUK's approach was an order of magnitude quicker than the other approaches presented here. I took the liberty of coding up Sundialsvc4's suggestion of buffering a few hundred thousand lines worth of data before printing them out to a file. I used array refs since there was no reason to use hash refs.

      I used a PowerShell script to time the different approaches.

      echo 'BrowserUK 1024' > bench.txt
      Measure-Command { .\buk.bat } >> bench.txt
      echo 'BrowserUK 2048' >> bench.txt
      Measure-Command { .\buk2048.bat } >> bench.txt
      echo 'Sundial approach by Lotus1' >> bench.txt
      Measure-Command { perl sundial.pl test.csv > dialout.csv } >> bench.txt
      echo spacebar >> bench.txt
      Measure-Command { .\sed -n "s/^\([^\t]*\t\)[^\t]*\t\([^\t]*\t\)[^\t]*\t[^\t]*\t\([^\t]*\)\t.*/\2\1\3/p" test.csv > spaceout.csv } >> bench.txt
      echo 'kenosis/choroba' >> bench.txt
      Measure-Command { perl kc.pl test.csv > kcout.csv } >> bench.txt
      echo 'mildside -- wrong order but testing anyway' >> bench.txt
      Measure-Command { .\cut '-f1,3,6' test.csv > cutout.csv } >> bench.txt

      Here are the results using a 1 GB test file on an idle server with 16 cores and 8 GB RAM. Update: (Windows Server 2008 R2)

      BrowserUK 1024
          Minutes : 1    Seconds : 54   Milliseconds : 103   Ticks : 1141033211
      BrowserUK 2048
          Minutes : 1    Seconds : 55   Milliseconds : 124   Ticks : 1151241665
      Sundial approach by Lotus1
          Minutes : 21   Seconds : 53   Milliseconds : 28    Ticks : 13130283183
      spacebar
          Minutes : 21   Seconds : 24   Milliseconds : 215   Ticks : 12842154788
      kenosis/choroba
          Minutes : 22   Seconds : 4    Milliseconds : 865   Ticks : 13248658887
      mildside -- wrong order but testing anyway
          Minutes : 22   Seconds : 19   Milliseconds : 295   Ticks : 13392954883

      Here is the sundialsvc4 approach that I put together for the test:

      #! perl -sw
      use strict;

      my $count = 0;
      my @lines;

      while( <> ) {
          $count++;
          my @f = ( split /\t/, $_, 7 )[ 0, 2, 5 ];
          push @lines, \@f;
          if( $count >= 300000 ) {    # size of buffer
              $count = 0;
              print_buffer( \@lines );
          }
      }
      print_buffer( \@lines );

      sub print_buffer {
          my ($aref) = @_;
          foreach (@$aref) {
              print join( "\t", @$_ ) . "\n";
          }
          splice( @$aref );
      }

      Here is the Kenosis/choroba approach:

      #! perl -sw
      use strict;

      # kenosis/choroba approach
      while( <> ) {
          print join( "\t", ( split /\t/, $_, 7 )[ 0, 2, 5 ] ), "\n";
      }

        Great job there Lotus1.

        I'm curious about the use of splice to clear the array in your code, as below:

            splice( @$aref );

        I would probably have used the below:

            @$aref = ();

        Is splice faster or better in some other way?
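        One way to check empirically is with the core Benchmark module. A minimal sketch, where the 100_000-element array size is just an arbitrary assumption:

            #! perl -w
            use strict;
            use Benchmark qw(cmpthese);

            cmpthese( -3, {
                splice => sub {
                    my @a = ( 1 .. 100_000 );
                    splice( @a );        # empty the array via splice
                },
                assign => sub {
                    my @a = ( 1 .. 100_000 );
                    @a = ();             # empty the array via list assignment
                },
            } );

        Either way, clearing a buffer of a few hundred thousand elements is unlikely to matter next to the I/O in this program.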

Re^2: selecting columns from a tab-separated-values file
by Lotus1 (Chaplain) on Jan 22, 2013 at 17:03 UTC

    I just tried your solution with an input file of 100,000 lines but I only got 65537 lines of output. Here is the command line I used to run the test:

    perl ibuf.pl -B=1024 < testdata.csv | perl obuf.pl -B=1024 > out.csv

      Damn! I missed/dropped the last line of obuf.pl when I C&P'd the code:

      print $obuf;

      Now corrected above. Thank you.


Re^2: selecting columns from a tab-separated-values file
by ibm1620 (Beadle) on Jan 22, 2013 at 23:10 UTC
    My Perl program was consuming 100% of one CPU core, leading me to think it was unlikely to be I/O-bound.

      The construction of the arrays from the split lines consumes a fair amount of CPU. As does the allocation of the buffers for reading into and writing from. As does the searching of the 4K buffers to locate the line ends. And the search of the lines to find the tabs.

      But still, the minimum elapsed time is dominated by the time it takes to read 80 GB from disk whilst simultaneously writing ~5 GB back to disk. On my system -- a 2.6 GHz 4-core Q6600 + 7200 RPM SATA2 disk with 16 MB of cache RAM -- that reading and writing process, with no other processing, searching, memory allocations etc., requires a minimum of 56 minutes.
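      That floor can be estimated for any given system with a minimal sketch like the one below (not one of the posted programs; the file name and the 1 MB chunk size are placeholders): stream the input in large chunks and do nothing else with it.

      #! perl -sw
      # Minimal sketch: estimate the read-side I/O floor by streaming the file
      # in 1 MB chunks with no per-record work. The file name is a placeholder.
      use strict;
      use Time::HiRes qw(time);

      open my $IN, '<:raw', 'in.tsv' or die "in.tsv: $!";
      my( $buf, $bytes ) = ( '', 0 );
      my $start = time;
      $bytes += length $buf while sysread( $IN, $buf, 1024 * 1024 );
      printf "read %d bytes in %.1f seconds\n", $bytes, time - $start;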

      The difference between that and your 5 hours comes in two parts.

      1. The internal processing of split & join.
      2. The waiting for IO.

      If you can overlap the IO and the internal processing, you can claw back some of that difference.

      Have you tried my two-process solution?



        I've not yet tried the two-process solution, but I intend to. One concern I have is that, at some point fairly early on, it seems like the "pipeline" would get full, at which point the two processes would have to operate in lock-step.

        Another thought I've had is that, since the process appears to me to be CPU-bound, it might be worth forking several children and distributing the work across them. Each child would have to write to a separate output file, which admittedly would increase the possibility of disk head thrashing, but I think it's worth a try.
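        A minimal sketch of that idea, under assumptions: four workers, placeholder file names, the @f[2,0,5] column order from earlier posts, and a round-robin split in which each child re-reads the whole input and keeps only its share (so the read I/O is multiplied by the number of workers):

        #! perl -sw
        # Sketch only: fork several children, each writing its own output file.
        # Each child scans the whole input and keeps every Nth line (round-robin),
        # trading extra reading for parallel split/join work.
        use strict;

        my $WORKERS = 4;                              # assumed worker count
        my @pids;

        for my $w ( 0 .. $WORKERS - 1 ) {
            my $pid = fork();
            die "fork: $!" unless defined $pid;
            if( $pid == 0 ) {                         # child process
                open my $IN,  '<', 'in.tsv'     or die $!;
                open my $OUT, '>', "out.$w.tsv" or die $!;
                while( <$IN> ) {
                    next unless ( $. - 1 ) % $WORKERS == $w;
                    print {$OUT} join( "\t", ( split /\t/, $_, 7 )[ 2, 0, 5 ] ), "\n";
                }
                exit 0;
            }
            push @pids, $pid;
        }
        waitpid( $_, 0 ) for @pids;                   # wait for all children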

        I'm back at work and have tested the two-process solution. It took 60 seconds to process 10M (M = million) records. Then I pulled the logic for splitting and joining the records out of obuf and into ibuf (thus eliminating obuf) and ran the same test; it ran in 62 seconds. (In both cases the output was to /dev/null.)

        I reran the tests sending output to an actual file in the same directory as the input, and obtained exactly the same runtimes.

        In ALL cases I observed the CPU of the process that was doing the split/join to peg at 100%.

        So I have to conclude that disk I/O is negligible for this program, in my environment.
Re^2: selecting columns from a tab-separated-values file
by Anonymous Monk on Jan 23, 2013 at 09:33 UTC

    Read and write larger chunks of the file to minimise the number of seeks the read heads need to make.

    Sometimes there's the luxury of having two hard drive spindles. Use one for reading, the other for writing, and find your I/O-bound application much sped up. (It works even when the other spindle is over a network share.) Or the other kind of luxury of having an SSD: costless seeks.
