in reply to Re: multicolumn extraction
in thread multicolumn extraction

Nicely done, Dave. However, since we don't know the number of columns or the file size, do you think it would be better to limit the split to only the needed columns, as in the following?

my @columns = (split /\t/)[1 .. 2];

Depending on these factors and the machine, the script might otherwise choke.

Just a thought...


Now splitting on /\t/ based upon sauoq's good catch in his comment below.

Re^3: multicolumn extraction
by sauoq (Abbot) on Jun 03, 2012 at 17:05 UTC
    Depending on these factors and the machine, the script might otherwise choke.

    That's highly unlikely as the file is being handled line by line. And if there were a truly humongous line, your modification actually wouldn't be much better.

    And you've introduced a potential bug by splitting a tab delimited file on whitespace instead of on tabs.
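    To illustrate the difference (a hypothetical record, not from the original thread): splitting on whitespace breaks apart any field that itself contains a space, while splitting on the actual tab delimiter keeps fields intact.

```perl
use strict;
use warnings;

# Hypothetical tab-delimited record whose first field contains a space.
my $line = "John Smith\t42\tengineer";

# split ' ' splits on runs of any whitespace, including the space
# inside the first field, so the record gains a spurious field:
my @by_whitespace = split ' ', $line;   # ("John", "Smith", "42", "engineer")

# Splitting on the known delimiter preserves the fields:
my @by_tab = split /\t/, $line;         # ("John Smith", "42", "engineer")
```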

    "My two cents aren't worth a dime.";

      Your reply makes sense, sauoq. I see that I assumed too much in taking the OP's fields to contain no spaces; it would be best to split on the known field delimiter. Indeed, it would be disastrous if the first field contained spaces. Good catch, and thank you for bringing this to my attention.


      I was curious to see whether there was any speed difference between splitting all or splitting only some columns, so I ran the following, which creates and splits a 20-column x 10000-row file:

      use Modern::Perl;
      use Benchmark qw(cmpthese);

      my $entry       = "aaaaaaaaaaaaaaaaa";
      my $columnsFile = 'columns.txt';

      open my $file, ">$columnsFile" or die $!;
      do { print $file "$entry\t" x 19; say $file $entry } for 1 .. 10000;
      close $file;

      sub splitAll {
          open my $file, "<$columnsFile" or die $!;
          while (<$file>) {
              my @columns = split /\t/;
          }
          close $file;
      }

      sub splitSome {
          open my $file, "<$columnsFile" or die $!;
          while (<$file>) {
              my @columns = ( split /\t/ )[ 1 .. 2 ];
          }
          close $file;
      }

      cmpthese( -5, { splitAll => sub { splitAll() }, splitSome => sub { splitSome() } } );


                   Rate  splitAll splitSome
      splitAll   19.8/s        --      -21%
      splitSome  25.1/s       27%        --

      In this case, splitting only some columns shows a significant speed advantage, even with this relatively small file. I ran the script many times, with splitSome coming out as much as 31% faster and as little as 21% faster, but always significantly faster than splitAll.

        "Significantly faster" here is on the order of a few tens of seconds if run several times a day over a long period, or about half an hour if run just once. For almost all practical use cases, the trivial difference you demonstrate is just that: trivial. A little extra juice, maybe a useful amount, can be squeezed out by stopping the split early with a limit argument rather than just slicing the result, avoiding the copying of a few extra list elements:

        use strict;
        use warnings;
        use Benchmark qw(cmpthese);

        my $kFName = 'delme.txt';

        test();

        sub test {
            my $entry = 'a' x 18;

            open my $fOut, '>', $kFName or die "Can't create $kFName: $!\n";
            print $fOut "$entry\t" x 19, "\n" for 1 .. 10000;
            close $fOut;

            cmpthese(
                -5,
                {   splitAll   => sub {splitAll()},
                    splitLimit => sub {splitLimit()},
                    splitSlice => sub {splitSlice()},
                }
            );
        }

        sub splitAll {
            open my $fIn, '<', $kFName or die "Can't open $kFName: $!\n";
            while (<$fIn>) {
                my @columns = split /\t/;
            }
            close $fIn;
        }

        sub splitSlice {
            open my $fIn, '<', $kFName or die "Can't open $kFName: $!\n";
            while (<$fIn>) {
                my @columns = (split /\t/)[1 .. 2];
            }
            close $fIn;
        }

        sub splitLimit {
            open my $fIn, '<', $kFName or die "Can't open $kFName: $!\n";
            while (<$fIn>) {
                my @columns = (split /\t/, $_, 4)[1 .. 2];
            }
            close $fIn;
        }


                     Rate   splitAll splitSlice splitLimit
        splitAll   5.60/s         --       -36%       -73%
        splitSlice 8.75/s        56%         --       -59%
        splitLimit 21.1/s       276%       141%         --

        However, even the worst performing variant is still so fast that it is simply not worth worrying about, even if you were running it several thousand times a day every day of the year. And none of these solutions is actually useful for parsing CSV. To do that in a reasonably robust way you should really use something like Text::CSV, which is about ten times slower than any of the benchmarked solutions, but has the huge advantage that it may actually give correct results for anything other than the trivial test data used by this test.
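        As a minimal sketch of that more robust route (assuming Text::CSV is installed, and reusing the 'columns.txt' file name from the benchmarks above), the same two-column slice might look like this:

```perl
use strict;
use warnings;
use Text::CSV;

# Configure for tab-separated data; binary => 1 allows embedded
# non-ASCII and special characters in fields.
my $csv = Text::CSV->new({ sep_char => "\t", binary => 1 })
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag;

open my $fIn, '<', 'columns.txt' or die "Can't open columns.txt: $!\n";
while ( my $row = $csv->getline($fIn) ) {
    my @columns = @{$row}[ 1 .. 2 ];    # same slice as the split versions
}
close $fIn;
```

        Unlike a bare split, this handles quoted fields and embedded delimiters, which is where the hand-rolled versions silently give wrong answers.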

        True laziness is hard work