Re: Re: character-by-character in a huge file

by BrowserUk (Patriarch)
on Apr 13, 2004 at 12:14 UTC


in reply to Re: character-by-character in a huge file
in thread character-by-character in a huge file

"a) see what I'm doing wrong,"

The first thing you are doing wrong is that you are comparing apples and oranges. Take your 2nd benchmark.

cmpthese( 10, {
    slurp_substr => sub {
        open (FH, "<$filename");
        my $i = 0;
        while ( <FH> ) {
            while ( $ch = substr( $_, $i++, 1 ) ) { }
        }
        close FH;
    },
    slurp_simpleregex => sub {
        my $len = 0;
        open (FH, "<$filename");
        while ( <FH> ) {
            $_ =~ /(.)$/;
        }
        close FH;
    },
    slurp_length => sub {
        my $len = 0;
        open (FH, "<$filename");
        while ( <FH> ) {
            $len += length($_);
        }
        close FH;
    },
});
  1. slurp_substr()

    This reads the whole file, record-by-record, and then appears to set the (global) variable $ch to each character in each record.

    But you're setting the variable $i outside the main loop, incrementing it for each character in the record, and never resetting it.

    Hence, for the second and subsequent records, $i starts with whatever value it had at the end of the previous record. If the first record is at least as long as the rest, the inner loop will do nothing for any record other than the first. (A minimal fix is sketched just after this list.)

    Both the slurp_substr() and raw_slurp_substr() routines in the 1st benchmark are similarly afflicted.

  2. slurp_simpleregex()

    Your regex captures only the last character of each record into $1; you're simply ignoring every other character in each record.

  3. slurp_length()

    This is the most mysterious of all. You read each record in the file and accumulate the lengths of those records into the variable $len.

    You never access any of the characters?
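
A minimal corrected version of that inner loop might look like the following sketch (a sketch only, not your full benchmark; looping over the record length also avoids the subtler problem that while ($ch = substr(...)) would stop early on a literal '0' character):

    use strict;
    use warnings;

    my $filename = shift;                         # the file under test

    open my $fh, '<', $filename or die "open: $!";
    while ( my $line = <$fh> ) {
        for my $i ( 0 .. length($line) - 1 ) {    # $i restarts at 0 for every record
            my $ch = substr( $line, $i, 1 );
            # ... process $ch here ...
        }
    }
    close $fh;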

The first rule of benchmarking is that you need to ensure that you are comparing like with like.

The second rule is that you need to make sure that what you are benchmarking is useful and relevant to your final program.

In these tests, you are doing neither.
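
To give a feel for what comparing like with like could look like, here is a sketch (untested against your data; the sub names are mine) in which every sub visits every character of every record, so the timings all measure the same work:

    use strict;
    use warnings;
    use Benchmark 'cmpthese';

    my $filename = shift or die "usage: $0 file\n";

    cmpthese( 10, {
        substr_walk => sub {
            open my $fh, '<', $filename or die $!;
            while ( my $line = <$fh> ) {
                for my $i ( 0 .. length($line) - 1 ) {
                    my $ch = substr( $line, $i, 1 );
                }
            }
        },
        regex_walk => sub {
            open my $fh, '<', $filename or die $!;
            while ( my $line = <$fh> ) {
                while ( $line =~ /(.)/gs ) {      # /s so the newline is visited too
                    my $ch = $1;
                }
            }
        },
        getc_walk => sub {
            open my $fh, '<', $filename or die $!;
            while ( defined( my $ch = getc $fh ) ) {
            }
        },
    } );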


"That's much worse than what I've seen (but haven't tested here) in C."

If your point is that Perl is slower than C, you're right.


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail

Re: Re: Re: character-by-character in a huge file
by mushnik (Acolyte) on Apr 13, 2004 at 15:49 UTC
    The "failing to reset $i" bug is duely noted. The impact of moving "my $i=0" inside the "while (<FH>)" line is simply that each of those cases slows down, leaving me with the results:

                           s/iter raw_slurp_substr nonraw_sysread_onechar raw_sysread_onechar getc slurp_substr nonraw_sysread_buffer slurp_regex raw_sysread_buffer
    raw_slurp_substr         3.68               --                   -16%                -17% -34%         -53%                  -57%        -60%               -62%
    nonraw_sysread_onechar   3.08              19%                     --                 -1% -22%         -45%                  -49%        -53%               -54%
    raw_sysread_onechar      3.06              20%                     1%                  -- -21%         -44%                  -48%        -52%               -54%
    getc                     2.42              52%                    27%                 26%   --         -29%                  -35%        -40%               -42%
    slurp_substr             1.71             115%                    80%                 79%  41%           --                   -8%        -15%               -18%
    nonraw_sysread_buffer    1.58             133%                    95%                 93%  53%           8%                    --         -8%               -11%
    slurp_regex              1.46             152%                   111%                110%  66%          17%                    8%          --                -4%
    raw_sysread_buffer       1.41             161%                   119%                117%  72%          22%                   12%          4%                 --

                          Rate raw_sysread_buffer slurp_length slurp_simpleregex
    raw_sysread_buffer 0.706/s                 --         -98%              -98%
    slurp_length        31.2/s              4328%           --               -9%
    slurp_simpleregex   34.5/s              4786%          10%                --

    But I'm not sure how this is apples and oranges. There are two benchmark tests here:

    In the first, after the bug fix, the raw/sysread buffer approach works best, but it is only about 10% better than just slurping in the contents with <FH> (and the raw sysread_onechar approach is actually worse than getc). In general, the final result shows improvement, but it's not as fantastic as I'd hoped. Perhaps this is a function of the OS in use: the dramatic differences between your results and mine suggest that you may be using Windows (I'm on Linux), and that getc may be really terrible on Windows. Is that right? I'd be interested in seeing the results you get when you run the same benchmark (after fixing the $i bug you mention).

    In the second benchmark, my point is a bit more interesting (to me) than simply that Perl is slower than C. I've reposted it, comparing my two "fast" approaches to raw_sysread_buffer. The point of &slurp_length and &slurp_simpleregex is to show that what makes &raw_sysread_buffer (and the others) so remarkably slow is not the actual act of reading from disk, but the act of accessing the values one at a time. For example, the regex test simply aims to show that I must have read the entire block from disk (I got the last character). In these tests I'm not saying that Perl is slower than C; I'm saying that Perl (as I'm using it) is unbearably slower than I expected.
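
    For reference, the &raw_sysread_buffer approach under discussion has roughly the following shape (a sketch only; the 64KB buffer size and variable names are illustrative, not the exact code from the benchmark):

        use strict;
        use warnings;

        my $filename = shift or die "usage: $0 file\n";

        open my $fh, '<:raw', $filename or die "open: $!";
        my $buffer;
        while ( my $read = sysread( $fh, $buffer, 64 * 1024 ) ) {
            # Reading each block is the cheap part ...
            for my $i ( 0 .. $read - 1 ) {
                # ... it is this per-character access that dominates the runtime.
                my $ch = substr( $buffer, $i, 1 );
            }
        }
        close $fh;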

    This amazing slowness in this one application is surprising to me, because I've generally found Perl to be pretty darned fast. This has been especially true in dealing with text (e.g. regexes).


    A couple of small notes:

    I have no interest in writing this in C. Perl is my preferred language, and it was my intention to show the C-lovers I work with that Perl is a perfectly good tool for this sort of task. I'm having a (much) harder time proving that than I'd hoped I would. Perhaps I'm wrong :(

    I also have no intention of flaming you with my response. It's clear to me that you've taken a good deal of time to think about my problem, and I'm most appreciative of that time. The intention of my response is simply to show that the gains I'm seeing don't match your expectations, and to see if you can suggest another approach.

      Three questions:

      1. Your files are in FASTA format?
      2. Is there a maximum size for individual records?
      3. When processing the records byte by byte, how do you intend treating the inter-record newlines?

      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
        "Your files are in FASTA format?"
        Yes.

        "Is there a maximum size for individual records?"
        A few MB. I can't be more specific than that, because I haven't seen all the files in the world... but generally, the largest scaffolds I've seen are a couple of MB.

        "When processing the records byte by byte, how do you intend treating the inter-record newlines?"
        Reasonable question. I just skip them. This is handled in a function call to "get_next_char", which really gets the next char I care about. Also, when it sees a ">", it skips until the newline.
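
        A sketch of what such a get_next_char might look like (not the actual code; the buffering scheme, buffer size, and names are illustrative):

            use strict;
            use warnings;

            {
                my $buffer = '';
                my $pos    = 0;

                # Return the next sequence character from $fh, skipping newlines
                # and FASTA header lines ('>' through end-of-line). Returns undef
                # at end of file. Assumes '>' only appears at the start of a line,
                # as it does in FASTA.
                sub get_next_char {
                    my ($fh) = @_;
                    while (1) {
                        if ( $pos >= length $buffer ) {
                            $pos = 0;
                            sysread( $fh, $buffer, 64 * 1024 ) or return undef;
                        }
                        my $ch = substr( $buffer, $pos++, 1 );
                        next if $ch eq "\n";              # skip inter-record newlines
                        if ( $ch eq '>' ) {               # skip the header line
                            while ( $ch ne "\n" ) {
                                if ( $pos >= length $buffer ) {
                                    $pos = 0;
                                    sysread( $fh, $buffer, 64 * 1024 ) or return undef;
                                }
                                $ch = substr( $buffer, $pos++, 1 );
                            }
                            next;
                        }
                        return $ch;
                    }
                }
            }

        It would be called as my $ch = get_next_char($fh) on a handle opened with '<:raw'.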
