http://www.perlmonks.org?node_id=559615


in reply to Re^2: Matching in huge files
in thread Matching in huge files

Yep, it is very fast, but why is that better than this:
open(F, "<", $file) or die "$file: $!";
binmode(F);
undef $/;           # switch off end-of-line separating
# read file in large chunks
while (<F>) {
    while ( m/$re/oigsm ) {
        print "$1\n";
    }
}
$/ = '\n';          # switch back to line mode
close(F);

?

Thanks,
Tamas

Re^4: Matching in huge files
by dws (Chancellor) on Jul 07, 2006 at 00:16 UTC

    but why is that better than this: ...

    My fragment doesn't assume that the huge file will fit in memory, and it matches across read boundaries. Your approach sets up for a single-read slurp.
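    For illustration, here is one way such a fragment could look — this is my own sketch of the idea, not dws's actual code; the names (`scan_chunked`), the chunk size, and the overlap size are all assumptions. It reads fixed-size chunks and keeps a tail of the buffer so a match straddling a read boundary is still found, assuming no single match is longer than `$overlap` bytes:

```perl
use strict;
use warnings;

# Sketch: read in fixed-size chunks, defer any match that starts inside
# the last $overlap bytes (it might continue into the next chunk), and
# carry that tail forward. Assumes no match is longer than $overlap.
sub scan_chunked {
    my ($fh, $re, $chunk_size, $overlap) = @_;
    my (@hits, $buf);
    $buf = '';
    while (read($fh, my $chunk, $chunk_size)) {
        $buf .= $chunk;
        my $cutoff = length($buf) - $overlap;
        $cutoff = 0 if $cutoff < 0;
        my $done = 0;
        while ($buf =~ /$re/g) {
            last if $-[0] >= $cutoff;   # may straddle the boundary; retry later
            push @hits, $1;
            $done = $+[0];              # end of the last accepted match
        }
        $done = $cutoff if $cutoff > $done;
        $buf = substr($buf, $done);     # drop what can no longer match
    }
    push @hits, $1 while $buf =~ /$re/g;  # whatever is left after EOF
    return @hits;
}

# Demo on an in-memory "file" so the snippet runs as-is, with a tiny
# chunk size to force matches across read boundaries.
open my $fh, '<', \"xx<num:12345>yy<num:678>zz" or die $!;
my @hits = scan_chunked($fh, qr/<num:(\d+)>/, 8, 16);
print "@hits\n";   # prints "12345 678"
```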

Re^4: Matching in huge files
by JadeNB (Chaplain) on Sep 05, 2008 at 18:25 UTC
    In addition to the answer that dws has already given, the original approach is better because it doesn't assume that the IRS was previously "\n", and it certainly doesn't put the IRS back as a literal \n (a backslash followed by an n, not a newline character, because of the single quotes). The usual idiom for changing $/ is to wrap the change in a block and localise $/ there with local, so the previous value is restored automatically when the block exits.
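    A minimal sketch of that idiom (the in-memory filehandle is only there to keep the snippet self-contained):

```perl
use strict;
use warnings;

my $data = "line one\nline two\n";
my $contents = do {
    open my $fh, '<', \$data or die $!;
    local $/;                  # $/ is undef only inside this block
    <$fh>;                     # slurps the whole "file" in one read
};
# Here $/ has its previous value again -- no manual restore, and the
# restore happens even if the block dies.
print $contents;
```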

    Also, maybe it's just my unfamiliarity with binmode, but I think that undef-ing the IRS means that the while loop only ever runs once.
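    (It has nothing to do with binmode — it follows from $/ alone, as a small self-contained demonstration shows:)

```perl
use strict;
use warnings;

# With the input record separator undef, the first <$fh> returns the
# entire file and the second returns undef, so the "read in chunks"
# while loop iterates exactly once.
my $data = "a\nb\nc\n";
open my $fh, '<', \$data or die $!;
my $reads = 0;
{
    local $/;                  # same effect as undef $/, but scoped
    $reads++ while <$fh>;
}
print "$reads\n";   # prints "1"
```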