PerlMonks |
Re^3: Searching large files a block at a time
by kcott (Archbishop) on Aug 02, 2017 at 06:52 UTC ( [id://1196514] )
"... helped me understand what I was doing wrong ..."

OK, that's a good start.

"Using your code, I get results in ~10 seconds, which is acceptable (though still a lot slower than the shell script that pipes into Perl, and I'm not sure why that is)."

I'm completely guessing, but the overhead may be due to the IO::Uncompress::Bunzip2 module. You could avoid that module by setting up the same pipe from within the Perl script (rather than piping to that script). I put exactly the same data I used previously into a text file (just a copy and paste).
I then modified the start of my previous example code, so it now looks like this:
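The modified code listing was not preserved in this copy of the post. A minimal sketch of the pipe-open approach it describes, assuming (hypothetically) that the earlier script looped over files and processed them line by line, might look like this:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;            # failed opens/closes throw exceptions automatically

# Hypothetical reconstruction: the file list and the per-line
# processing are assumptions, not the original code from the thread.
for my $file (glob '*.txt') {
    # '-|' forks, runs the command, and connects its STDOUT to $fh.
    # Swap 'cat' for '/usr/bin/bzcat' (and *.txt for *.bz2) to read
    # compressed files through the same filehandle.
    open my $fh, '-|', 'cat', $file;
    while (my $line = <$fh>) {
        print $line;    # stand-in for the real per-line processing
    }
    close $fh;
}
```

The list form of the pipe open (command and arguments as separate list elements) bypasses the shell, which avoids quoting problems with unusual filenames.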
This produces exactly the same output as before. Obviously, you'll want to change 'cat' to '/usr/bin/bzcat' (and, of course, use *.bz2 instead of *.txt files). This solution will not be platform-independent; that may not matter to you. See open for more on the '-|' and closely related '|-' modes.

Also, note that I used the autodie pragma. If you want more control over handling I/O problems, you can hand-craft messages (e.g. open ... or die "..."), or use something like Try::Tiny.

— Ken
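A sketch of the hand-crafted alternative to autodie (the filename here is hypothetical):

```perl
use strict;
use warnings;

my $file = 'data.txt.bz2';   # hypothetical filename for illustration

# Without autodie, check the pipe open yourself and craft the message;
# $! holds the OS-level error (e.g. "No such file or directory").
open my $fh, '-|', '/usr/bin/bzcat', $file
    or die "Can't pipe from bzcat for '$file': $!";
```

Try::Tiny's try/catch blocks serve the same purpose when you want to catch the failure and continue rather than die outright.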
In Section: Seekers of Perl Wisdom