Re^3: Reading files, skipping very long lines... by pjf (Curate)
on Sep 30, 2005 at 01:03 UTC
All the suggestions so far have been fantastic, and it sounds like all you really need now is a very-fast 'discard line' subroutine.
Be aware that regardless of how efficient your code may be, you'll be limited by the speed of the I/O operations provided by your operating system. If you've got to read 380Mb from disk, that's going to take some time regardless of how you process it.
If possible, set your program running and take a look at what your system is doing. If you're on a unix-flavoured system, then top and time can help a lot. If you're hitting 100% CPU usage, and a lot of that is in userland time, then a tighter reading-loop may help. If you're not seeing 100% CPU usage, or you're seeing a very high amount of system time, then you're probably I/O bound, and you'll need faster disks, other hardware, and/or filesystems for your program's performance to improve.
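As a rough illustration of reading time's output, here's a throwaway pipeline standing in for your program (the dd/wc commands are placeholders, not part of the original advice; substitute your own script and data file):

```shell
# Time an I/O-heavy stand-in workload: stream 20MB of zeros through wc.
# In bash, `time` on a pipeline times the whole pipeline.
time dd if=/dev/zero bs=1M count=20 2>/dev/null | wc -c

# time prints something like:
#   real  0m0.15s   user  0m0.02s   sys  0m0.10s
# real much larger than user+sys  => mostly waiting on I/O (I/O bound)
# user close to real              => busy in your own code (CPU bound)
```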
Assuming that you are CPU bound, you can potentially write your 'discard line' subroutine in C, which allows it to be very fast and compact. Here's an example using Inline::C:
I haven't benchmarked that, but it should be both very memory efficient and fast. Be aware of the problem you'll encounter if skip_line() hits EOF before a newline; unless you're very sure of your input file, you'll want to improve upon the sample code provided here.
If you do benchmark, keep in mind that caching, by the operating system's file cache or the CPU, may make a significant difference to your end results.
All the very best,
Perl Training Australia