PerlMonks
Re: search/grep perl/*nix
by shmem (Chancellor) on Nov 25, 2017 at 19:19 UTC ( [id://1204261] )
I'd have thought that writing to a file and reading it back would slow things down, but it didn't!

The only difference is in the filehandle types involved. In the first case, the shell opens and closes $tmpfile; in the second, it opens and closes a pipe attached to the perl-side pipe filehandle created by qx (which perl creates anyway). So it is no surprise there is no difference, especially if you are working with an SSD instead of an old washing-machine type of disk drum (a modern disk may hold the entire file in its controller cache, so perl can read the file back even before it has been physically allocated via magnetism).

It would be more interesting to benchmark the shell chain against a pure perl solution, in which case perl loses here. Why? Because allocating the necessary data structures in perl means some overhead, whereas cut, sort and uniq deal only with char arrays[1], and are seasoned and thus optimized for their specific tasks.

Here's a file of ~132MB, one million records, created with
and a quick shot at timing:
This could make a difference with huge files. I haven't looked at the memory footprint, which might be another clue when deciding for or against a (dogmatic) "pure perl solution".

[1] afaik those utilities are UTF-8 agnostic
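The file-creating one-liner and the timing runs from the original post are not reproduced above. As an illustrative sketch only — the stand-in file, its field layout, and the exact cut | sort | uniq chain are all assumptions, not the post's actual setup — the comparison might look like this:

```shell
# Stand-in data: a small CSV fabricated purely for illustration; the
# post's real one-million-record, ~132MB file is not shown here.
seq 1 100000 | awk '{ print $1 "," ($1 % 7) }' > /tmp/bench.csv

# The shell chain: field extraction, sort, dedup -- each tool a small,
# long-optimized C program working on plain byte arrays.
time cut -d, -f2 /tmp/bench.csv | sort | uniq

# A one-pass pure-perl equivalent: a hash does the deduplication,
# paying the data-structure overhead discussed above.
time perl -F, -lane '$s{$F[1]}++ }{ print for sort keys %s' /tmp/bench.csv
```

Both commands print the same deduplicated field values (here, the digits 0 through 6); on large real inputs the specialized utilities tend to come out ahead, which matches the observation above.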
perl -le'print map{pack c,($-++?1:13)+ord}split//,ESEL'
In Section: Seekers of Perl Wisdom