http://www.perlmonks.org?node_id=938499


in reply to "Just use a hash": An overworked mantra?

One comment that I would make, also (tangential to the immediate discussion though it be ...), is that unexpectedly good results can be obtained by using “an old COBOL trick,” namely: use an external disk-sort to sort the file first. Then you can simply read the file sequentially. Every occurrence of every value in every group will be adjacent ... just count ’em up until the value changes (or until end-of-file). Any gap in the sorted sequence indicates a complete absence, anywhere in the original file, of values falling into that gap. The amount of memory required is: “basically, none.”
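
By way of illustration (this sketch is mine, not part of the original post), the sort-then-scan pass might look something like the following in Perl, assuming a hypothetical input file rands.dat with one value per line and a GNU-style external sort(1) on the PATH:

    #!/usr/bin/perl
    # Sketch only: count occurrences of each value by sorting first and
    # then scanning the sorted output sequentially.  Assumes one integer
    # per line in 'rands.dat' and an external sort(1) on the PATH.
    use strict;
    use warnings;

    my $file = 'rands.dat';
    system( 'sort', '-n', '-o', "$file.sorted", $file ) == 0
        or die "external sort failed: $?";

    open my $in, '<', "$file.sorted" or die "open $file.sorted: $!";

    my ( $prev, $count );
    while ( my $value = <$in> ) {
        chomp $value;
        if ( defined $prev and $value eq $prev ) {
            ++$count;                                  # still in the same run
        }
        else {
            print "$prev\t$count\n" if defined $prev;  # run ended: emit its count
            ( $prev, $count ) = ( $value, 1 );
        }
    }
    print "$prev\t$count\n" if defined $prev;          # flush the final run
    close $in;

The counting pass itself holds only the previous value and one counter in memory, which is the point of the technique.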

And, yes ... you can sort a file of millions of entries, and you’ll be quite pleasantly surprised at how well even a run-of-the-mill disk sorter (or Perl module) gets the job done. It isn’t memory-intensive (although it will efficiently use whatever memory is available). It is disk-space intensive, since it creates and discards temporary spill-files, but only moderately so.
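
If the memory buffer or the spill-file location matters, a GNU-style sort can be steered from Perl as well; the snippet below is a hedged sketch using GNU sort's -S (buffer size) and -T (temporary directory) options, with hypothetical paths:

    # Sketch: cap the sort's memory buffer and direct its spill-files
    # to a scratch disk (GNU sort's -S and -T options; paths hypothetical).
    system( 'sort', '-n', '-S', '512M', '-T', '/scratch/tmp',
            '-o', 'rands.dat.sorted', 'rands.dat' ) == 0
        or die "external sort failed: $?";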

The same technique is also ideally suited to comparing large files, or to merging them, because there is no “searching” to be done at all. Merely sort all of the files the same way, and the process is once again sequential.
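
For the comparison case, a minimal sketch (again mine, with hypothetical file names a.sorted and b.sorted, both sorted with the same collation and ending in a newline) walks the two files in lock-step:

    #!/usr/bin/perl
    # Sketch: report lines unique to each of two files that have already
    # been sorted the same way (hypothetical names a.sorted and b.sorted).
    use strict;
    use warnings;

    open my $fha, '<', 'a.sorted' or die "a.sorted: $!";
    open my $fhb, '<', 'b.sorted' or die "b.sorted: $!";

    my $x = <$fha>;
    my $y = <$fhb>;
    while ( defined $x and defined $y ) {
        if    ( $x lt $y ) { print "only in a: $x"; $x = <$fha>; }
        elsif ( $x gt $y ) { print "only in b: $y"; $y = <$fhb>; }
        else               { $x = <$fha>; $y = <$fhb>; }   # line present in both
    }
    # Drain whichever file still has lines remaining.
    while ( defined $x ) { print "only in a: $x"; $x = <$fha>; }
    while ( defined $y ) { print "only in b: $y"; $y = <$fhb>; }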

Re^2: "Just use a hash": An overworked mantra?
by BrowserUk (Patriarch) on Nov 17, 2011 at 02:11 UTC
    unexpectedly-good results can be obtained by using “an old COBOL trick,” namely, use an external disk-sort to sort the file first.

    How many times do you need to be told? No, they cannot! It takes at least 20 times longer!

    Proof.

    1. Using a hash takes 38.54 seconds:
      [ 1:27:09.40] c:\test>wc -l rands.dat
      100000000 rands.dat
      [ 1:27:48.30] c:\test>perl -nlE"++$h[ $_ ]" rands.dat
      [ 1:28:27.24] c:\test>
    2. Just sorting the same file takes almost 10 minutes!:
      [ 1:29:54.32] c:\test>sort -n rands.dat >rands.dat.sorted
      [ 1:39:03.08] c:\test>

      And that's before you run another process to perform the actual counting!

    To anyone with half a brain this is obvious.

    1. Using a hash requires:

      100e6 X the average time taken to read a record (IO).

      100e6 X the time taken to hash (H) the number and increment the value (I).

      ~Total time required: 100e6 * ( IO + H + I )

    2. Using a sort followed by a counting pass requires (at least):

      100e6 X the average time taken to read a record (IOR).

      100e6 X log2( 100e6 ) = 2,657,542,476 X the time taken to compare two lines (COMPL).

      100e6 X the average time taken to write a record (IOW).

      100e6 X the average time to read the sorted file (IOR)

      100e6 X the time taken to compare two lines (COMPL) + the time taken to increment a count (I) + the time taken to record that count (R).

      ~Total time required: 200e6*IOR + 100e6*IOW + 2,757e6*COMPL + 100e6*I + 100e6*R

      And that assumes that the whole dataset can be held and sorted in memory, thus avoiding the additional, costly spill and merge stages; and if it could be, there would be no point in using an external sort in the first place. (The comparison counts above are sanity-checked in the short sketch that follows.)
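
    A quick numeric check of the comparison counts quoted above (a sketch added for illustration, not part of the original timing runs):

      # Reproduce the n * log2(n) figure used above for n = 100e6.
      use strict;
      use warnings;

      my $n     = 100e6;
      my $log2n = log($n) / log(2);                     # log2(100e6) ~ 26.575
      printf "sort comparisons   : %.0f\n", $n * $log2n;         # ~2,657,542,476
      printf "plus counting pass : %.0f\n", $n * ( $log2n + 1 ); # ~2,757,542,476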

    And please note: This isn't a personal attack. I will respond in a similar fashion to anyone pushing this outdated piece of bad information. It only seems personal because you keep on doing it!

    By now, I'm half tempted to believe you are only doing it to provoke this response. But I dismiss that notion, as it would require me to attribute to you some kind of Machiavellian intent, and I prefer to believe in Hanlon's Razor in these cases.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.