http://www.perlmonks.org?node_id=938650


in reply to "Just use a hash": An overworked mantra?

When you are dealing with “huge” amounts of data, everything depends upon ... memory.   Do you have it, or do you not?   (And if it’s “virtual,” you don’t have it.)

Many computers these days have truly vast amounts of RAM, and entire roomfuls of machines are thus equipped.   Under these circumstances, the chances are quite good that a tremendous structure can be built in RAM and that all of those pages will be (and will remain) present.   If that is known to be the case, then “in-memory” solutions work just fine, and yes they do behave very nicely as BrowserUK points out (with his characteristic I-love big-fonts flair ...).

What becomes truly insidious about “in-memory” solutions, especially those based upon random-access data structures such as hashes, is when virtual-memory is constrained such that the entire block of data cannot fit into available physical RAM without incurring page faults.   A hash data-structure does not exhibit any locality of reference; quite the opposite.   Any reference to that structure could (worst-case) incur a page-fault, which suddenly transforms the entire algorithm from what you think is a fast, virtually I/O-free operation, into one that hammers your paging-device to death and brings the entire system to a screeching halt along with it.

If you plot the performance curve of a virtual-storage system as the stress which is placed upon it increases, you will observe a line that basically increases in a nice, more-or-less linear fashion u-n-t-i-l it “hits the wall,” the so-called thrash point.   At this instant, the performance curve suddenly becomes exponential.   And that, as I’ve said before (from Ghostbusters), is “real wrath-of-God stuff.”

BrowserUK is therefore entirely correct as long as you are well away from the thrash-point.   (And today, you might well be able to “throw cheap silicon at it” and thereby avoid the thrash-point entirely.   There is a reason why we have 64-bit systems now; soon to be 128.   Chips are cheap.)   But the punishment that can be inflicted, when and if it happens, is severe because it is exponential.

In passing ... it is quite interesting that sorting a multi-million record file should take “ten minutes,” which seems inexcusable.   There are interesting-looking articles here and also here.   Also specifically to our point, A Fresh Look at Efficient Perl Sorting, although it does not concern disk-sorts.
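
One way to keep a really big sort off the Perl heap altogether is to hand the job to an external merge sort, which streams runs through a bounded amount of memory and merges them on disk.   A minimal sketch (assuming a GNU-style sort(1) on the PATH; the file names and limits are placeholders):

    #!/usr/bin/perl
    # Sketch: sort a huge file on disk instead of in memory.
    use strict;
    use warnings;

    my $in  = 'huge_input.dat';     # placeholder input
    my $out = 'huge_sorted.dat';    # placeholder output

    # -T chooses where the merge runs are spilled; -S caps the RAM sort(1) may use.
    system( 'sort', '-T', '/var/tmp', '-S', '25%', '-o', $out, $in ) == 0
        or die "external sort failed: $?";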

A similar situation can happen with regard to accessing indexed files.   Once again we are dealing with a random-access data structure which may require some n physical I/O operations to retrieve the data, and which rewards locality-of-reference by virtue of caching recently-used index pages in RAM while discarding others.   Once again we have the “thrashing” phenomenon, albeit of a different kind and source.   Plentiful memory tends to mask the problem once again.   (Operating systems will dedicate leftover memory to file-buffering when there is no other competition for the space.)
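
In Perl terms, the classic way to trade RAM for indexed disk access is a tied hash.   A minimal sketch using DB_File as a B-tree (assuming the module and its Berkeley DB library are installed; the file name and the line-counting task are placeholders):

    use strict;
    use warnings;
    use Fcntl;      # O_RDWR, O_CREAT
    use DB_File;    # exports $DB_BTREE

    # Tie %count to an on-disk B-tree instead of holding everything in RAM;
    # recently-used index pages are cached, the rest stays on disk.
    my %count;
    tie %count, 'DB_File', 'counts.db', O_RDWR | O_CREAT, 0644, $DB_BTREE
        or die "Cannot tie counts.db: $!";

    while ( my $line = <STDIN> ) {
        chomp $line;
        $count{$line}++;
    }

    # A B-tree hands keys back in sorted order, which rewards locality of reference.
    print "$_\t$count{$_}\n" for keys %count;

    untie %count;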

When and if you hit a thrash-point problem, you will know.   The difference can be a matter of many hours, or the difference between a job that finishes and one that does not.   “Ten minutes” (or more...) becomes an acceptable price to pay when for example you are talking about a massive runs-through-the-night production batch job.   And those, really, are the kind of situations I am talking about.   Not the size of problem that can be effectively dealt-with by buying more chips.   Obviously, “if you’ve got the RAM, flaunt it.”

Re^2: "Just use a hash": An overworked mantra?
by BrowserUk (Patriarch) on Nov 17, 2011 at 22:04 UTC

    An array to hold 1000 integers requires 32k:

    perl -MDevel::Size=total_size -E '@a = 1 .. 999; say total_size \@a'    # prints 32144

    A hash to hold 1000 keys & integer values requires 100k:

    perl -MDevel::Size=total_size -E '$h{ $_ } = $_ for 1 .. 999; say total_size \%h'    # prints 109055

    So, on a machine with less memory than some musical birthday cards, the hash or array one-liners will perform this task efficiently, and will scale linearly up to files of 18,446,744,073,709,551,616 lines.
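
    For concreteness, a hash one-liner of the general shape under discussion (tallying duplicate lines here; the OP's actual task and the file name are assumptions) might look like:

        perl -lne '$count{$_}++; END{ print "$count{$_}\t$_" for keys %count }' huge_input.dat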

    To put that into perspective, it represents a single data file containing 16 Exabytes. Or approximately a million times more data than Google's entire storage capacity. If the OP can afford the amount of disk required to hold the file, it seems very unlikely that he'll have any trouble affording the 100k of RAM.

    In the meantime, it would take a computer running at 10GHz, and able to perform 1 comparison per clock cycle, 3,741 years to sort that file, assuming no other time costs, including IO or memory access.
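
    That figure survives a back-of-the-envelope check, if you assume a comparison sort needs roughly n*log2(n) comparisons:

        perl -E '
            my $n    = 2**64;            # lines in the hypothetical file
            my $cmps = $n * 64;          # ~ n * log2(n) comparisons
            my $secs = $cmps / 10e9;     # 10GHz, one comparison per clock cycle
            say int( $secs / ( 365.25 * 24 * 3600 ) ), " years";   # prints 3741
        '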

    So do the world a favour, and apply a little, the merest modicum, of thought to the problem at hand, before trotting out your olde worlde compooter wisdoms. Regurgitating received knowledge, long since superseded, as a substitute for actually thinking about the problem, does no one any good.



      For this problem, in which the known solution-space is constrained to what can fit into a reasonably sized hash and in which the total number of records and data-streams also fits into memory ... a memory-based solution works just fine, and there is utterly no reason to trundle out n-digit numbers to “prove” your point.

      My original comment, which I said even at that time was ancillary to the original discussion, is that there do exist other classes of problems which for various reasons do not lend themselves well to the “random-access based” (and to “memory-based”) approaches that might occur to you at first blush.   This might not be one of those cases, but it does not invalidate the fact that such problems do exist.   In those problems, the incremental costs of virtual-memory activity become a death by a thousand cuts.   A fundamental change of approach in those cases transforms a process that runs for days into one that runs in just a few hours.   I have seen it.   I have done it.   “Batch windows” are a reality for certain common business computing jobs.   Last year I worked on a system that processes more than a terabyte of new data, assimilated from hundreds of switching stations, every single day, and this was the change that gave them their system back.

      I was really, really hoping that in this case you wouldn’t rush out once again to prove how smart you are.   Let alone, as so many times before, publicly and at my expense.   Enough.

        For this problem, in which the known solution-space is constrained to what can fit into a reasonably sized hash and in which the total number of records and data-streams also fits into memory ... a memory-based solution works just fine,

        Ignoring the silly bit about "records and data-streams" fitting in memory. Exactly!

        My mother had a brilliant solution to the problem of grease stains on carpets that involved brown paper and an iron; but you don't see me trotting it out here at random.

        there is utterly no reason to trundle out n-digit numbers to “prove” your point.

        Beg to differ. There was a reason.

        Your continued insistence on trotting out the description of something that might prove to be a suitable solution to some other problem at some other place and time gave me that reason.

        I was really, really hoping that in this case you wouldn’t rush out once again to prove how smart you are. Let alone, as so many times before, publicly and at my expense.

        The only smarts involved is your exhibited lack thereof in posting inappropriate solutions to questions.

        The only expense involved is the time wasted by recipients of your "wisdoms", as they chase down blind alleys following them.

        Enough.

        We found something we can agree on.


Re^2: "Just use a hash": An overworked mantra?
by Tanktalus (Canon) on Nov 17, 2011 at 21:06 UTC
    Obviously, “if you’ve got the RAM, flaunt it.”

    Well, if you say so... :-D

    I found it interesting, so I tried some of BrowserUk's test scripts. I populated the rands.dat file - 0m48.948s. Then I loaded it into a hash - 0m37.042s. The array was significantly faster - 0m28.708s. Loading the data into an array took a fair bit of RAM, but since I am only using roughly 7GB of 12GB, I didn't encounter any swapping - 0m59.864s. Going by those first numbers, I must have something faster in my system already. However, not ten times faster. If you don't have the RAM, you may need to get it. :-)
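
    For anyone who wants to repeat the experiment, a rough sketch of that kind of harness (the rands.dat layout of one random integer per line, and the 25-million-line size, are assumptions rather than the actual test scripts):

        # generate rands.dat: one random integer per line
        perl -E 'say int rand 1e9 for 1 .. 25_000_000' > rands.dat

        # time loading it into a hash of counts
        time perl -lne '$h{$_}++' rands.dat

        # time loading it into a flat array
        time perl -lne 'push @a, $_' rands.dat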

Re^2: "Just use a hash": An overworked mantra?
by hbm (Hermit) on Nov 17, 2011 at 17:40 UTC
    (with [BrowserUK's] characteristic I-love big-fonts flair ...)

    How very “ironic”, or, as one “path through the wood” might reveal, Very Ironic™.