http://www.perlmonks.org?node_id=999861


in reply to Re: to thread or fork or ?
in thread to thread or fork or ?

Thanks for your suggestions/comments. To be a bit more forthcoming, I should describe something closer to the real problem. The input data set can be hundreds of GB (perhaps up to TB), and the "words" are states of a system that can number in the billions (it's actually closer to the CERN problem someone suggested than the toy example I gave). The full frequency spectrum won't fit in memory on a single node, so up to this point I've used a Perl solution in which the word-space is roughly load-balanced across hundreds to thousands of cores, each reading the entire dataset and keeping track only of the frequencies of its own portion of the full spectrum. The trouble with this is that the I/O bandwidth requirement is huge. I'm trying to reduce it by having each node in my cluster read the data only once and share it among all of the threads running on that node, instead of the current approach where every process reads the entire dataset. Does this change things at all?
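
For concreteness, here is a minimal sketch of the kind of per-process partitioning I mean (assuming whitespace-separated "words" read from STDIN and a hypothetical worker-index/worker-count pair passed on the command line; the real state encoding is of course more involved):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Each of N workers scans the whole dataset but keeps counts only for
    # the slice of the word-space that hashes to its own index.
    my ( $worker_id, $n_workers ) = @ARGV;

    my %count;
    while ( my $line = <STDIN> ) {
        for my $word ( split ' ', $line ) {
            # cheap 32-bit checksum of the word selects the responsible worker
            next unless unpack( '%32C*', $word ) % $n_workers == $worker_id;
            $count{$word}++;
        }
    }

    print "$_\t$count{$_}\n" for sort keys %count;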

Re^3: to thread or fork or ?
by BrowserUk (Patriarch) on Oct 19, 2012 at 04:59 UTC

    An architecture:

    Split your bigfile, by size, across N machines.

    Have a process on each machine that processes a filesize/N chunk of the bigfile. (Say, 32 machines each reading a different 32GB chunk of your 1TB file.)

    Each reader accumulates word counts in a hash until the hash size approaches its memory limit.

    (Assume ~1.5GB per 10 million words/keys on a 64-bit Perl; somewhat less on a 32-bit.)

    When that limit is reached, it posts (probably via UDP) the word/count pairs out to the appropriate accumulator machines, frees the hash, and continues reading the file from where it left off.

    • The file is only processed once.
    • It can be processed in parallel by as many reader processes/machines as your IO/Disk bandwidth will allow.
    • Each reader process only uses whatever amount of memory you decide is the limit.
    • You can split the accumulations across as many (or few) boxes as are required.
    • As only word/count pairs are exchanged, IO (other than reading the file) will be minimal.

    No threading, shared memory or locking required. Simple to set up and efficient to process.
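
    A rough sketch of the reader side, assuming a placeholder accumulator host list, UDP port and "word count\n" wire format (none of which are prescribed above); the accumulator for a given word is picked by hashing the word, so the same word always lands on the same box:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IO::Socket::INET;

        # Placeholder parameters -- adjust to the real cluster.
        my @accumulators = map { "acc$_.example.net" } 1 .. 8;
        my $port         = 9999;
        my $max_keys     = 10_000_000;   # ~1.5GB of hash on 64-bit Perl (see above)

        # One connected UDP socket per accumulator host.
        my @socks = map {
            IO::Socket::INET->new(
                PeerAddr => $_,
                PeerPort => $port,
                Proto    => 'udp',
            ) or die "udp socket to $_: $!";
        } @accumulators;

        my %count;

        sub flush_counts {
            while ( my ( $word, $n ) = each %count ) {
                # Same word always hashes to the same accumulator.
                my $i = unpack( '%32C*', $word ) % @socks;
                $socks[$i]->send("$word $n\n");
            }
            %count = ();                 # free the hash and carry on reading
        }

        # Process this reader's chunk of the bigfile (here: fed on STDIN).
        while ( my $line = <STDIN> ) {
            $count{$_}++ for split ' ', $line;
            flush_counts() if keys %count >= $max_keys;
        }
        flush_counts();                  # final partial flush

    (The accumulator side would be the mirror image: a recv loop adding each incoming word/count pair into its own hash.)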


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

    RIP Neil Armstrong

      Would you please elaborate on this good idea a bit further? Specifically, the part about the UDP transmission of the word/count pairs. Somehow the words have to be parceled out among the various machines that are counting them. How would you approach that piece of your scenario, such that the distribution is fair among the “hundreds or thousands of” nodes, and yet we can be sure (despite using UDP) that every pair is in fact counted or considered by someone? Also, what sort of UDP/TCP network bandwidth is this scenario relying upon?

        Assuming, based upon the OP's description:

        1 TB file containing approximately 100 billion 11-character words, evenly distributed. (On average, 1000 occurrences of each of 100 million unique words.)

        and the following (relatively modest) cluster/cloud hardware:

        1. File on raided drive array with 600MB/s sustained throughput.
        2. Commodity hardware with 16GB memory per machine.
        3. 1Gb/s network infrastructure.

        Each of 32 readers processes a different 32GB chunk of the file. Assuming the disk bandwidth is evenly allocated, each reader takes 32GB / (600/32 MB/s) = just under 30 minutes to read its chunk. (Assume the extraction/hashing/accumulation of counts can be done in the time between reads.)

        With 16GB of RAM, each machine will be able to accumulate all 100 million key/count pairs before it needs to flush its counts to the accumulators. Each key/count packet consists of an 11-character word plus a (way oversized) 7-digit count plus a 2-character terminator, for a 20-byte payload + 8 bytes of packet overhead = 28 bytes. Each flush therefore requires 2.6GB of data packets to be transmitted; times 32 readers = 84GB of total transmission.

        At 1Gb/s (call that 100MB/s for simplicity), that gives about 852 seconds, or a little under 15 minutes of transmission time.

        Job done in somewhat less than 1 hour.
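
        As a quick sanity check, the same back-of-envelope figures in Perl (the 28-byte pair size and the 600MB/s and 100MB/s rates are simply the assumptions above):

            #!/usr/bin/perl
            use strict;
            use warnings;

            my $readers     = 32;
            my $chunk_bytes = 32 * 2**30;           # 32GB chunk per reader
            my $disk_rate   = 600 * 2**20;          # 600MB/s shared across readers
            my $read_secs   = $chunk_bytes / ( $disk_rate / $readers );
            printf "read time per chunk : %.0f s (~%.0f min)\n",
                $read_secs, $read_secs / 60;

            my $unique      = 100_000_000;          # unique words per flush
            my $pair_bytes  = 28;                   # 20-byte pair + 8-byte overhead
            my $flush_bytes = $unique * $pair_bytes;
            my $total_bytes = $flush_bytes * $readers;
            printf "per-flush traffic   : %.1f GB; total %.1f GB\n",
                $flush_bytes / 2**30, $total_bytes / 2**30;

            my $net_rate = 100 * 2**20;             # call 1Gb/s ~ 100MB/s
            printf "transmission time   : %.0f s (~%.0f min)\n",
                $total_bytes / $net_rate, $total_bytes / $net_rate / 60;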

