http://www.perlmonks.org?node_id=999868


in reply to Re^2: to thread or fork or ?
in thread to thread or fork or ?

An architecture:

Split your bigfile across N machines.

Have a process on each machine that processes a filesize/N chunk of the bigfile. (Say, 32 machines each reading a different 32GB chunk of your 1TB file.)

Each reader accumulates word counts in a hash until the hash size approaches its memory limit.

(Assume roughly 1.5GB per 10 million words/keys on a 64-bit Perl; somewhat less on a 32-bit build.)

When that limit is reached, it posts the word/count pairs (probably via UDP) to the appropriate accumulator machines, frees the hash, and continues reading the file from where it left off.

No threading, shared memory or locking required. Simple to set up and efficient to process.
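
A minimal sketch of one such reader process, assuming newline-delimited text; the chunk offsets taken from @ARGV, the accumulator addresses and the flush threshold are illustrative placeholders rather than anything from the thread:

    #!/usr/bin/perl
    ## Reader sketch: count the words in one chunk of the bigfile, flushing
    ## word/count pairs over UDP to the accumulator machines whenever the
    ## hash grows past a limit. Words straddling chunk boundaries are
    ## ignored here for brevity.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my( $file, $chunkStart, $chunkSize ) = @ARGV;

    my @accumulators = map "10.0.0.$_:9999", 1 .. 4;   ## hypothetical hosts
    my $MAX_KEYS     = 10_000_000;                     ## flush threshold (keys)

    ## One UDP socket per accumulator.
    my @socks = map {
        IO::Socket::INET->new( PeerAddr => $_, Proto => 'udp' )
            or die "socket: $!";
    } @accumulators;

    open my $fh, '<', $file or die "open: $!";
    seek $fh, $chunkStart, 0 or die "seek: $!";

    my %count;
    my $read = 0;
    while( $read < $chunkSize and defined( my $line = <$fh> ) ) {
        $read += length $line;
        ++$count{ $_ } for split ' ', $line;

        if( keys( %count ) >= $MAX_KEYS ) {
            flush( \%count );
            %count = ();
        }
    }
    flush( \%count );   ## final partial flush

    sub flush {
        my $counts = shift;
        while( my( $word, $n ) = each %$counts ) {
            ## Hash the word to pick the accumulator, so every reader
            ## sends a given word to the same machine.
            my $i = unpack( '%32C*', $word ) % @socks;
            $socks[ $i ]->send( sprintf "%-11s%7d\r\n", $word, $n );
        }
    }
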



Re^4: to thread or fork or ?
by sundialsvc4 (Abbot) on Oct 19, 2012 at 14:21 UTC

    Would you please elaborate on this good idea a bit further? Specifically, the part about the UDP transmission of the word/count pairs. Somehow the words have to be parceled out among the various machines that are counting them. How would you approach that piece of your scenario, such that the distribution is fair among the “hundreds or thousands of” nodes, and yet we can be sure (although using UDP) that every pair is in fact counted or considered by someone? Also, what sort of UDP/TCP network bandwidth is this scenario relying upon?

      Assuming, based upon the OP's description:

      a 1TB file containing approximately 100 billion 11-character words, evenly distributed (on average, 1,000 occurrences of each of 100 million unique words),

      and the following (relatively modest) cluster/cloud hardware:

      1. The file on a RAID drive array with 600MB/s sustained throughput.
      2. Commodity hardware with 16GB memory per machine.
      3. 1Gb/s network infrastructure.

      Each of 32 readers processes a different 32GB chunk of the file. Assuming the disk bandwidth is evenly allocated, each reader takes 32GB / (600/32 MB/s) ≈ 30 minutes to read its chunk. (Assume the extraction/hashing/accumulation of counts can be done in the time between reads.)

      With 16GB of RAM, each machine will be able to accumulate all 100 million key/count pairs before it needs to flush its counts to the accumulators. Each key/count pair packet consists of an 11-character word plus a (way oversized) 7-digit count plus a 2-character terminator, for a 20-byte payload + 8 bytes of packet overhead = 28 bytes. Each flush therefore requires 2.6GB of data packets to be transmitted. Times 32 readers = 84GB of total transmission.

      At 1Gb/s (call that 100MB/s for simplicity), that gives 852 seconds, or a little under 15 minutes of transmission time.
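
      A quick back-of-envelope check of those figures (a sketch only: it takes GB as 2**30 bytes and the network as a flat 100MB/s, so it lands within a few percent of the numbers quoted above):

          use strict;
          use warnings;

          my $readers  = 32;
          my $chunk    = 32 * 2**30;           ## 32GB per reader
          my $diskRate = 600e6 / $readers;     ## 600MB/s shared equally
          printf "read time per chunk : %.1f minutes\n", $chunk / $diskRate / 60;

          my $pairs  = 100e6;                  ## unique words per flush
          my $packet = 11 + 7 + 2 + 8;         ## word + count + CRLF + overhead
          my $flush  = $pairs * $packet;       ## bytes per reader flush
          printf "data per flush      : %.1f GB\n", $flush / 2**30;
          printf "total transmission  : %.0f seconds (%.0f minutes)\n",
              $flush * $readers / 100e6, $flush * $readers / 100e6 / 60;
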

      Job done in somewhat less than 1 hour.
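
      And, for completeness, a guess at what the receiving side might look like: an accumulator that merges the fixed-format datagrams into a single hash. The port number and the SIGINT-triggered dump are placeholders, and a real run would also need some end-of-job signal from the readers.

          #!/usr/bin/perl
          ## Accumulator sketch: receive the fixed-format word/count datagrams
          ## from the readers and merge them into a single hash.
          use strict;
          use warnings;
          use IO::Socket::INET;

          my $sock = IO::Socket::INET->new( LocalPort => 9999, Proto => 'udp' )
              or die "socket: $!";

          my %total;

          ## Dump the totals when told the job is done.
          $SIG{ INT } = sub {
              printf "%-11s %d\n", $_, $total{ $_ } for sort keys %total;
              exit 0;
          };

          my $datagram;
          while( $sock->recv( $datagram, 20 ) ) {
              ## Payload: 11-char space-padded word, 7-digit count, CRLF.
              my( $word, $count ) = unpack 'A11 A7', $datagram;
              $total{ $word } += $count;
          }
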

