Not quite sure what you mean by the "basic functions", but map is a Perl primitive, and reduce is found in List::Util.
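For instance, a minimal sketch of the two in plain Perl (the data here is made up purely for illustration):

<code>
use strict;
use warnings;
use List::Util qw(reduce);

my @words = qw(camel llama alpaca);

# map is a builtin: transform each element independently.
my @lengths = map { length } @words;

# reduce comes from List::Util: fold the list down to one value.
my $total = reduce { $a + $b } 0, @lengths;

print "$total characters in total\n";   # prints "16 characters in total"
</code>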

But the paper isn't about map or reduce. It's about distributing work over a large set of unreliable (read 'cheap') computers. Since the work is distributed over stand-alone computers connected by an Ethernet network, you don't have any significant shared I/O. (Oh, sure, you've got lots of TCP/IP bandwidth, but that's nowhere near enough to share memory, or even disks.) This means you can only successfully distribute work that can be divided into parts, where each part can be worked on without needing the (intermediate) results of work on other parts. That's where map/reduce comes in: it separates the work that can be done independently (map) from the work that can't (reduce).

For instance, calculating the total number of links in a set of documents. Parsing each document and counting the links in it can be done independently of parsing any other document (map phase). Note also that parsing a document is a task that can easily be restarted without having to undo any other work, an important property for Google given the nature of their infrastructure. Summing all the individual counts (reduce phase) can't be done independently (because you need all the results), but that's a relatively small task compared to the parsing. (You might also recurse: sum the totals of groups of documents, then collect those subtotals.)
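A toy single-process version of that link-counting example might look like this (the documents and the naive regex are placeholders; a real implementation would use an actual parser):

<code>
use strict;
use warnings;
use List::Util qw(reduce);

# Placeholder input: each element is the text of one document.
my @documents = (
    '<a href="a">a</a>',
    '<a href="b">b</a> <a href="c">c</a>',
);

# Map phase: count links per document, each independently of the others.
my @counts = map { scalar( () = $_ =~ /<a\s/gi ) } @documents;

# Reduce phase: needs all the results, but the work is tiny.
my $total = reduce { $a + $b } 0, @counts;

print "$total links\n";   # prints "3 links"
</code>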

I've no doubt that you can write an implementation of MapReduce in Perl. (No doubt countless people have already implemented this on a much smaller scale using threads on a single computer, but without tolerance for failing components.) I'm not sure you'd want to, though.
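As a rough idea of what such a single-computer, threads-based version might look like (this assumes a threads-enabled perl and does nothing about failing workers):

<code>
use strict;
use warnings;
use threads;
use List::Util qw(reduce);

my @documents = ('first document', 'second document', 'third');

# Map phase: one worker thread per document (a real version would use
# a worker pool and re-run tasks whose workers die).
my @workers = map { threads->create(sub { length $_[0] }, $_) } @documents;

# Collect the independent results, then reduce them to a single value.
my @lengths = map { $_->join } @workers;
my $total   = reduce { $a + $b } 0, @lengths;

print "$total characters\n";
</code>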

