http://www.perlmonks.org?node_id=580193


in reply to Re^5: Parrot, threads & fears for the future.
in thread Parrot, threads & fears for the future.

How about we have a bet on whether clusters are going away?

I didn't say or imply that "clusters were going away".

Only that the bar at which clusters need to be resorted to will be raised. That those currently having to use the smaller (4- to 32-way) clusters to get their work done will soon no longer need to deal with the latency, restricted bandwidth and topological problems involved with clusters, nor the complexity and expense of cluster-based software solutions, because they'll be able to use simpler, cheaper, cluster-in-a-box solutions.

Google, arguably the biggest user of clusters (pretty certainly the biggest commercial user), also uses commodity hardware. Will Google still be using clusters in 2010? Of course. But what say you to this:

Will Google move to using threads? Consider the possibilities.

For each given MapReduce task, they currently deploy M map tasks and R reduce tasks (where R is usually some multiple of M), each of which lives on a different machine within a cluster of ~2,000 machines. The intermediate outputs from the M map tasks are written/replicated to the local disks of two or three chunk servers within the same cluster. Each of the reduce tasks then reads those intermediate results from one or other of those chunk servers, processes them, and writes/replicates its results to two or three other chunk servers. Count the trips: two or three replicated writes, one read, then two or three more replicated writes of the results; roughly six network transports per piece of intermediate data.

Now, imagine if each group of 1 map task + N reduce tasks all ran within the same machine? Instead of each piece of intermediate data making 6 network transports, those reads and writes can benefit from the localisation optimisation that Google already use. That reduces bandwidth consumption immediately. And by quite a large factor.

Now further imagine that, instead of 1 Map task and N Reduce tasks per machine reading and writing to the local hard disk, you deploy 1 Map thread and N Reduce threads per machine. Now there is no need for the intermediate data to leave RAM.

You've gone from 6 cross-network transfers for each piece of intermediate data to 1 write and 1 read in local memory. How would that affect performance?
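To make that concrete, here's a minimal sketch of the shape of the idea using Perl's ithreads and Thread::Queue. It's a toy word-count, not Google's code: the intermediate key/value pairs go through in-memory queues instead of across the network:

#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $N = 4;    # reduce threads per machine

# One in-memory queue per reduce thread: the intermediate data
# changes address space, but never touches disk or the network.
my @queues = map { Thread::Queue->new } 1 .. $N;

# N reduce threads, each draining its own queue.
my @reducers = map {
    my $q = $queues[$_];
    threads->create( sub {
        my %tally;
        while ( defined( my $item = $q->dequeue ) ) {
            my ( $key, $value ) = split /\t/, $item;
            $tally{$key} += $value;                 # the "reduce" step
        }
        printf "reducer done: %d distinct keys\n", scalar keys %tally;
    } );
} 0 .. $N - 1;

# 1 map thread: each intermediate pair is partitioned to a reducer
# by key -- one memory write instead of six network transports.
my $mapper = threads->create( sub {
    while ( my $line = <STDIN> ) {
        for my $word ( split ' ', $line ) {         # the "map" step
            my $slot = unpack( '%32C*', $word ) % $N;
            $queues[$slot]->enqueue("$word\t1");
        }
    }
    $_->enqueue(undef) for @queues;                 # signal end-of-input
} );

$_->join for $mapper, @reducers;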

And another big argument against multi-threading is that it is hard to do. We have enough trouble finding people who can program semi-competently.

I really did lose you right at the top of the OP, didn't I? Had you read on, you would have realised that about 70% of my post was spent stating the difficulties (in rather more detail) that currently prevent threaded code being written and deployed. It then went on to suggest that there is a solution; but since you're dead set against threading, I won't bore you further by repeating it here.

A final note. Computing did not begin or end with the PC.

I'm well aware of that. I've lived and worked through it. My first programs were written to run on a DEC10 running TOPS. My college code ran mostly on a PDP11/45. My first database project was on clustered (twinned) PDP11/60s. The first commercial project I independently architected ran on a BBC Micro using 6502 machine code. My first interpreted language was REXX running under CMS over VM 370/XA on an IBM mainframe. Fully half my experience is writing and architecting software that runs on machines other than PCs: from embedded systems on microcomputers, to database work on minis, to Big Stuff on Big Iron.

From e-commerce (when it was still called EDP); through scientific work using images to visualise huge quantities of data; through database work deploying and retrieving literally millions of paper (OMR) university entrance & examination papers trans-nationally across the breadth of 6 entire West African countries (3 jumbo jets full of paper in either direction), then processing and collating the information into another jumbo jet full of paper reports in 3 weeks. And much more.

You (and merlyn) rail on about your respective depths of experience, but from my perspective, based upon the experience that you have outlined here, you both have fewer years than me, and far narrower commercial experience. So please, stop trying to 'put me in my place' with your knowledge and depth of experience.

But just for grins, even the latest supercomputers are PCs. At least in name :)


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^7: Parrot, threads & fears for the future.
by tilly (Archbishop) on Oct 25, 2006 at 00:46 UTC
    I still strongly disagree.

    For instance you say: That those currently having to use the smaller (4- to 32-way) clusters to get their work done will soon no longer need to deal with the latency, restricted bandwidth and topological problems involved with clusters, nor the complexity and expense of cluster-based software solutions, because they'll be able to use simpler, cheaper, cluster-in-a-box solutions. And thereby miss the huge point that people use clusters to achieve a combination of performance and reliability.

    Take, for instance, the lowly webserver cluster. Even if you're serving a million dynamic pages per day, your webserver requirements are pretty modest: 2, maybe 3 decent pizza boxes cover it. However, you want more machines than that; maybe you have 5 or so. Why the extra machines? Because if something goes wrong with one of them, you can just pull it out of service and work on it. You want both performance and reliability, and reliability is achieved through having redundant hardware.

    Replacing a small cluster like this with one machine would be a stupid idea unless that one machine was engineered for an insane degree of reliability. At which point it is going to cost so much more that you'd be dumb to choose that route. (Unless the machine was already available for some other reason...)

    Because of this, I guarantee that small clusters of machines are not going away any time soon. Moore's Law may deliver machines whose performance exceeds the need, but the needs of commodity users guarantee that there is no pressure to make those machines reliable enough for many business uses. And businesses will achieve that reliability through clustering.

    About what Google will and will not do, this will make you happy. Their request of chip designers (which looks like it is being paid attention to) is more multi-threading on chip, and less speculation in execution plans. The reason for this is that any work that computers do takes electricity. Therefore if chips do a lot of code analysis and speculative out-of-order operations, they use more electricity per completed instruction than they do if they have multiple parallel executing threads. Given that declining hardware costs make electricity an ever bigger component of total cost, multi-threaded cores are more power efficient than single-threaded ones.

    That said, Google does not have people do a lot of multi-threaded programming. Their approach consists of having people do a lot of single-threaded programming and then putting it together in easily parallelizable chunks. Yes, I'm aware that this is similar to your proposal for how to actually write multi-threaded code. Yes, I'm aware that pugs is implementing something like this. However, my impression remains that Perl is a bad language to try to do this with, simply because it is so darned hard to prove that arbitrary code won't have side-effects and can therefore safely be parallelized.
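    A toy illustration of the problem (hypothetical code, nothing to do with pugs' internals): these two map blocks look structurally identical, but only the first is safe to parallelize, and telling them apart in general means proving something about every sub the block might call:

    use strict;
    use warnings;

    my @records = ( 3, 1, 4, 1, 5, 9 );

    # Pure: each iteration touches only its own argument, so the
    # iterations could run in any order, or all at once.
    my @doubled = map { $_ * 2 } @records;

    # Impure: each iteration mutates a variable shared across
    # iterations, so the result depends on execution order.
    my $running_total = 0;
    my @prefix_sums = map { $running_total += $_ } @records;

    print "@doubled\n";        # 6 2 8 2 10 18
    print "@prefix_sums\n";    # 3 4 8 9 14 23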

    I could be wrong about that, and I'll applaud loudly if people prove me wrong by succeeding very well at it.

    (Random note. Another kind of "easily parallelizable chunk" is a piece of SQL. Databases have moved towards doing parallel query execution, and will move farther that way as time goes on. Again, the programmer just writes single-threaded code, but behind the scenes it is parallelized. However, in this case the programming language does none of the work.)
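    A minimal sketch of what that looks like from the Perl side (the warehouse database, sales table and credentials below are made up; DBI and the DBD::Pg driver are real):

    use strict;
    use warnings;
    use DBI;

    # Single-threaded Perl issuing one declarative statement. Whether
    # the engine scans 'sales' with one worker or sixteen is entirely
    # its decision; nothing in this code changes either way.
    my $dbh = DBI->connect( 'dbi:Pg:dbname=warehouse', 'user', 'pass',
                            { RaiseError => 1 } );

    my $rows = $dbh->selectall_arrayref(
        'SELECT region, SUM(amount) AS total FROM sales GROUP BY region'
    );

    printf "%-10s %12.2f\n", @$_ for @$rows;
    $dbh->disconnect;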

    A final note about personal experience. I only bring up mine when I think it really is relevant to the point at hand. Which has nothing to do with breadth, and a lot to do with specifics. You definitely have been in computers longer than I have, and it sounds like you've done more kinds of things with them than I have. But that won't change whether or not my experience has any bearing on a specific example at hand. And if it does, I'll mention it. And if it doesn't, I won't.

    You'll note that I wasn't the one who brought my experience up in this thread. You were. And you brought up a whole ton of experience and then said nothing relevant to the point that you were responding to. Which was my claim that even if the PC heads towards multi-threaded implementations, that doesn't mean that programming is all headed towards being multi-threaded. If you think I was trying to "put you in your place" based on my depth of experience, then I'd suggest re-reading the thread. You made a claim that I thought was silly, and I responded honestly to it. (For the record, I think that any claim of the form The future of programming is X is silly. Programming is going in too many directions at once to be so simply characterized.)

    You'll also note that you didn't say that your wide experience contradicts the point that I made about different kinds of computing devices being out there. Which is that even if the PC is headed towards having many parallel threads of execution, that doesn't indicate that one can simply say that threading is the future of programming.

      that doesn't mean that programming is all headed towards being multi-threaded.

      Straw man.

      Update:

      I apologise for this post.

      It was stupid and crass, and exactly what I've taken others to task for doing in the past. (I.e. picking out one element and using it to dismiss the entire post.)

      I am sorry, and will follow up in detail if anyone is interested.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        You could follow up in detail just there instead of apologizing.

        I'm always interested in good posts, especially those debating controversial matters with a low flame/info ratio. Since you surely don't expect exhortation from each of us monks, I boldly stand up and shout "Yes, We Are!". From the votes cast upon this node I'll deduce whether that has been a good idea, and they might also answer your question... ;-)

        --shmem

        _($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                                      /\_¯/(q    /
        ----------------------------  \__(m.====·.(_("always off the crowd"))."·
        ");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}