Re: Multi-core and the future

by vrk (Chaplain)
on Sep 02, 2008 at 08:26 UTC


in reply to Multi-core and the future

The multicore boom is just as silly as the mega and gigahertz race. No-one really needs that much computing power. When I say no-one, I really mean it: you think you do, but in reality you could do everything you do now with machines from ten years ago. The obsession with producing faster and leaner computing units yearly, quarterly, and even monthly is sometimes fun to watch, but it makes you wonder whether there isn't a better target for all that intellectual and financial effort.

Rambling aside, my firm belief is that parallelism implementations and techniques should be invisible to the user of the programming language or library. This isn't to say they shouldn't be available; on the contrary. Having trivial-to-use implementations that prevent you from writing race conditions or deadlocks is crucial.

Consider sorting in the standard library of any programming language. Most of the time you use the standard mergesort or quicksort. You don't need to know how it is implemented; you just feed in an array and out pops a correctly sorted one. Parallelism is a harder problem than sorting, so the interface will likely never be quite this simple, but ideally all you would need to do is declare which pieces of code may run in parallel, and the language implementation would do the rest.
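
Something like the following sketch is what I have in mind (Perl 5, untested; parallel_map is a made-up helper, not an existing module). The caller only says which pieces of work may run in parallel and never touches a thread or a lock:

    use strict;
    use warnings;
    use threads;

    # The library owns the threads; the caller just hands over independent work.
    sub parallel_map {
        my ($code, @items) = @_;
        my @workers = map { threads->create($code, $_) } @items;
        return map { $_->join } @workers;
    }

    # No thread objects, no locks, no joins in user code.
    my @squares = parallel_map(sub { $_[0] ** 2 }, 1 .. 8);
    print "@squares\n";    # 1 4 9 16 25 36 49 64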

The obvious benefit is that the programmer can then make less of a mess of it. Parallelism is hard in the general case, but there are many good solutions to the problem. There is no reason why you should have to manage threads or mutexes yourself, unless you are writing the library code. We already have automatic memory management; we should have automatic parallelism.

Note that the above remark is not meant to be condescending. It is simply a waste of time and effort to insist on doing manually something that could and should be automated. No offense to C programmers either!

Perl 6 comes very close to the ideal, I think, and it may yet come closer. Hyperoperators and junctions are an excellent start, though I haven't seen any documentation or planning on how they will interact with side effects (for example, two functions &foo and &bar assigning to the same variable). Obviously they were meant for SIMD- and MIMD-style operations, not for that, so there is still a long way to go. Quite a lot of research has gone into parallelism (sometimes called multiprogramming, which has a nice sound to it), and threads, mutexes, and monitors are not the only options. The problem is, as always, finding a good compromise.
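
For instance (Perl 6 as I read the current synopses; untested, and the details may still shift), both of these forms leave the evaluation order unspecified, which is exactly what gives an implementation room to parallelize:

    use v6;

    my @a = 1, 2, 3, 4;
    my @b = 10, 20, 30, 40;

    # Hyperoperator: pairwise addition with no promised evaluation order,
    # so the additions may be done in parallel.
    my @sums = @a >>+<< @b;    # 11 22 33 44

    # Junction: $x is compared against all alternatives "at once"; again
    # no order is promised, which leaves room for parallel evaluation.
    my $x = 3;
    say 'small' if $x == 1 | 2 | 3;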

Since Perl 6 is not side-effect free (unlike, say, Haskell outside of I/O), it won't be as convenient to tell the compiler which blocks of code depend on each other and which don't, though analysing lexical variables may offer a way. In any case, parallelism and concurrency in the first version of Perl 6 won't be revolutionary.
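
As a toy illustration of the kind of dependency such an analysis would have to spot (my own example, not anything from the design documents): the first line below touches no shared state and could run in parallel freely, while the second reads @doubled and so has to wait for it.

    my @doubled = (1 .. 1000).map(* * 2);    # independent work: parallelizable
    my $total   = [+] @doubled;              # depends on @doubled: must run after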

--
print "Just Another Perl Adept\n";

Re^2: Multi-core and the future
by willyyam (Priest) on Sep 02, 2008 at 19:45 UTC
    The multicore boom is just as silly as the mega and gigahertz race. No-one really needs that much computing power. When I say no-one, I really mean it: you think you do, but in reality you could do everything you do now with machines from ten years ago.

    I must respectfully disagree. This is true for many, but I am running microsimulations that I need to parallelize across ten multi-processor 4 GHz machines so that we get answers by the end of the weekend, and this was not possible with the commodity machines of ten years ago.

    I think the web-browsing, email-reading, word-processing, spreadsheet-viewing masses only need such massive computing power because of OS bloat and a demand for pretty colours, but I need this much computing power to do my work.

Re^2: Multi-core and the future
by Anonymous Monk on Sep 03, 2008 at 02:45 UTC
    The multicore boom is just as silly as the mega and gigahertz race. No-one really needs that much computing power. When I say no-one, I really mean it: you think you do, but in reality you could do everything you do now with machines from ten years ago.

    Speak for yourself. We routinely process jobs that take many days to run. They are already distributed across multiple dual-core worker machines. We would benefit greatly from more powerful machines for the money, with more cores per box; more cores per box would help us somewhat more than more boxes at the same cost.

    I don't think our company is anything special. We're not some fancy schmancy research lab or something. Just a company that chugs through lots of data in our daily course of business.
