|The stupid question is the question not asked|
I question your belief that concurrency implies the success of a language.
That's not what I said. A better paraphrase would be: a lack of concurrency in a new language, at this point in time, will severely impede its chances of successful, widespread take-up.
But note: I said "concurrency", not "threads". I'll come back to that.
There is a well-known myth ... that threads are the worst possible thing; however, that is because Perl programmers, who deal mostly with web programming, don't study threads well enough, so of course they cannot use them. As a retreat, they use POE instead, which is not real concurrency. They hide behind excuses of the form "threads are too complicated for the human brain".
It's quite easy to demonstrate that Perl5 threads are far easier to use, for almost everything, than POE.
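To make that concrete, here's a minimal sketch of the Perl5 ithreads API: spawn a few workers, then join them to collect their return values (the squaring is just a stand-in for real work):

```perl
#!/usr/bin/perl
# Minimal Perl5 ithreads sketch: spawn workers, join them to collect results.
use strict;
use warnings;
use threads;

sub work {
    my( $n ) = @_;
    return $n * $n;    # stand-in for real work
}

my @workers = map { threads->create( \&work, $_ ) } 1 .. 4;
my @results = map { $_->join } @workers;

print "@results\n";    # 1 4 9 16
```

That's the whole mental model: create, do work, join. No sessions, wheels, or event loops to wire together first.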
It's also possible to demonstrate that for many things, POE is more efficient than (current) Perl5 threads. And with rcaputo, one of the most talented and productive Perl programmers around, seemingly able to produce a module for any occasion, users don't have to learn the POE core; they just work out which 10 modules they need to string together.
For Perl5 web programmers, threads quite possibly are the worst possible thing. In the environments their code operates in--pre-forking and pre-threading HTTP servers with mod_perl or FastCGI--the added complications of Perl-internal threading are quite daunting. And for the most part, threads are not required in that environment. Each invocation of a script in an HTTP environment is most often a single flow. The most requested use of concurrency in a web server environment is the hoary ol' chestnut of keeping the client informed of the progress of a long-running process. And threading inside a Perl CGI script is not the best way of tackling that.
But those same programmers are quite likely not to find much utility in many (maybe even most) of the other major new features of Perl6. And perhaps that wrings out the cause of the controversy. They don't want Perl6 because they are not interested in learning the new features. What they want is a Ruby on Rails that uses Perl5 syntax, because it sounds really cool and looks really great to use...but they don't want to learn Ruby syntax either.
I'll stop there before I offend all the Perl Web coders. I know virtually nothing about web programming in Perl. But I suspect that Mr Wall's ambitions for Perl6 go somewhat beyond grabbing a bit of data from a DB, interpolating it into some text and throwing it at STDOUT.
But let's return to Perl 6: Parrot does not support concurrency. Let us suppose that concurrency will be implemented. Who would use it, since most Perl programmers already have the prejudice that threads are evil and bad?
There are a lot of scientific users--genomicists, physicists et al.--that are screaming out for simple, effective ways of utilising the full potential of their multi-cored hardware. AMD are now selling four-socket motherboards with 48 cores. Utilised properly, 16 grand gets you a box that will do a year's work in a week; a week's work in a morning; and a morning's work in 5 minutes. And the first language that gives non-expert programmers simple, direct access to utilising that power will be a winner.
And simple access means:
Whilst (Perl5) threads have a bad rep, that is mostly an inevitable fact of their birth. Retro-fitting them to a mature and highly non-reentrant interpreter is heroic, but was always going to take a few iterations to get right. It now mostly is. There are still some artifacts of the origins--the attempt to provide a fork facility on Windows, which is mostly still a failure.
As a result of those origins--the attempt to copy the fork way of working by providing the spawned thread with a duplicate of the spawning thread's environment--Perl's threads are still far too heavy. And the cloning that causes that is mostly a waste. Windows simply doesn't support enough of the *nix mechanisms to make fork emulation successful. For example: about 50% of forks are done only so as to be followed by an exec. But that doesn't work on Windows because there is simply no mechanism for replacing the contents of an existing process. So duplicating an entire 'process' as a thread, just so that it can wait on another real process, doesn't make sense. Ditto for signals. Ditto for piped processing. Etc.
But the basic mechanism that isolates one thread's data from another's is good. So good, in fact, that it can form the basis of a very effective and intuitive concurrency mechanism. Effectively, if you take a non-threaded build of Perl and start an interpreter in two (or more) threads, you have two isolated perls running completely independently of each other. Of course, that falls down when either attempts to access process-global state--filehandles and the like. And they have no way to share data.
Now, shared data is, by definition, global. So we have process-global state--filehandles et al.--and thread-shared state. If you simply combine those two, and provide locking around access, you end up with a very intuitive and functional basis for threading. To clarify: my variables can never be shared; our variables can!
There is no need to lock any thread-local lexicals (my), so we grab back some of the performance lost when threads were added. Global state and shared state share an arena independent of all threads, and access is controlled (semaphored internally), as is now the case for all shared data. User locking (or some other, more advanced, user-controlled access mechanism for shared state) is layered on top.
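Current Perl5 already gives a taste of that split with threads::shared--though today sharing is opt-in via the :shared attribute, rather than hanging off our as proposed above. A minimal sketch:

```perl
use strict;
use warnings;
use threads;
use threads::shared;

my $count : shared = 0;    # explicitly shared across all threads
my $private        = 0;    # plain lexical: each thread gets its own clone

my @workers = map {
    threads->create( sub {
        $private++;        # touches this thread's copy only
        lock $count;       # serialise access to the shared variable
        $count++;
    } );
} 1 .. 4;

$_->join for @workers;

print "count: $count, private: $private\n";    # count: 4, private: 0
```

The parent's $private is untouched by the workers; only the explicitly shared $count sees the updates.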
Lay that into a new development that has been primed for concurrency, of some form, from the get-go--i.e. with careful attention paid to reentrancy--and you have a relatively simple mechanism for effective, efficient, lightweight threading.
Of course, there is the potential for layering other mechanisms on top to simplify the user view of the shared state. Promises, for instance. Or STM, if that can be made to work in an interpreted environment with fat data entities. Or channels à la Go. Or whatever. But all of those can be after-the-fact add-ons: what POE::Wheel::* is to POE, or Parallel::ForkManager is to fork.
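Thread::Queue is an existing example of such an add-on: a channel-like layer over shared data. A minimal worker-pool sketch (the doubling is a stand-in for real work):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $jobs    = Thread::Queue->new;
my $results = Thread::Queue->new;

my @workers = map {
    threads->create( sub {
        # undef acts as the poison pill that stops this worker
        while( defined( my $n = $jobs->dequeue ) ) {
            $results->enqueue( $n * 2 );    # stand-in for real work
        }
    } );
} 1 .. 2;

$jobs->enqueue( 1 .. 10 );
$jobs->enqueue( ( undef ) x @workers );    # one pill per worker
$_->join for @workers;

my $sum = 0;
$sum += $results->dequeue_nb // 0 for 1 .. 10;
print "$sum\n";    # 110
```

The user never touches lock or :shared directly; the queue is the channel.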
And once you have the ability to run concurrent, independent, preempted interpreters, it becomes relatively easy to start an event-driven thread for asynchronous IO in one of them. And perhaps run a user-space, cooperative scheduler in another. But you need the ability to start an independent interpreter, in a (kernel) thread, first.
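With current ithreads you can already sketch the async-IO half of that: dedicate one thread to blocking reads and hand the data back over a queue, leaving the main thread free (reading this script's own source is just stand-in input):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $lines = Thread::Queue->new;

# The IO thread blocks on reads; nothing else has to care.
my $io = threads->create( sub {
    open my $fh, '<', $0 or die "open: $!";
    $lines->enqueue( $_ ) while <$fh>;
    $lines->enqueue( undef );    # signal end-of-stream
} );

# The main thread consumes lines as they arrive.
my $count = 0;
while( defined( my $line = $lines->dequeue ) ) {
    $count++;    # stand-in for real per-line processing
}
$io->join;
print "$count lines\n";
```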
And finally, on platforms where fork is native, it is a no-brainer to give access to it. But don't bother to try and emulate it on Windows. It isn't worth the hassle.
I think you should not assume the opinions of people who evangelize one language or another. Instead, you should judge them with your own mind and see for yourself whether they have been adopted or not.
If you are a regular around here, then you'll know that I never take other people's opinions as fact, and always reach my own conclusions on anything that I bother to express an opinion on.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.