PerlMonks  

Re^3: [OT]: threading recursive subroutines.

by Corion (Pope)
on Feb 03, 2011 at 09:43 UTC ( #885939=note )


in reply to Re^2: [OT]: threading recursive subroutines.
in thread [OT]: threading recursive subroutines.

One idea that I find interesting in practice (but not from an algorithmic point of view) is Grand Central Dispatch: basically, a worker pool of threads (likely about 2x the number of cores) that is managed by the OS. This approach takes the burden of thread administration away from the program and moves it into a centralized infrastructure that ensures no more than a set limit of threads is running. It seems that Grand Central Dispatch suffers from different-but-still-ugly locking problems, but that's to be expected.
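A GCD-style pool can be approximated in stock Perl with threads and Thread::Queue - this is only a sketch, not GCD itself, and the pool size of 4 is an arbitrary stand-in for "2x cores". Note that Thread::Queue cannot carry code refs between threads, so the sketch queues plain data and keeps the code in the workers:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $tasks   = Thread::Queue->new;
my $results = Thread::Queue->new;

# Worker pool: each worker pulls task data off the shared queue, so
# no more than the fixed number of tasks ever runs at once.
my @workers = map {
    threads->create(sub {
        while (defined(my $n = $tasks->dequeue)) {
            $results->enqueue($n * $n);
        }
    });
} 1 .. 4;    # assumed pool size, standing in for "2x cores"

$tasks->enqueue($_) for 1 .. 10;
$tasks->enqueue(undef) for @workers;    # one terminator per worker
$_->join for @workers;

my @squares;
push @squares, $results->dequeue while $results->pending;
print "@{[ sort { $a <=> $b } @squares ]}\n";    # 1 4 9 ... 100
```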

I think that the idea of "futures" has merit - AnyEvent uses something like them in its AnyEvent->condvar mechanism. AnyEvent has at least some way of detecting deadlocks, but as it is basically single-threaded, detecting deadlocks there is easier, I presume. Having transparent "futures" in Perl is feasible through trickery with transparent proxy objects like the ones you present, or as Win32::OLE and MozRepl::RemoteObject implement in a more general fashion. I think that the performance hit you incur by heavily using tied variables will again negate many of the speed gains. I'm also not sure whether having transparent futures will actually help much, because you still need to make really sure that you don't fetch the result immediately after starting a calculation.
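The condvar-as-future pattern looks like this (AnyEvent is a CPAN module; the timer merely stands in for whatever asynchronous work would eventually fulfil the future):

```perl
use AnyEvent;

# A condvar acts like a future: recv blocks until send supplies a value.
my $future = AnyEvent->condvar;

# Fulfil the future from the event loop, here via a short timer.
my $timer = AnyEvent->timer(after => 0.1, cb => sub { $future->send(42) });

my $value = $future->recv;    # blocks here until the timer fires
print "$value\n";             # prints 42
```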

If there is a way to detect the dependency chain of futures ahead of time, implementing a thread pool together with the futures could limit the number of threads spawned by your implementation. But I'm not sure whether that's possible, as each future could spawn another future it depends on and thus quickly overrun any fixed limit. Maybe using Coro together with threads could allow switching the context without starting a fresh thread once we run out of CPU threads. But mixing Coro and threads is something that's very unlikely to work...

One of my "easier" thought experiments is the idea of switching Perl to asynchronous IO. I think this is somewhat easier to analyze: IO iterators are used more often than threads, their use does not require deep understanding, and (dead-)locking operations are unnecessary. For example, a "lazy file" could work with your/a "future" implementation by pretending to have read a line from a file until somebody looks at the value:

open_async my $fh, '/etc/passwd'
    or die "Couldn't read asynchronously from '/etc/passwd': $!";
while (<$fh>) {
    # $_ is a tied variable that will be filled in a background thread
    # or through asynchronous IO
    next unless /root/;
};

Of course, most existing Perl code will simply not benefit from asynchronous IO, as most code reads a line from a file and immediately inspects it, which negates all the benefit we gain from freeing Perl from waiting for the line to be fetched. Maybe we could gain something by requesting more than one line (or more than one buffer) to be read ahead from the file, but that will likely get problematic if we try to use tell and other methods for looking at files.
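The read-ahead idea can be sketched with a background thread and a bounded queue - a plain-threads stand-in for the hypothetical open_async above, reading this very script as demo input:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Read-ahead sketch: a background thread fetches lines and buffers up
# to 64 of them, so the consumer rarely waits on the disk.
my $file  = $0;                 # this script itself, as demo input
my $lines = Thread::Queue->new;
$lines->limit = 64;             # bound the read-ahead buffer

my $reader = threads->create(sub {
    open my $fh, '<', $file or die "Couldn't read '$file': $!";
    $lines->enqueue($_) while <$fh>;
    $lines->enqueue(undef);     # end-of-file marker
});

my $matches = 0;
while (defined(my $line = $lines->dequeue)) {
    $matches++ if $line =~ /use/;   # consumer sees lines as they arrive
}
$reader->join;
print "$matches lines matched\n";
```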

My conclusion is that implicit and transparent parallelism will likely simply not work, because existing procedural programs will not make use of it. So in any case, specialization from the straightforward approach towards a more parallel approach is necessary (for Perl programs). The likely "easy" solution is a callback-oriented or continuation-passing model using closures, like AnyEvent or Node.js: you start the asynchronous call and give it a callback to execute once the operation completes.
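The continuation-passing shape needs no modules at all - the sketch below uses a hypothetical fetch_first_line that, instead of returning its result, takes callbacks and invokes one on completion, the way AnyEvent and Node.js code is written:

```perl
use strict;
use warnings;

# Continuation-passing sketch: the operation takes callbacks and
# invokes one when it completes. fetch_first_line is a made-up name
# for illustration; here the "asynchronous" work is an ordinary read.
sub fetch_first_line {
    my ($file, $on_done, $on_error) = @_;
    open my $fh, '<', $file
        or return $on_error->($!);
    my $line = <$fh>;
    $on_done->($line);          # "completion" callback
}

my $got;
fetch_first_line($0,            # read this script itself as demo input
    sub { $got = $_[0]; print "got: $got" },
    sub { warn "failed: $_[0]\n" },
);
```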


Re^4: [OT]: threading recursive subroutines.
by BrowserUk (Pope) on Feb 03, 2011 at 10:30 UTC

    I did say up front that my interest in this is not directly related to Perl. Leastwise not as it stands with Perl 5 and iThreads. The overheads of the Perl5 function calls (including ties), combined with those of iThreads, make this a non-starter as a realistically usable bolt-on to Perl 5.

    However, I do have notions and bits of code for a 64-bit-only interpreter that demonstrates highly efficient function & method call performance and would, if I ever get around to implementing it, allow for transparent parallelisation.

    The concept is not dissimilar to GCD or the Erlang SMP VM.

    In effect, every function/method call is queued (with its arguments, as closures) on a central queue rather than executed immediately, and immediately returns a future.

    One (or more) interpreters per core are instantiated at start-up, and they loop over the central queue executing the coderefs with closures in turn as they come off the queue.

    The futures contain a monotonically increasing 'sequence number'. The queue is effectively prioritised according to these sequence numbers.

    Everything--numbers, strings, code-blocks (functions and methods, but also the bodies of if statements etc.)--is an object reference. Objects carry being-read and being-written flags.

    Any method requiring write access to an object will be requeued if that object is being read or written. Any method requiring read access will be requeued if the object is being written.
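    The bookkeeping described above - queued closures, monotonically increasing sequence numbers, futures forced on demand - can be simulated in a few lines of single-threaded Perl. This is purely a sketch with hypothetical names (enqueue_call, force), and it has none of the real design's parallelism or read/write flags:

```perl
use strict;
use warnings;

my (@queue, $seq);

# Queue a call (code plus arguments, captured in a closure) and
# return a future carrying a monotonically increasing sequence number.
sub enqueue_call {
    my ($code, @args) = @_;
    my $future = { seq => ++$seq, done => 0 };
    push @queue, sub {
        $future->{value} = $code->(@args);
        $future->{done}  = 1;
    };
    return $future;
}

# Force a future: drain the queue (already in sequence order, since
# sequence numbers only increase) until the future has been computed.
sub force {
    my ($future) = @_;
    (shift @queue)->() while !$future->{done} && @queue;
    return $future->{value};
}

my $f = enqueue_call(sub { $_[0] + $_[1] }, 2, 3);
print force($f), "\n";    # prints 5
```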

    The problem child is currently recursion.

    There is much that is yet to be explored.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      I think that this trampoline approach makes sense and would mesh well with asynchronous IO and "green" (userspace) threads as Coro implements them:

      Once your central code dispatcher (GCD) runs out of idle CPU cores/worker threads, it doesn't dispatch function calls by launching a new native thread but suspends the currently executing context and switches to another context using the method that Coro employs, switching out the (relevant part of the) C stack for another. This would basically be transparent to the main code, except for the drawback that parallel execution means more race conditions. When using Coro alone, you don't get nasty race conditions, because there is no parallel execution of Perl code. But you're not worse off than with plain threads.
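      The context switch Coro provides looks like this (Coro is a CPAN module; async creates a "green" thread and cede hands control to the next ready coroutine, with no native thread involved):

```perl
use Coro;

# Cooperative switching: main keeps running until it cedes, then the
# queued coroutine runs, then control returns to main.
my @order;
async { push @order, 'coroutine' };
push @order, 'main';
cede;                    # switch to the coroutine, then return here
print "@order\n";        # prints "main coroutine"
```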

      The small problem that remains is that threads and Coro are unlikely to mix well. Coro itself claims that it is not iThread-safe, and I have no reason to believe otherwise. I presume one of the more interesting problems will be how to move one coroutine context (basically a copy of the C or Perl call stack) across thread stack boundaries without messing up too many things. But maybe reusing the ideas (and clever macros, and development research) of Coro gives enough foothold to implement transparent switching between green and native threads.

      Update: Clarified sentence about my impression how badly Coro and threads interact.
