
AnyEvent: How to protect critical sections?

by saintmike (Vicar)
on May 17, 2011 at 18:16 UTC ( #905330=perlquestion )

saintmike has asked for the wisdom of the Perl Monks concerning the following question:

What's the best way in AnyEvent to protect critical sections? For example, if I have a function critical() that I don't want to be interrupted by a call to interrupt() in the following code, what kind of mutex-like AnyEvent gimmick would I use?
use AnyEvent::Strict;
use AnyEvent;

my $w = AnyEvent->condvar();
my $critical_timer;
my $interrupt_timer;
my $in_critical = 0;

critical();
interrupt();
$w->recv();

sub critical {
    print "critical start\n";
    $in_critical = 1;
    $critical_timer = AnyEvent->timer(
        after => 1,
        cb    => sub {
            print "critical done\n";
            $in_critical = 0;
            critical();
        },
    );
}

sub interrupt {
    print "interrupt start\n";
    if ($in_critical) {
        print "D'oh!!! interrupt while in_critical\n";
    }
    $interrupt_timer = AnyEvent->timer(
        after => 1,
        cb    => sub {
            print "interrupt done\n";
            interrupt();
        },
    );
}

Replies are listed 'Best First'.
Re: AnyEvent: How to protect critical sections?
by Corion (Patriarch) on May 17, 2011 at 20:25 UTC

    If you have code you don't want interrupted, don't yield to AnyEvent functions; it's that simple.

    AnyEvent does not run anything in parallel, so as long as you are running your own Perl code, and not calling ->recv, ->send or ->timer, the AnyEvent loop won't run and your code will stay in the critical section uninterrupted.
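A minimal sketch (assuming AnyEvent is installed) illustrating this point: while one callback runs plain Perl code, the event loop cannot fire anything else, so a second timer that comes due in the meantime has to wait until the first callback returns.

```perl
use strict;
use warnings;
use AnyEvent;

my $cv = AnyEvent->condvar;
my @log;

# Timer 1's callback blocks with plain Perl code (a sub-second
# sleep via select). Timer 2 comes due at 0.2s, but cannot fire
# until timer 1's callback has returned.
my $t1 = AnyEvent->timer(after => 0.1, cb => sub {
    push @log, 'A start';
    select(undef, undef, undef, 0.5);   # block without entering the loop
    push @log, 'A end';
});
my $t2 = AnyEvent->timer(after => 0.2, cb => sub {
    push @log, 'B';
    $cv->send;
});

$cv->recv;
print join(', ', @log), "\n";   # A start, A end, B
```

So a "critical section" in AnyEvent is simply any stretch of your own code that does not re-enter the event loop.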

      I understand, but the problem is that I have two competing tasks: one is the timer firing, the other is an HTTP request callback with data available.

      I simply don't want the timer callback to be executed while the HTTP request is under way, but *right after* it completes.

      I guess I could just restart the timer and try to execute the interrupt later, but that's a) non-deterministic, because it could fail again, and b) not very efficient, because I'd like to run it as soon as the HTTP request has completed.

        Then I would cancel the timer when starting the HTTP request, and relaunch the timer in the HTTP request on_body callback (and likely in the on_error callback too). You can "immediately" launch a timer by launching it with after => 0 - this will launch the timer callback the next time the event loop is entered.
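A self-contained sketch of this cancel-and-re-arm approach. The HTTP request is simulated here by a one-shot timer (standing in for an AnyEvent::HTTP callback), and the helper names are assumptions:

```perl
use strict;
use warnings;
use AnyEvent;

my $cv = AnyEvent->condvar;
my ($timer, @events);

# (Re)start the periodic timer; destroying $timer cancels it.
sub start_timer {
    my $after = shift // 0.2;
    $timer = AnyEvent->timer(after => $after, cb => sub {
        push @events, 'tick';
        $cv->send;
    });
}

start_timer();
undef $timer;          # cancel: no tick can fire while the request runs

# Simulated HTTP request; in real code this would be e.g. an
# AnyEvent::HTTP callback (on_body / on_error).
my $request = AnyEvent->timer(after => 0.1, cb => sub {
    push @events, 'request done';
    start_timer(0);    # after => 0: tick runs on the next loop entry
});

$cv->recv;
print join(', ', @events), "\n";   # request done, tick
```

The tick is guaranteed to run after the request completes, without polling or retrying.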

      (I would have liked to reply to the original posting, but there is no reply button, apparently.) I don't know what the "best" way is, but one can simply use a global variable. Simple case:
      {
          local $ignore_interrupt = 1;
          ...
      }
      ...
      return if $ignore_interrupt;
      That loses "interrupt" events, which might not be acceptable. A solution for that is to store some marker in the variable.
      {
          local $ignore_interrupt = 1;
          ...
          do_interrupt if $ignore_interrupt == 2;
      }
      ...
      if ($ignore_interrupt) {
          $ignore_interrupt = 2;
      } else {
          do_interrupt;
      }
      The same kind of code also works in Coro. However, I firmly believe that you can always find a simpler way - e.g. by stopping and restarting the timer or similar code. Trying to recurse into the event loop somehow creates modality, which is kind of evil and tends to lead to more problems than it solves.
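A runnable sketch of the flag-with-marker pattern above (the sub names are assumptions): an interrupt arriving while the flag is set is remembered by bumping the flag to 2, and delivered right after the critical block.

```perl
use strict;
use warnings;

our $ignore_interrupt = 0;
my @ran;

sub do_interrupt { push @ran, 'interrupt' }

# Called by the event source (e.g. a timer callback).
sub on_interrupt {
    if ($ignore_interrupt) {
        $ignore_interrupt = 2;     # remember: deliver after the critical section
    } else {
        do_interrupt();
    }
}

sub critical {
    local $ignore_interrupt = 1;   # restored automatically on scope exit
    push @ran, 'critical';
    on_interrupt();                # arrives mid-critical: deferred, not run
    do_interrupt() if $ignore_interrupt == 2;
}

critical();
print join(', ', @ran), "\n";      # critical, interrupt
```

No interrupt is lost, and none runs inside the critical section.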
Re: AnyEvent: How to protect critical sections?
by BrowserUk (Patriarch) on May 17, 2011 at 22:25 UTC

    Having read and re-read the to and fro between you and Corion, for all the world it sounds like what you really need is real threading.

      He might not want the extra dependency on Coro, and the Windows-emulation code in perl (the other "threads") is of little help in this case.

        It would seem to me that threads would "perfectly" do what's needed, by having the "timer" thread call sleep and the other thread do the HTTP fetching. Then, the critical HTTP fetch part will need to be protected by a critical section, for example a Thread::Semaphore, or by having the timer queue a request through a Thread::Queue. Depending on the nature of the problem, saintmike might want to avoid queueing more requests while one request (to the same resource) is already in progress.
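A minimal sketch of the Thread::Queue approach (assuming a threaded perl; the fetch itself is stubbed out as a counter increment). The worker serializes requests, so fetches never overlap, and the timer thread skips a tick when a request is already pending:

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

my $q = Thread::Queue->new;
my $fetched :shared = 0;

# Worker thread: dequeues URLs one at a time, so fetches never overlap.
my $worker = threads->create(sub {
    while (defined(my $url = $q->dequeue)) {
        { lock($fetched); $fetched++ }      # stand-in for a blocking HTTP fetch
    }
});

# "Timer" thread: enqueue a request each tick, but only if none is
# already pending, so requests to the same resource don't pile up.
my $timer = threads->create(sub {
    for (1 .. 3) {
        $q->enqueue('http://example.com/') if $q->pending == 0;
        select(undef, undef, undef, 0.05);  # sub-second sleep
    }
    $q->enqueue(undef);                     # tell the worker to shut down
});

$_->join for $timer, $worker;
print "fetched $fetched requests\n";
```

The queue itself is the critical-section mechanism here: mutual exclusion falls out of having a single consumer.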

        Of course, real (OS) threads bring more concurrency problems than Coro does. I'm not sure which would cause fewer problems: worrying about only ever calling ->recv in one place and keeping track of which timers to restart, or worrying about locking the proper sections of the code so they run single-threaded or globally locked. Both are tractable problems, and neither seems like a clear winner over the other in this scenario.

Node Type: perlquestion [id://905330]
Approved by ikegami