Re: Recursive locks:killer application. Do they have one? (mu)

by tye (Cardinal)
on Feb 02, 2012 at 21:28 UTC


in reply to Recursive locks:killer application. Do they have one?

I guess I'm in more of the opposite camp. The example problem given tells me that they are "doing it wrong", but not for using a re-entrant mutex. If you have a class where "A::foo() acquires the lock. It then calls B::bar()", then you are already holding the lock too long. The mutex being non-reentrant isn't going to point this out to you. B::bar() might decide to do something that blocks, or it might try to acquire some other lock, and then you've got lock-acquisition order to worry about, which leads to deadlock problems.
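As a rough illustration of that hazard (the class and method names here are hypothetical, not from the original example), consider a C++ sketch in which A::foo() holds A's mutex across the call into B, while another thread goes through B and back into A, so the two threads take the same two locks in opposite orders:

    #include <mutex>

    class B;

    class A {
    public:
        explicit A(B& b) : b_(b) {}
        void foo();                              // locks mA_, then calls into B
        void poke() {                            // locks mA_ only
            std::lock_guard<std::mutex> g(mA_);
            // ... touch A's state ...
        }
    private:
        std::mutex mA_;
        B& b_;
    };

    class B {
    public:
        void bar() {                             // called with A's lock already held
            std::lock_guard<std::mutex> g(mB_);  // thread 1 order: A, then B
            // ... touch B's state ...
        }
        void baz(A& a) {
            std::lock_guard<std::mutex> g(mB_);  // thread 2 order: B, then A
            a.poke();
        }
    private:
        std::mutex mB_;
    };

    void A::foo() {
        std::lock_guard<std::mutex> g(mA_);      // take A's lock...
        b_.bar();                                // ...and hold it across the call into B
    }

If thread 1 is inside A::foo() while thread 2 is inside B::baz(), each ends up waiting for the mutex the other already holds, and a re-entrant mutex does nothing to save you.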

I've seen tons of code that uses re-entrant mutexes and isn't "doing it wrong". That example is more like: you have a class that mostly just deals with the bits that need to be under a specific mutex. So the code to be run under the mutex is kept very small and cohesive by being its own class that just concentrates on doing the locking right.

And the re-entrant mutex comes in because you have methods that can mostly tell that they don't need to grab the mutex, and so most of the time they don't. Since you only grab the mutex in the rare cases when you need it, you can easily end up with a simple, clear utility method that gets called both in contexts where the mutex isn't held and in contexts where it is held, and that might (even indirectly) only rarely decide that it needs to hold the mutex itself.

You can get around that by splitting any such method into two: say, doFooUnlocked(), which just does the work, and doFooLocked(), which just acquires the mutex and then calls doFooUnlocked(). doFooUnlocked() might be declared such that it can only be called from within the class. Then, if you are already holding the mutex, you call doFooUnlocked() directly.

But that solution requires the bifurcation of all methods that might call doFoo*(), which can lead to quite a mess.
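For concreteness, here is a minimal sketch of that split, assuming a C++ class guarding its state with a single std::mutex (the class and its members are made up; only the doFooLocked()/doFooUnlocked() naming comes from the description above):

    #include <mutex>
    #include <vector>

    class Widget {
    public:
        void doFooLocked() {
            std::lock_guard<std::mutex> g(m_);   // take the mutex...
            doFooUnlocked();                     // ...then do the actual work
        }

        void doBarLocked() {
            std::lock_guard<std::mutex> g(m_);
            // Already holding the mutex, so call the *Unlocked flavour;
            // calling doFooLocked() here would self-deadlock on a plain mutex.
            doFooUnlocked();
        }

    private:
        void doFooUnlocked() {                   // caller must already hold m_
            items_.push_back(42);
        }

        std::mutex m_;
        std::vector<int> items_;
    };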

But this type of concern mostly only pops up when doing the style of threading + locking that Java pretty much encourages, and I find that approach far too likely to end up as an unreliable mess once it tries to scale up the feature set it supports. So I don't do anything like that these days.

I wish I had a much more concrete example handy but it has been too many years since I was doing that type of work (in C++).

- tye        


Re^2: Recursive locks:killer application. Do they have one? (mu)
by BrowserUk (Pope) on Feb 02, 2012 at 22:32 UTC
    But this type of concern mostly only pops up when doing the style of threading + locking that Java pretty much encourages ...

    Pre-1.5 Java is certainly the poster boy for recursive locks -- synchronized blocks -- which also makes it the poster boy for all that is wrong with them.

    Hence the 1.5 moves to add finer-grained locks. Though I think they went too far the other way with the need to explicitly unlock.

    I like perl's current mechanism -- locking data rather than code blocks -- combined with its semantics -- automatic unlocking at the end of the encompassing block. What I dislike is the overhead of the current implementation, with its need to count and no timeout.

    Then, if you already are holding the mutex, you need to call doFooUnlocked(). But, that solution requires the bifurcation of all methods that might call doFoo*() which can lead to quite a mess.... I wish I had a much more concrete example handy.

    Ditto the last part. Whilst appreciating that your example is abstract, it doesn't sound right to me.

    IME, the whole idea of needing to retain a lock long enough to call out to another method just seems wrong to me. If what you need to do with the thing you are protecting is sufficiently complex to warrant calling a subroutine to do it, then your design strategy is all wrong. Like declaring variables, locks should be taken as late as possible and in as small a scope as possible. And that means they should not be held across function/method call boundaries.
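    As a small sketch of that principle (the names here are purely illustrative): do the expensive work outside the critical section and hold the lock only for the one shared update, so the lock never spans a call boundary:

    #include <map>
    #include <mutex>
    #include <string>

    std::mutex cacheMutex;
    std::map<std::string, std::string> cache;

    std::string renderReport(const std::string& key) {
        return "report for " + key;                     // stand-in for expensive work
    }

    void publish(const std::string& key) {
        std::string report = renderReport(key);         // no lock held here
        {
            std::lock_guard<std::mutex> g(cacheMutex);  // lock only around the insert
            cache[key] = report;
        }                                               // released again before returning
    }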

    I'm still open to the idea that there exist algorithms that require recursive locks, but until I find a concrete example that I can't re-write to not use them -- without having to jump through hoops to do so -- I'm pretty settled on the notion that they should only be used as a last resort rather than a first.


    For the most part, I think most of the bad press surrounding locking comes simply from bad programming. And most of that comes down to the belt-and-braces conservatism of applying locks to everything, too often, and, fundamentally, for too long.

    There is also a lot of bad academic research floating around -- the so-called 'classical concurrency problems', like the dining philosophers. There are reams and reams of academic treatises attempting to come up with provably correct algorithms to deal with it, but not one of them suggests the obvious solutions. Buy five more forks. Or eat with their fingers.

    By far the easiest way of avoiding locking problems is to avoid locking. Not always possible, but (I'm pretty sure) it is always possible to confine locking to very short pieces of code in very small scopes.

    But the proof will be in the counter example.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

    The start of some sanity?

      What I dislike is the overhead of the current implementation, with its need to count and no timeout.

      I find it hard to imagine how "need to count" can have more than the most trivial of impacts on the efficiency of a mutex.

      Ah, Java switching locking schemes explains why this is such a political football, then.

      I agree with many of your points above.

      IME, the whole idea of needing to retain a lock long enough to call out to another method just seems wrong to me.

      True. But the methods I'm talking about were tiny bits of utility code. Think something like "length" or "isReserved".

      The only reason the locking would get that complicated for these things was due to being careful to lock only when locking was required. So a huge fraction of invocations of some method would never even need to lock. When moving the "window" where the lock was held to the smallest possible scopes, those scopes would fairly often move down inside some internal utility method. This was C++, so there was more call for tiny utility methods compared to writing in Perl.

      Something like a "move" operation wouldn't have to lock unless either the source or destination was "shared". And a "clear" operation would boil down to a bunch of "move" operations with no outer lock, while a "shutdown" operation would lock and then "clear".
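      A rough reconstruction of that shape (not the original C++; the class, its members and the cheap isShared() test are all made up, with std::recursive_mutex standing in for whatever re-entrant mutex was actually used) might look like this:

      #include <mutex>
      #include <set>

      class Pool {
      public:
          void moveSlot(int slot) {
              if (isShared(slot)) {                        // rare case: must touch shared state
                  std::lock_guard<std::recursive_mutex> g(m_);
                  shared_.erase(slot);
              }
              // common case: purely thread-local bookkeeping, no lock taken
          }

          void clear() {                                   // no outer lock of its own
              for (int slot : allSlots())
                  moveSlot(slot);
          }

          void shutdown() {
              std::lock_guard<std::recursive_mutex> g(m_); // outer lock...
              clear();                                     // ...re-entered inside moveSlot();
          }                                                // a plain std::mutex would deadlock there

      private:
          // Stand-in for however the real code could cheaply tell "not shared"
          // without taking the lock.
          bool isShared(int slot) const { return shared_.count(slot) != 0; }
          std::set<int> allSlots() const { return {1, 2, 3}; }

          std::recursive_mutex m_;
          std::set<int> shared_{2};
      };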

      But these days I don't program by writing a class and then trying to insert the locks required to make the class "thread safe". I design the system to not need locks except in a minimal number of key places. It is closer to "multiple processes" coding than to "multiple threads" coding.

      So, instead of some object that might be shared between threads, I'd have a mechanism for transferring responsibilities between threads that would transfer simple data and end up with either two similar, separate objects or one object being destroyed and another created.
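      A sketch of that hand-off style (the Job record and queue here are hypothetical, just to show the shape): the only locking in the design lives inside one small queue, and the receiving thread ends up owning the transferred data outright:

      #include <condition_variable>
      #include <memory>
      #include <mutex>
      #include <queue>
      #include <string>
      #include <thread>

      struct Job { std::string payload; };          // plain data, no locks of its own

      class JobQueue {                              // the only lock in the design
      public:
          void push(std::unique_ptr<Job> job) {
              { std::lock_guard<std::mutex> g(m_); q_.push(std::move(job)); }
              cv_.notify_one();
          }
          std::unique_ptr<Job> pop() {
              std::unique_lock<std::mutex> g(m_);
              cv_.wait(g, [this] { return !q_.empty(); });
              std::unique_ptr<Job> job = std::move(q_.front());
              q_.pop();
              return job;
          }
      private:
          std::mutex m_;
          std::condition_variable cv_;
          std::queue<std::unique_ptr<Job>> q_;
      };

      int main() {
          JobQueue queue;
          std::thread worker([&queue] {
              std::unique_ptr<Job> job = queue.pop();   // this thread now owns the Job outright
              // ...build its own private object from job->payload; no further locking...
          });
          queue.push(std::make_unique<Job>(Job{"transferred responsibility"}));
          worker.join();
      }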

      So, when I try to put my "multiple threads" programming hat back on, I would want re-entrant mutexes (and requiring an explicit unlock sounds like a really horrid idea). But, stepping back, I'd rather just not go back to that way of thinking and instead do a design that can be implemented with "multiple processes" even if the expected implementation is "multiple threads". That makes "re-entrant or not" mostly a moot question.

      - tye        

        I find it hard to imagine how "need to count" can have more than the most trivial of impacts on the efficiency of a mutex.

        See for yourself. It's not just time but also space efficiency.

        Here is perl's current implementation of recursive locking:

        typedef struct {
            perl_mutex mutex;
            PerlInterpreter *owner;
            I32 locks;
            perl_cond cond;
        } recursive_lock_t;

        void recursive_lock_acquire(pTHX_ recursive_lock_t *lock, char *file, int line)
        {
            assert(aTHX);
            MUTEX_LOCK(&lock->mutex);
            if (lock->owner == aTHX) {
                lock->locks++;
            } else {
                while (lock->owner) {
                    COND_WAIT(&lock->cond, &lock->mutex);
                }
                lock->locks = 1;
                lock->owner = aTHX;
            }
            MUTEX_UNLOCK(&lock->mutex);
            SAVEDESTRUCTOR_X(recursive_lock_release, lock);
        }

        And that lot -- a mutex, an owner, a locks count and a condition variable -- is built on top of this lot:

        typedef union
        {
          struct
          {
            int __lock;
            unsigned int __futex;
            __extension__ unsigned long long int __total_seq;
            __extension__ unsigned long long int __wakeup_seq;
            __extension__ unsigned long long int __woken_seq;
            void *__mutex;
            unsigned int __nwaiters;
            unsigned int __broadcast_seq;
          } __data;
          char __size[__SIZEOF_PTHREAD_COND_T];
          __extension__ long long int __align;
        } pthread_cond_t;

        And this:

        typedef union
        {
          struct __pthread_mutex_s
          {
            int __lock;
            unsigned int __count;
            int __owner;
        #if __WORDSIZE == 64
            unsigned int __nusers;
        #endif
            /* KIND must stay at this position in the structure to maintain
               binary compatibility. */
            int __kind;
        #if __WORDSIZE == 64
            int __spins;
            __pthread_list_t __list;
        # define __PTHREAD_MUTEX_HAVE_PREV 1
        #else
            unsigned int __nusers;
            __extension__ union
            {
              int __spins;
              __pthread_slist_t __list;
            };
        #endif
          } __data;
          char __size[__SIZEOF_PTHREAD_MUTEX_T];
          long int __align;
        } pthread_mutex_t;

        Which, when you realise that a non-recursive lock can be built atop a single bit, starts to look just a little indulgent.
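        For comparison, a toy non-recursive lock really can be built on little more than one test-and-set flag -- something like this deliberately naive C++ spinlock (no fairness, no blocking, it just burns CPU while it waits), shown only to illustrate the size difference, not as a replacement for a real mutex:

        #include <atomic>

        class SpinLock {
        public:
            void lock() {
                while (flag_.test_and_set(std::memory_order_acquire)) {
                    // spin until the flag is clear
                }
            }
            void unlock() {
                flag_.clear(std::memory_order_release);
            }
        private:
            std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
        };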



        I find it hard to imagine how "need to count" can have more than the most trivial of impacts on the efficiency of a mutex.

        Independent proof. Suck it!

        You are almost as bad as sundialsvc4; unfortunately, getting enough monks around here to recognise it is going to be an awful lot harder.


