Re^8: Recursive locks: killer application. Do they have one? (How high?)
by BrowserUk (Pope) on Feb 04, 2012 at 14:42 UTC
I guess you can call that blocking locking. I call it a spin lock.
I take it that when you said "blocking lock", you really meant a 'waitable lock'.
Waitable locks are required for some uses, but by no means all. And they suffer from the need for kernel involvement.
And it likely requires a bus lock, which can be a bad choice if you want high concurrency.
How high is "high concurrency"?
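For reference, this is what I mean by a spin lock. Not the code from my runs (see below if you want that), just a minimal sketch: a single word flipped with an atomic compare-exchange, which on x86 is where the LOCK-prefixed instruction -- the "bus lock" in question -- comes in. The name SpinLock is illustrative.

#include <atomic>

// Minimal spinlock sketch: acquire by flipping 0 -> 1 with a CAS,
// release by storing 0. No kernel involvement on either path.
struct SpinLock {
    std::atomic<int> state{0};                 // 0 = free, 1 = held

    void lock() {
        int expected = 0;
        // Each failed compare-exchange leaves `expected` holding the
        // current value, so reset it and spin until we win the CAS.
        while (!state.compare_exchange_weak(expected, 1,
                                            std::memory_order_acquire)) {
            expected = 0;
        }
    }
    void unlock() {
        state.store(0, std::memory_order_release);
    }
};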
Here are the results of runs ranging from 1 to 4096 contending threads performing 100 million additions between them:
I think that whether you consider clock time or cpu cycles, that scales pretty damn well, regardless of any bus locks involved.
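The harness itself is along these lines -- a sketch, not the actual code (ask if you want that): the thread count and the 100-million total are split evenly, each addition guarded by the SpinLock sketched above. The names lk, TOTAL and the default thread count are illustrative.

#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

static SpinLock  lk;               // the SpinLock sketched above
static long long counter = 0;      // shared target of the additions

int main(int argc, char** argv) {
    const int       thread_count = argc > 1 ? std::atoi(argv[1]) : 4;  // e.g. 1..4096
    const long long TOTAL        = 100000000LL;                        // 100 million additions
    const long long per_thread   = TOTAL / thread_count;

    std::vector<std::thread> pool;
    for (int i = 0; i < thread_count; ++i)
        pool.emplace_back([per_thread] {
            for (long long n = 0; n < per_thread; ++n) {
                lk.lock();
                ++counter;                     // the contended addition
                lk.unlock();
            }
        });
    for (auto& t : pool) t.join();

    std::printf("threads=%d  counter=%lld\n", thread_count, counter);
    return 0;
}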
For contrast, here are a couple of runs of the same code, same count, but using a waitable (OS) mutex:
Now, given the sheer scale of the lack of scalability here, I hope you'll forgive me for not finishing that series. So, to get an indication at higher contention levels, I'll drop the count by an order of magnitude:
Whilst the scaling doesn't look too bad -- at least once you get over the hump of having some contention -- it is certainly no better than the spinloop. And not only is it doing 1/10 the work, it is taking 10 times longer to do it.
But the real kicker is the cpu usage. Doing 1/10th the work using the waitable lock requires 3 times the clock cycles of the spinlock. So much for the efficiency of not spinning.
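The waitable-lock variant differs only in the guard. Again a sketch rather than the real code: the spinlock is swapped for an OS-backed mutex (std::mutex here), so contended acquisitions can park the thread in the kernel -- the ring transitions discussed below. The names os_mutex and worker are illustrative.

#include <mutex>

static std::mutex os_mutex;        // waitable, kernel-backed under contention

// Same per-thread work as the spinlock harness, guard swapped for an
// OS mutex; a contended lock_guard may block the thread in the kernel.
static void worker(long long per_thread, long long& counter) {
    for (long long n = 0; n < per_thread; ++n) {
        std::lock_guard<std::mutex> guard(os_mutex);
        ++counter;                 // same contended addition, different lock
    }
}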
Ultimately, whether the spinlock provokes bus locks or not is irrelevant, because bus locks are still vastly cheaper than the ring 3-0-3 (user to kernel and back) transitions the waitable lock incurs.
I'll post the code for the above if asked.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.