Re^11: Thread terminating abnormally COND_SIGNAL(6)
by BrowserUk (Pope) on Jul 18, 2013 at 20:04 UTC
With those points in mind, do you still think multiple resource specific queues would be the best approach?
Yes. I'd tackle the 'which resource queue for non-specific jobs' problem using your option 1: queue it (the jobid) to all applicable queues. And to deal with the run-twice problem, I have a status field in the object that the first resource worker to dq it changes from 'pending' to 'running'. Any subsequent worker dqing it sees 'running', simply discards it, and grabs the next one.
Multiple tokens mean locking is required; the workers need to lock the status before reading it, to prevent two resources seeing 'pending' simultaneously.
But if the node hashes (within %nodes) are also shared, then you only need to lock the specified node, not the entire %nodes, so the lock time will be brief and contention low.
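A minimal sketch of the claim-with-lock pattern described above. The names here (%nodes, $Qres, the jobid 42) are assumptions for illustration, not your actual code; the point is that the lock covers only the one node's shared hash, and only for the duration of the status check-and-set:

```perl
use strict; use warnings;
use threads;
use threads::shared;
use Thread::Queue;

my %nodes :shared;
$nodes{ 42 } = shared_clone( { status => 'pending' } );

my $Qres = Thread::Queue->new;
$Qres->enqueue( 42, undef );    ## one job, then a terminator (demo only)

sub worker {
    while( defined( my $jobid = $Qres->dequeue ) ) {
        my $claimed = 0;
        {   lock %{ $nodes{ $jobid } };     ## lock just this node, not %nodes
            if( $nodes{ $jobid }{ status } eq 'pending' ) {
                $nodes{ $jobid }{ status } = 'running';
                $claimed = 1;
            }
        }                                   ## lock released here
        next unless $claimed;               ## another resource got it first
        ## ... run the job against this resource ...
    }
}
worker();   ## run inline here just to demonstrate
```

Any worker that loses the race sees 'running' and falls straight through to its next dq; no coordination beyond that one brief lock is needed.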
wouldn't checking the jobs be relatively quick? ... I just dont see how ... than dequeueing constantly
The problem is not (necessarily, or only) that it would be slower. It is that busy loops sap cpu, constantly redoing the same work over and over until something changes that allows them to move forward. (You've already seen this effect yourself, as identified by your comment in your cut-down code.)
With my alternative, the central dispatcher thread sits blocked (consuming no cpu at all) in a dq until a new job arrives, and wakes instantly (subject to getting a timeslice; usually a few microseconds) when one does. And then all it has to do is reQ the token to the appropriate resource Q and go back to blocking.
If the resource is available, it wakes up instantly to process it. And if not, it will grab it as soon as it's done with the preceding jobs.
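The dispatcher described above can be sketched like so. The queue names and the resourcesFor() lookup are hypothetical stand-ins for whatever maps a job to the resources that can run it; everything else is stock Thread::Queue, where dequeue blocks without consuming cpu:

```perl
use strict; use warnings;
use threads;
use Thread::Queue;

my $Qin  = Thread::Queue->new;   ## new jobids arrive here
my %Qres = map{ $_ => Thread::Queue->new } qw( cpu disk net );

## stand-in for the real 'which resources can run this job' lookup
sub resourcesFor { my( $jobid ) = @_; return 'cpu' }

my $dispatcher = threads->create( sub {
    ## dequeue blocks (zero cpu) until something arrives; wakes instantly
    while( defined( my $jobid = $Qin->dequeue ) ) {
        $Qres{ $_ }->enqueue( $jobid ) for resourcesFor( $jobid );
    }   ## ...and straight back to blocking
});

$Qin->enqueue( 1001 );
$Qin->enqueue( undef );   ## terminator, for this demo only
$dispatcher->join;
```

The resource workers then sit blocked in a dq on their own queue in exactly the same way, so nothing in the system spins.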
Conversely, with your single queue-cum-array affair, when a resource comes free it has to wait for you to notice (query the DB), build your hash, and then scan through all the pending jobs (Sod's law says it will always be the last one) before you reQ it.
If one of your popular resources gets a long job (or hangs, or crashes) and you build up a backlog for that resource, then jobs for every other resource, even idle ones, have to wait while you scan over that backlog every time.
Think of it like this (sad analogy time :). Would you rather have a receptionist in your local hospital who:
Blocking is always better than polling. It's why we have whistles on kettles and bells on phones and front doors. If you had to keep picking up the receiver to see if anyone was calling, phones would never have caught on :)
Definitely an intriguing idea. I'll keep this in mind, ...
If you went for the reQ-to-multiple-Qs idea above, the no-locking idea is a non-starter.
A bit more grist for your mill :) Good luck.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.