I am trying to wrap my head around why saving a possible 30 seconds per device in this scenario was a less-than-optimal approach -- other than the fact that it causes me a lot of synchronization issues.
Okay. Using your numbers: 100 machines; 3 commands; 15 seconds per command; and 10 concurrent threads.
But: I've spawned 100 threads and made 100 connections. No locking, no waiting, no syncing to slow things down.
You've spawned 300 threads and made 300 connections. And you had to acquire locks and wait for them.
Given the IO-bound nature of the problem, the locking might not slow you down too much -- assuming you can get it right without creating deadlocks, livelocks, priority inversions, et al. -- but you've definitely consumed two or three times as much CPU, caused three times as much network traffic, put three times the load on the remote machines, and consumed more memory, all to achieve the same overall elapsed time.
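To make the comparison concrete, here is a minimal sketch of the "one thread per machine" approach in Python. The host names, commands, and `run_command()` helper are all hypothetical stand-ins for the real remote execution; the point is the shape: each thread owns one connection and runs its three commands sequentially, so nothing needs a lock.

```python
# Sketch only: run_command() stands in for real remote execution
# (e.g. over ssh). One thread per machine, three sequential commands,
# one connection each -- no shared state that needs locking.
import threading

MACHINES = [f"host{i:03d}" for i in range(100)]   # hypothetical hosts
COMMANDS = ["uptime", "df -h", "uname -a"]        # hypothetical commands

results = {}  # each thread writes only its own key, so no lock is needed

def run_command(host, cmd):
    # Stand-in for the real work; returns a canned string here.
    return f"{host}: ran {cmd}"

def worker(host):
    # Sequential commands per machine: ordering comes for free,
    # with nothing to synchronize between threads.
    results[host] = [run_command(host, cmd) for cmd in COMMANDS]

threads = [threading.Thread(target=worker, args=(h,)) for h in MACHINES]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The per-command alternative would need 300 threads, 300 connections, and locks to keep each machine's commands in order -- which is exactly the extra cost described above, for no reduction in elapsed time.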
It just isn't worth the hassle.