An interesting question. My reading of the requirement was that "at the end of the day, we want 10 servers to have updated themselves," so my thought is: immediately fire a command at every one of them at once, telling each to cvs update itself and then to send back a success-or-failure notification by some appropriate means. The script that issued all of those simultaneous commands, without having waited for any of them to complete, then simply waits for 10 final-status messages to arrive. You're just waiting for your proverbial mailbox to fill up, and heck, maybe you literally use e-mail to do it.
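A minimal, self-contained sketch of that pattern. The hostnames are placeholders, and a sleep stands in for the real remote invocation (which would be something like ssh "$host" 'cvs update -d' plus a mail command), so the script actually runs as written:

```shell
#!/bin/sh
# Fire-and-collect sketch: launch every "update" at once, then wait
# only for the final-status notifications. Hostnames are hypothetical;
# the sleep + status file stand in for ssh/cvs/mail.

statusdir=$(mktemp -d)

for host in server01 server02 server03 server04 server05 \
            server06 server07 server08 server09 server10; do
  (
    # Stand-in for: ssh "$host" 'cd /path/to/tree && cvs update -d'
    sleep 1
    # Stand-in for the notification (could literally be e-mail):
    echo "SUCCESS" > "$statusdir/$host"
  ) &
done

# The dispatcher never waited on any individual command; it just
# polls until the "mailbox" holds 10 final-status messages.
while [ "$(ls "$statusdir" | wc -l)" -lt 10 ]; do
  sleep 1
done

count=$(ls "$statusdir" | wc -l)
echo "all servers reported: $count messages"
rm -rf "$statusdir"
```

The point of the shape is that the dispatcher's only job is counting arrivals; the servers do all the real work in parallel.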
Obviously, this approach would put quite a load (so to speak...) on the internal network, as every one of the servers attempted to do a checkout at precisely the same time. But that "quite a load" might actually be quite reasonable.
Now, having said that: yes, this is clearly also a task that could be handled by forking a bunch of shell scripts. You could even do the whole job literally in the shell, using facilities like '&' on the command line, because each of the forked processes is just issuing a command and then loafing off, sipping mint juleps while the remote machine does its work. There is, as they say, TMTOWTDI™ in this case, and all of the ways are rather uncomplicated.
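The plainest shell form of that idea: fork one background job per server with '&', then let 'wait' block until they have all finished. The hostnames are made up, and an echo-plus-sleep stands in for the assumed ssh/cvs command so the sketch is runnable:

```shell
#!/bin/sh
# Fork-per-server sketch using '&' and 'wait'. Each forked process
# issues its command and loafs until done; the parent sips juleps
# at the 'wait'. Real command would be: ssh "$host" 'cvs update -d'

results=$(mktemp)

for host in alpha beta gamma; do
  {
    sleep 1                       # stand-in for the remote cvs update
    echo "$host done" >> "$results"
  } &
done

wait   # block until every background job has exited

done_count=$(wc -l < "$results")
echo "$done_count forked updates finished"
rm -f "$results"
```

Nothing here is clever, which is rather the point: '&' gives you the parallelism and 'wait' gives you the synchronization, for free.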
No matter how you decide to tackle it, if the approach you are considering feels complicated (not just "unfamiliar"), then there is probably an easier way to do it... that ought to be the litmus test.