I hope you don't have to work with a database that keeps timestamped transactions in tables, especially if there are any indexed columns: once you get beyond several thousand such records, indexing bogs down every insert/update, and a view/query takes forever. I've seen several systems/businesses end up struggling with this at some stage, because it doesn't scale well. On the other side of the spectrum, text-based transaction logging scales linearly, and runtimes for batch processing of such transaction logs are predictable. In any case, would it work better for you if, instead of trying to control several long-running background queries, the queries became suitably controlled batch processes with less direct control by a foreground controller? In other words, either just about entirely decoupled, or at least using a queue/execute/review mechanism that doesn't need so much direct control of the queries.
The reason for this sort of advice, instead of just code answering exactly what you asked, is that there are possible consequences to the query processes going zombie if the controller dies. That's my take on it anyway, and you may not be in a position to restructure bits at will.
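To make the queue/execute/review idea concrete, here's a minimal sketch using plain directories as the queue. All the names (the spool layout, `enqueue`, `work_one`) are illustrative assumptions, not from any particular system. The point is that the controller only *enqueues* and returns immediately; a separately scheduled worker (cron, at, whatever you like) does the execution, so nothing is left parentless if the controller dies, and the `done/` directory gives you the review step.

```shell
#!/bin/sh
# Sketch of a directory-based queue/execute/review mechanism.
# Layout and function names are hypothetical, for illustration only.

SPOOL=${SPOOL:-/tmp/qspool.$$}
mkdir -p "$SPOOL/queue" "$SPOOL/running" "$SPOOL/done"

# Controller side: drop a job script into the queue and return at once.
enqueue() {
    job="$SPOOL/queue/job.$(date +%s).$$"
    printf '%s\n' "$1" > "$job"
}

# Worker side: meant to be run from cron, not by the controller.
# Claims one job atomically via mv, runs it, leaves output in done/ for review.
work_one() {
    for j in "$SPOOL"/queue/job.*; do
        [ -e "$j" ] || return 1                      # queue empty
        claimed="$SPOOL/running/${j##*/}"
        mv "$j" "$claimed" 2>/dev/null || continue   # lost the race, try next
        sh "$claimed" > "$SPOOL/done/${j##*/}.out" 2>&1
        mv "$claimed" "$SPOOL/done/"
        return 0
    done
    return 1
}

# Demo: the controller enqueues and could exit here; the worker
# picks the job up independently on its own schedule.
enqueue 'echo batch query ran'
work_one
cat "$SPOOL"/done/job.*.out    # prints: batch query ran
```

Because claiming a job is a single `mv` (atomic on the same filesystem), you can run several workers in parallel without a lock manager, and anything sitting in `running/` after a crash is easy to spot and requeue by hand.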
the hardest line to type correctly is: stty erase ^H