in reply to Re: Closing stdout on CGI early.
in thread Closing stdout on CGI early.

My solution would be to use a database - I am a database guy... ;-)

When you want another process to handle something, delegate: write the parameters of what needs to be done into a table and set the status to 'Scheduled'. The "boss" process is then done and can return.
Later (nightly, say), you can run the "subordinate" script via cron. It reads all scheduled tasks and tries to process them: it changes each task's status to 'In Process', and after completion changes it again to 'OK' or 'Failure' (saving the result, sending email, or whatever you need). It exits when no more tasks are scheduled. I have no experience with Apache; if starting a new process via cron is too expensive, or if you need results processed ASAP, just leave the "subordinate" running all the time.
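The post deliberately includes no code, so here is a minimal sketch of the boss/subordinate scheme, using Python with SQLite just for a self-contained example (the table and column names are illustrative; the same SQL applies from Perl/DBI against any database):

```python
import sqlite3

# Hypothetical schema for the "task sheet" table the post describes.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id     INTEGER PRIMARY KEY,
    params TEXT,
    status TEXT DEFAULT 'Scheduled',
    result TEXT)""")

# "Boss" process: delegate the work and return immediately.
def schedule(params):
    db.execute("INSERT INTO tasks (params) VALUES (?)", (params,))
    db.commit()

# "Subordinate" process (started from cron, or left running as a daemon):
# pick up each scheduled task, record the outcome, exit when none are left.
def run_worker(process):
    while True:
        row = db.execute(
            "SELECT id, params FROM tasks WHERE status = 'Scheduled' LIMIT 1"
        ).fetchone()
        if row is None:
            break                      # no more tasks: worker exits
        task_id, params = row
        db.execute("UPDATE tasks SET status = 'In Process' WHERE id = ?",
                   (task_id,))
        db.commit()
        try:
            result = process(params)
            db.execute("UPDATE tasks SET status = 'OK', result = ?"
                       " WHERE id = ?", (result, task_id))
        except Exception as e:
            db.execute("UPDATE tasks SET status = 'Failure', result = ?"
                       " WHERE id = ?", (str(e), task_id))
        db.commit()

schedule("resize photo 42")
run_worker(lambda p: "done: " + p)
print(db.execute("SELECT status, result FROM tasks").fetchone())
# ('OK', 'done: resize photo 42')
```

The failure branch keeps the error text in the same `result` column, so the boss can inspect it later when deciding whether to re-schedule.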

Later, the "boss" process can check the status of the scheduled tasks (you may sort them by userID or something), and even re-schedule rejected tasks (after fixing the error).

Database transactions will handle all record locking.
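To illustrate that point: if you fold the status check into the UPDATE itself, the database guarantees only one worker can claim a given task, even with several subordinates running. A minimal sketch (again Python/SQLite for self-containment; the technique is the single-statement conditional UPDATE, which works the same way from Perl/DBI):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("INSERT INTO tasks (status) VALUES ('Scheduled')")
db.commit()

def claim(task_id):
    """Atomically flip a task from 'Scheduled' to 'In Process'.
    The WHERE clause makes the status test and the update a single
    statement, so only one worker can win the row; the losers see
    rowcount 0 and move on to the next task."""
    cur = db.execute(
        "UPDATE tasks SET status = 'In Process' "
        "WHERE id = ? AND status = 'Scheduled'", (task_id,))
    db.commit()
    return cur.rowcount == 1

print(claim(1))  # True  - the first claim succeeds
print(claim(1))  # False - the task is already 'In Process'
```

No explicit lock table or flag file is needed; the row's own status column is the lock.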

Does it make sense? I plan to use something like this in my system later, not now (so no code right now, sorry).

Can you monks with expertise in Apache tell me which method is better (less of a resource hog) for running the "subordinate": keeping it running all the time (no need to start a new process, sleeping between checks), or via cron, which requires starting a new process occasionally? I understand it is a trade-off; what would be the guidelines for the right decision?

To make errors is human. But to make a million errors per second, you need a computer.
