But the other way is fine? That is, function1 may crash and take the entire process down with it?
What you could do is, for each line, fork twice: the first child calls function1, the second child calls function2. Both children exit afterwards. The parent waits until the children are done before reading the next batch. That way the process will not be stopped if either function crashes on a batch.
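A minimal sketch of that approach, treating function1/function2 as your own processing subs and 'records.txt' as an assumed input file name:

    use strict;
    use warnings;

    # function1/function2 stand in for your own processing subs.
    open my $fh, '<', 'records.txt' or die "open: $!";   # assumed input file

    while ( my $line = <$fh> ) {
        my @kids;
        for my $func ( \&function1, \&function2 ) {
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ( $pid == 0 ) {          # child: run one function on this line
                $func->($line);
                exit 0;                 # then quit without touching the loop
            }
            push @kids, $pid;           # parent: remember the child's PID
        }
        waitpid $_, 0 for @kids;        # parent blocks until both children exit
    }

A crash in a child then only takes down that child; the parent can inspect $? after waitpid to see how each one ended.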
But here "function1" and "function2" are not processing just for every record. they are processing first record and then join the results to second record after processing and so on...so in short i cannot kill the process while returning to start of the loop.
Then your setup is flawed. You do not want to fork in each iteration.
Why not fork before opening the file, and have each subprocess iterate over the file and process it? Or just write two programs?
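A minimal sketch of the first suggestion, again treating function1/function2 as your own subs and 'records.txt' as an assumed file name. Each child opens and reads the file on its own, so it can carry its running state from one record to the next, and a crash only kills that one child:

    use strict;
    use warnings;

    my $file = 'records.txt';              # assumed input file name

    for my $func ( \&function1, \&function2 ) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        next if $pid;                       # parent: go fork the other child
        open my $fh, '<', $file or die "open $file: $!";
        while ( my $line = <$fh> ) {
            $func->($line);                 # child keeps its own state across records
        }
        exit 0;
    }
    wait() for 1 .. 2;                      # parent reaps both children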
Of course, if there are even more non-obvious requirements, list them first ;-)