It is possible to deadlock two separate processes that communicate with one another if the protocol between them is not well designed.
Fair enough, but it still seems like a nitpick (in that it is possible, but not probable - whereas in threaded apps, deadlock is a common occurrence). Also, with separate processes, such a deadlock is much easier to recognize & debug.
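To make that concrete, here's a minimal sketch of the kind of protocol bug I mean (Python, with made-up peer processes connected by a pipe - not from any particular codebase): each side waits for the other to speak first, so both block forever in recv().

```python
import multiprocessing as mp

def peer(conn):
    # Protocol bug: this side waits for the other to speak first...
    msg = conn.recv()              # blocks forever -- the peer is also sitting in recv()
    conn.send("ack: " + msg)       # never reached

if __name__ == "__main__":
    a, b = mp.Pipe()               # two connected endpoints
    p1 = mp.Process(target=peer, args=(a,))
    p2 = mp.Process(target=peer, args=(b,))
    p1.start(); p2.start()
    p1.join(timeout=2); p2.join(timeout=2)
    print("deadlocked:", p1.is_alive() and p2.is_alive())   # True: both stuck
    p1.terminate(); p2.terminate()
```

Which is also my point about debuggability: a quick look at the process list shows two separate stuck processes, and killing either one clears the jam.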
The "fewer memory leaks" claim isn't necessarily true either
This one I stand by (in the statistical average case), particularly as the software grows more complex. As you mentioned, process termination is a natural "reset switch" that does wonders in keeping leaks down. But there's a whole host of other reasons that make memory leaks more common in threaded apps:
- Leaks are harder to recognize and pinpoint (and thus fix), since the reported "memory usage" is a macro-level observation across all the threads instead of a process-isolated observation that would single out the offender
- More frequent use of resource locks (i.e. semaphores), reference counting, and shared resources in threaded apps, making it easier to leave resources in a limbo state (see the sketch after this list)
- Maybe a redundant point, but the "increased coding complexity" of using threads raises the likelihood of a coding error in the first place
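To illustrate the "limbo state" point, here's a minimal sketch (Python; the job IDs, buffer size, and do_work failure are invented for illustration): a lock-guarded cache shared between worker threads, where one error path forgets its cleanup, so the buffers stay referenced for the life of the whole process instead of dying with a worker.

```python
import threading

cache_lock = threading.Lock()
cache = {}   # job_id -> large buffer shared between worker threads

def do_work(job_id):
    if job_id % 2:                               # invented failure mode
        raise RuntimeError("transient failure")

def worker(job_id):
    with cache_lock:
        cache[job_id] = bytearray(1024 * 1024)   # claim a shared resource
    try:
        do_work(job_id)
        with cache_lock:
            del cache[job_id]                    # released only on the happy path
    except RuntimeError:
        pass  # BUG: the error path forgets the cleanup; the buffer is now in
              # "limbo" -- still referenced, never used, and no thread's exit
              # will ever reclaim it (only killing the whole process would)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print("entries still pinned after all workers finished:", len(cache))   # 5, not 0
```

With one job per process, each of those buffers would have been reclaimed when its process exited; in the threaded version they sit pinned until someone notices the slow creep in the single process's overall memory footprint.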
As with all things involving coding, _any_ piece of code could be done "right" with no memory leaks, but I believe it's both much easier to introduce leaks AND much harder to pinpoint and fix them in threaded apps vs standalone processes.