Re: When to use forks, when to use threads ...?

by wojtyk (Friar)
on Sep 04, 2008 at 17:09 UTC [id://709065]


in reply to When to use forks, when to use threads ...?

Well, they each certainly have their pros and cons. Personally, I opt for forking unless I have something that requires a significant amount of shared interaction (either through required shared memory or producer/consumer queues). I find the extra overhead of forking to be minimal on modern operating systems, and the gains are significant (a minimal fork sketch follows the list):

- fewer memory leaks
- process isolation
- fewer race conditions
- deadlocks/starvation not an issue
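
A minimal sketch of the fork-per-job pattern I have in mind (the job list and process_job routine are made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @jobs = ('resize', 'index', 'report');   # invented work items

    for my $job (@jobs) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                 # child: fully isolated copy
            process_job($job);           # a leak or crash here dies with the child
            exit 0;                      # OS reclaims all child memory on exit
        }
    }

    1 while waitpid(-1, 0) > 0;          # parent reaps every child

    sub process_job {
        my ($job) = @_;
        print "PID $$ handling $job\n";
    }

Each child lives and dies on its own: nothing to lock, and nothing for a leak to accumulate into across jobs.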

Look to the latest release of Google's Chrome as a perfect example. The current threaded asynchronous model of browsers isn't cutting it, so they're moving to a process-per-tab model for all of the above reasons.

Of course, this is a personal opinion and therefore flame-worthy, so take all advice with a grain of salt :)


Re^2: When to use forks, when to use threads ...?
by mr_mischief (Monsignor) on Sep 04, 2008 at 17:13 UTC
    It is possible to deadlock two separate processes that communicate with one another if the protocol between them is not well designed. The "fewer memory leaks" claim isn't necessarily true either, although it's easier to clean up after a small leak because the OS will reclaim the memory once the process exits.
      It is possible to deadlock two separate processes that communicate with one another if the protocol between them is not well designed
      Fair enough, but that still seems like a nitpick: it's possible, but not probable, whereas in threaded apps deadlock is a common occurrence. And with separate processes, it's much easier to recognize and debug.
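
      For concreteness, the mistake you describe might look like this contrived sketch, where both sides of a pipe pair read before writing and so block forever (all names are mine; run it with a timeout):

        use strict;
        use warnings;

        pipe(my $p2c_r, my $p2c_w) or die "pipe: $!";   # parent -> child
        pipe(my $c2p_r, my $c2p_w) or die "pipe: $!";   # child -> parent

        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                  # child
            close $p2c_w; close $c2p_r;
            my $req = <$p2c_r>;           # waits for the parent to speak first...
            print {$c2p_w} "reply\n";
            exit 0;
        }
        close $p2c_r; close $c2p_w;
        my $reply = <$c2p_r>;             # ...while the parent waits for the child: deadlock
        print {$p2c_w} "request\n";
        waitpid($pid, 0);
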
      The "fewer memory leaks" claim isn't necessarily true either
      This one I stand by (in the statistical average case), particularly as the software grows more complex. As you mentioned, process termination is a natural "reset switch" that does wonders in keeping leaks down. But there's a whole host of other reasons that make memory leaks more common in threaded apps (a small sketch follows the list):

      - Harder to recognize and pinpoint (and thus fix), since reported memory usage is a macro-level observation of the whole process rather than a process-isolated one
      - More frequent use of resource locks (e.g., semaphores), reference counting, and shared resources in threaded apps, making it easier to leave resources in a limbo state
      - Maybe a redundant point, but the increased coding complexity of threads raises the odds of a coding error
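
      As a made-up illustration of the last point, here's a threaded snippet that leaks simply because nobody joins or detaches the finished threads:

        use strict;
        use warnings;
        use threads;

        # Each created thread keeps its own interpreter clone alive
        # until it is joined or detached, so memory grows on every
        # iteration of this loop.
        for my $i (1 .. 10) {
            threads->create(sub { return $i * 2 });
            # missing: ->join or ->detach
        }

        # The fix: reap finished threads (and their memory) explicitly.
        $_->join for threads->list(threads::joinable);

      In the forked equivalent, each child's exit would have reclaimed all of that automatically.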

      As with all things involving coding, _any_ piece of code could be done "right" with no memory leaks, but I believe it's both much easier to introduce leaks and harder to pinpoint and fix them in threaded apps than in standalone processes.

        I agree in general, although I don't have enough data to analyze anything statistically. I'm just pointing out that "less likely" is not the same as "cannot happen" and that "probably fewer" doesn't mean "definitely fewer". Your earlier post seemed absolute. The same programmer writing the same project in the two different methods seems likely to experience fewer leak and deadlock problems with processes than with threads. That's not necessarily the case, though. "Necessarily" is the key word.
