Erlang "threads" are also called Erlang "processes" and they neither have shared data/state nor wholesale copies of data.
Scant, simplistic, and largely inaccurate since the release of R11B in 2006. Rather more inaccurate since the release of R13B.
Erlang "processes" and Erlang "threads" are entirely different beasts. Indeed, there is no such concept as an Erlang thread as such. And, like Java green threads, (and Coros) Erlang processes were (and still are, but I'll get back to that), entirely user-space entities and as such are neither processes nor threads in the conventional (OS) sense.
However, circa 2004, the lack of SMP scalability was recognised as a significant limitation, and development was started to address it, culminating in the R11* releases of the VM. The approach taken was to start one (kernel) thread per core, feeding off a single shared (note that word) queue (and that one too). Each thread is a separate interpreter that takes runnable processes off the shared queue and executes them until they either a) finish; b) block; or c) error.
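In today's terms, that R11 design is the classic shared-run-queue scheduler. Here is a minimal sketch of the idea in Python (my own illustrative names; this is not Erlang's actual implementation, and real work items would be resumable processes, not one-shot callables):

```python
import os
import queue
import threading

run_queue = queue.Queue()   # the single shared queue; its internal lock is the contention point

results = []
results_lock = threading.Lock()

def scheduler():
    # One of these runs per core; each is a separate "interpreter".
    while True:
        proc = run_queue.get()      # every scheduler thread contends on this one queue
        if proc is None:            # sentinel: shut this scheduler down
            return
        try:
            proc()                  # run the "process" until it a) finishes...
        except Exception:
            pass                    # ...or c) errors: the process dies, the scheduler lives on

threads = [threading.Thread(target=scheduler) for _ in range(os.cpu_count())]
for t in threads:
    t.start()

# Feed ten tiny "processes" into the shared queue.
for i in range(10):
    def proc(i=i):
        with results_lock:
            results.append(i * i)
    run_queue.put(proc)

for _ in threads:                   # one shutdown sentinel per scheduler thread
    run_queue.put(None)
for t in threads:
    t.join()
```

The single `run_queue` is exactly the bottleneck described next: every scheduler thread must take the same lock for every dispatch.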
Now, it was quickly realised that the shared queue (and the associated locking) was a significant drag on performance, so, having got it working, they set about improving it. To that end they developed the R13B VM, which uses a separate queue for each interpreter, thus avoiding some of the lock contention. To achieve this, they had to add "process migration logic". That is, migration of Erlang "processes", not OS processes. And "migration logic" means moving "processes" to other queues if the current queue has more than some pre-configured maximum number of "runnable processes" (again: Erlang "processes", not OS processes!).
Now, back to your "no wholesale copying of data". As Erlang is a functional language--with immutable variables--every time you send a message to a "process" that causes it to (for example) append a character to a string; push to an array; add, change or remove a key/value pair in a hash; or add, remove or (say) reverse the order of elements within a list, it (at least notionally) copies the entire data structure.
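You can mimic that "every update is a new copy" semantic with Python's immutable tuples (a crude stand-in for Erlang terms, but it makes the point):

```python
# Erlang-style immutable update, mimicked with tuples:
# "appending" never mutates; it (notionally) builds a whole new structure.
def append_elem(lst, x):
    return lst + (x,)       # a brand-new tuple; the original is untouched

orig = (1, 2, 3)
new = append_elem(orig, 4)
```

`orig` is still `(1, 2, 3)` afterwards; `new` is a distinct object. Taken literally, every such update pays for a full copy, which is exactly why no real implementation takes it literally.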
Of course, we know that in reality such wholesale copying is impractical, and, as in (for example) Haskell, that notional immutability is enforced at the language level but achieved by "smoke & mirrors" at the implementation level. So Erlang's "message queues" are basically just linked lists of heap-allocated memory structures (as might be used in C--I wonder what language Erlang is implemented in?). In other words: shared state at the OS level.
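That "linked list of heap-allocated structures" is the plain old C mailbox pattern. Sketched here in Python for consistency with the earlier examples (the class names are mine; Erlang's ERTS does this in C with raw pointers):

```python
# A C-flavoured singly linked list mailbox: each message is a
# heap-allocated node; send() links it onto the tail, receive()
# unlinks from the head (FIFO), both in O(1).
class Node:
    __slots__ = ("msg", "next")
    def __init__(self, msg):
        self.msg = msg
        self.next = None

class Mailbox:
    def __init__(self):
        self.head = self.tail = None

    def send(self, msg):
        node = Node(msg)            # "heap-allocate" the message node
        if self.tail:
            self.tail.next = node   # link onto the tail
        else:
            self.head = node        # first message: head and tail coincide
        self.tail = node

    def receive(self):
        node = self.head
        if node is None:
            return None             # empty mailbox
        self.head = node.next
        if self.head is None:
            self.tail = None
        return node.msg

mb = Mailbox()
mb.send("hello")
mb.send("world")
```

Both ends of such a structure live in the same address space, which is the point being made: whatever the language-level story, at the OS level it is shared memory guarded by locks.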
And, should you doubt any of this, please download and read this pdf.
Now, does any of that sound familiar?
One thread per core. Queue(s) to facilitate communications. The absence of direct access to shared state. Internal locking.
Does that sound anything like the iThreads model I've been talking about?
I chose Erlang as one of my examples, because I happen to have made a something of a study of it.
So, iThreads are actually more like fork than like any of these things that are sometimes called "threads" in other languages.
Congratulations on dropping the phrase "fork emulation". Threading in Erlang is quite different from threading in C. Why should threading in Perl have to be the same?
And doesn't the above (or the pdf, if you bothered to read it) sound a lot like the very type of thread-pool-plus-queues mechanism I (amongst others) have been advocating here for years?
I tend to focus more on the details of communication between the parts (solid interfaces lead to solid systems), and so don't tend to reach for the convenient "share a few variables willy-nilly" framework. But iThreads have advantages and can be used effectively, even on Unix.
If I didn't know better, I'd suggest that we might be singing from the same song sheet--though perhaps with different accents.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.