http://www.perlmonks.org?node_id=1030461


in reply to Re^3: Parallel::Forkmanager and large hash, running out of memory
in thread Parallel::Forkmanager and large hash, running out of memory

Got it...I guess I need to lay off the crack pipe...Thanks!

Of course, this brings me to the next challenge: concurrency. Since I will be doing this work in parallel, I will have to see how SQLite handles concurrency... or perhaps lean towards MySQL...

Thanks again to you (and all others who responded) for the help!


Replies are listed 'Best First'.
Re^5: Parallel::Forkmanager and large hash, running out of memory
by pokki (Monk) on Apr 24, 2013 at 18:37 UTC

    The relevant doc is here.

    The tl;dr version: any number of processes can read at the same time. When a process wants to write to the database, the engine waits for the current readers to finish, then gives exclusive access to the writer process. No one can read while the write is in progress. When the write is finished, everybody can start reading again.
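    A rough illustration of that locking behavior, sketched with Python's sqlite3 module against a throwaway on-disk database (this assumes SQLite's default rollback-journal mode; the table name and file path are made up for the demo):

    ```python
    import os
    import sqlite3
    import tempfile

    # Throwaway on-disk database (in-memory databases are per-connection,
    # so they can't demonstrate cross-connection locking).
    path = os.path.join(tempfile.mkdtemp(), "demo.db")

    # timeout=0: fail immediately instead of retrying when the db is locked.
    # isolation_level=None: autocommit, so we control transactions explicitly.
    writer = sqlite3.connect(path, timeout=0, isolation_level=None)
    reader = sqlite3.connect(path, timeout=0, isolation_level=None)

    writer.execute("CREATE TABLE t (n INTEGER)")

    # The writer opens a transaction and takes the write lock up front.
    writer.execute("BEGIN IMMEDIATE")
    writer.execute("INSERT INTO t VALUES (1)")

    # Readers still work while the write is pending (rollback-journal mode);
    # the uncommitted insert is invisible to them.
    before_count = reader.execute("SELECT count(*) FROM t").fetchone()[0]
    print(before_count)  # 0

    # A second would-be writer is refused (with timeout=0, immediately).
    err = None
    try:
        reader.execute("INSERT INTO t VALUES (2)")
    except sqlite3.OperationalError as e:
        err = e
    print(err)  # database is locked

    writer.execute("COMMIT")

    # After the commit, everyone sees the new row.
    after_count = reader.execute("SELECT count(*) FROM t").fetchone()[0]
    print(after_count)  # 1
    ```

    For a fork-heavy workload like the one in this thread, the practical upshot is that concurrent writers must be prepared to wait or retry (a nonzero busy timeout); switching the database to WAL mode also lets readers proceed while a write is in progress.
    
    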