PerlMonks |
Re: memory persistance
by sundialsvc4 (Abbot)
on Aug 26, 2011 at 13:56 UTC ( [id://922635] )
To my way of thinking, “IPC sharing” modules are intended, as the acronym implies, for Inter-Process Communication. In other words, two or more processes are running at the same time and they need to communicate with one another reliably in real-time. Which is not the case here. What you seem to be describing is a persistence mechanism, for preserving data between runs that occur many hours apart. “Storing the data in memory” would, in my view, be entirely unsuitable.

SQLite is an excellent suggestion because, even though it is “a disk file,” it is also a database: you can keep any number of tables inside the thing, and you can run queries against it, all without involving a database server. (If you do have a database server at hand, and a friendly DBA, you might also consider using a database on that server for this and any similar purposes.)

You might even consider using this storage, not merely for “the parameters needed for the next run,” but for a log about previous runs that have been made, and/or for anything that you might need to support re-runs.
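As a minimal sketch of that idea, using DBI with DBD::SQLite: one table holds the parameters the next run needs, another holds a run log. The file name, table layout, and the `last_processed_id` parameter are all illustrative assumptions, not something from the original thread.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# Hypothetical database file; any path the script can write to will do.
my $dbh = DBI->connect( "dbi:SQLite:dbname=run_state.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );

# One table for "the parameters needed for the next run"...
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS run_state (
        name  TEXT PRIMARY KEY,
        value TEXT
    )
});

# ...and one for a log of previous runs.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS run_log (
        started_at TEXT,
        note       TEXT
    )
});

# At the end of a run, save state (REPLACE overwrites any earlier value).
$dbh->do( "REPLACE INTO run_state (name, value) VALUES (?, ?)",
    undef, "last_processed_id", 12345 );
$dbh->do( "INSERT INTO run_log (started_at, note) VALUES (datetime('now'), ?)",
    undef, "normal run" );

# Hours later, a fresh process picks up where the last one left off.
my ($last_id) = $dbh->selectrow_array(
    "SELECT value FROM run_state WHERE name = ?",
    undef, "last_processed_id" );
print "Resuming from id $last_id\n";

$dbh->disconnect;
```

Because everything lives in one ordinary disk file, the state survives reboots, and a re-run tool can query `run_log` with plain SQL.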
If you do use SQLite, here’s one important caveat from experience: use transactions. If you don’t, SQLite treats every single statement as its own implicit transaction and synchronously flushes it to disk before moving on, thus dropping performance to disk-drive speeds. Wrap even small batches of requests in an explicit transaction so that SQLite knows it can buffer the data in memory for a while and do “lazy writes,” with a dramatic improvement in performance. Also note that SQLite (IMHO...) really isn’t architected with “many simultaneous writers” in mind: a writer locks the whole database file, so contending writers will see “busy” errors or stall.
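The difference is easy to demonstrate with DBI’s `begin_work` / `commit` pair, which is how an explicit transaction is expressed when `AutoCommit` is on. The file name and the 10,000-row table below are illustrative assumptions; without the `begin_work`, each `execute` would get its own implicit transaction and its own synchronous flush to disk.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

unlink "bulk.db";    # start fresh so the count below is predictable

my $dbh = DBI->connect( "dbi:SQLite:dbname=bulk.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );
$dbh->do("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)");

my $sth = $dbh->prepare("INSERT INTO items (payload) VALUES (?)");

# One explicit transaction around the whole batch: SQLite buffers the
# inserts and performs a single synchronous flush at commit time,
# instead of one flush per row.
$dbh->begin_work;
$sth->execute("row $_") for 1 .. 10_000;
$dbh->commit;

my ($count) = $dbh->selectrow_array("SELECT COUNT(*) FROM items");
print "$count rows inserted\n";

$dbh->disconnect;
```

On spinning disks the non-transactional version of this loop can take minutes rather than a fraction of a second, which is the performance cliff described above.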
In Section: Seekers of Perl Wisdom