Threads and DBD::SQLite?
by BerntB (Deacon) on Dec 15, 2013 at 03:45 UTC

BerntB has asked for the wisdom of the Perl Monks concerning the following question:
Edit: Could just using (very large) MEMORY tables in MySQL solve this trivially? :-)
There is an unpleasant old hack at work, which I'd like to replace with a nice Perl server.
The basic need is very quick read access to a few GB of data over sockets (i.e. no disk access). The data is too big to duplicate across multiple instances. I am thinking of in-memory SQLite tables (read access only), initialized at startup. SQLite can share a single in-memory database between multiple connections ("shared cache"), but only within the same process.
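As a sketch of the shared-cache idea: recent DBD::SQLite versions accept a `uri=` form in the DSN, which lets two handles in the same process attach to one named in-memory database. The database name `memdb1` here is just an illustrative choice, and this assumes a DBD::SQLite new enough to support URI filenames:

```perl
use strict;
use warnings;
use DBI;

# Two connections in the same process sharing one in-memory database
# via SQLite's shared-cache URI syntax.
my $dsn = "dbi:SQLite:uri=file:memdb1?mode=memory&cache=shared";

my $writer = DBI->connect( $dsn, "", "", { RaiseError => 1 } );
$writer->do("CREATE TABLE lookup (k TEXT PRIMARY KEY, v TEXT)");
$writer->do("INSERT INTO lookup VALUES ('foo', 'bar')");

# A second handle sees the same data -- the database lives in the
# process-wide shared cache, not inside either individual connection.
my $reader = DBI->connect( $dsn, "", "", { RaiseError => 1 } );
my ($v) = $reader->selectrow_array(
    "SELECT v FROM lookup WHERE k = ?", undef, 'foo' );
print "$v\n";
```

With a plain `dbname=:memory:` DSN each connection would instead get its own private, empty database, which is exactly the limitation the shared cache works around.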
The problem is the "in the same process" part, since that implies threads! That is scary, as is relying on DBD features not mentioned in the DBD::SQLite documentation.
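For concreteness, the threaded version would presumably look something like the sketch below: load the data once in the parent, then have each ithread open its own handle to the shared-cache database, since DBI handles must never be used from a thread other than the one that created them. Whether DBD::SQLite is actually safe in this arrangement is precisely the open question here, so treat this as untested:

```perl
use strict;
use warnings;
use threads;
use DBI;

my $dsn = "dbi:SQLite:uri=file:memdb1?mode=memory&cache=shared";

# Parent: initialize the shared in-memory database once.
my $dbh = DBI->connect( $dsn, "", "", { RaiseError => 1 } );
$dbh->do("CREATE TABLE IF NOT EXISTS lookup (k TEXT PRIMARY KEY, v TEXT)");
$dbh->do("INSERT OR IGNORE INTO lookup VALUES ('foo', 'bar')");

# Workers: each thread connects for itself; the $dbh above must not
# be touched inside the threads.
my @workers = map {
    threads->create(
        sub {
            my $h = DBI->connect( $dsn, "", "", { RaiseError => 1 } );
            my ($v) = $h->selectrow_array(
                "SELECT v FROM lookup WHERE k = ?", undef, 'foo' );
            return $v;
        }
    );
} 1 .. 4;

print $_->join, "\n" for @workers;
```

This assumes a threaded perl and a thread-safe SQLite build; if either assumption fails, a process-per-worker design with a real disk-backed (or MySQL MEMORY) table is the fallback.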
That is the background. There are three questions which would take me time to answer on my own, and I hope people here can answer them directly:
Notes added afterwards: Yes, I do need multi-process/thread support; a previous query might take too long to wait for (and there will be processor cores to spare). Process startup will be slow, since there is a lot of data. The data is already in a serious DB, but queries there are too slow. The present query format would be easy to translate to (SQLite's) SQL. Running the queries against tie()'d data stored in shared memory would require unpacking all the data just to check for matches; that is simply too slow.
I am doing this as a hobby rather than at work: partly so I don't have to justify it, partly because I don't know if it will work, and partly so I can put it on CPAN (or post it as a code example on PerlMonks, if it is short enough).