stevieb has asked for the wisdom of the Perl Monks concerning the following question:

Perlmonks is usually slow on weekends, so I thought I'd fire off another question.

I have an SQLite database that will be used to store data every 3-10 seconds. This will be happening on a Raspberry Pi, so the 'disk' is an SD card. What I'm hoping to do is load that file into memory, and use it there, then create an event to write it to disk every X minutes, making it permanent (if data is somehow lost in the meantime, it isn't critical).

Everything writing to the DB will be in Perl, and all processes will be within a single process umbrella.

Is this a Linux question, or can this somehow be done (copying the db file to memory) within Perl when my application starts up? If so, can I please get some recommendations on a practical way of doing this, or perhaps pointers to modules that may already do it?

Replies are listed 'Best First'.
Re: Putting an SQLite DB file into memory
by clueless newbie (Deacon) on Sep 25, 2016 at 23:12 UTC
    Hi, stevieb. I do this all the time, connect to a memory db as
    $dbh=DBI->connect('dbi:SQLite:database=:memory:','','',{ PrintError=>0, RaiseError=>1 });
    then use
    $dbh->sqlite_backup_from_file( $filename ) or $dbh->sqlite_backup_to_file( $filename )
    as desired.
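    Putting those pieces together, a minimal sketch of the whole pattern might look like this (the file name `env_data.db` and the `readings` table are assumptions for illustration, not from the original post):

    ```perl
    use strict;
    use warnings;
    use DBI;

    my $disk_file = 'env_data.db';

    # Connect to an in-memory SQLite database.
    my $dbh = DBI->connect(
        'dbi:SQLite:dbname=:memory:', '', '',
        { PrintError => 0, RaiseError => 1 },
    );

    # Seed the in-memory DB from the on-disk copy, if one exists.
    $dbh->sqlite_backup_from_file($disk_file) if -e $disk_file;

    # Work entirely in RAM...
    $dbh->do('CREATE TABLE IF NOT EXISTS readings (ts INTEGER, temp REAL)');
    $dbh->do('INSERT INTO readings VALUES (?, ?)', undef, time, 21.5);

    # ...and periodically (e.g. from a timer event) flush the whole
    # database back to the SD card in one shot.
    $dbh->sqlite_backup_to_file($disk_file);
    ```

    The backup methods copy the entire database, so a flush every few minutes costs one bulk write rather than thousands of small ones.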

      This works brilliantly, and the feature is built right in! This is more than what I was looking for. Thanks clueless newbie :)

      I'm finally writing the tests for my app, and this works perfectly for testing, instead of copying a db file then removing it in each test file.

      I keep all configuration inside of the db as well as the data (generated every five seconds), so I think what I'm going to do is have the config aspect in a separate db on disk, and the streaming data in memory which I back up with a timed event.

      For those wondering, this is an application that gathers grow room environment information (temp and humidity so far, later to include water level, pH, EC/PPM etc. for hydroponics, and soil moisture content for soil) and writes it to what could be a volatile database. However, the configuration aspect of the system turns electrical systems (lights, fans, humidifiers) on/off, so that part of the db must be reliably on disk: in case of a system crash/reboot, it needs to remember everything (timing, on/off configuration etc.).

Re: Putting an SQLite DB file into memory
by tybalt89 (Prior) on Sep 25, 2016 at 23:14 UTC

    Does the Raspberry Pi have /tmp on a tmpfs file system? If so, just put your working DB there, and every so often (in a separate process) do:

    sqlite3 /tmp/workingDB .dump | sqlite3 /actualfilesystem/newpermanentDB

    There's even no need to close your connection from perl.
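    That dump-and-pipe snapshot could be wrapped in Perl along these lines (a sketch; the paths in the comment are the hypothetical ones from the post above, and it assumes the `sqlite3` CLI is on the PATH):

    ```perl
    use strict;
    use warnings;

    # Snapshot a live working database to a permanent on-disk copy.
    sub snapshot_db {
        my ($working, $permanent) = @_;

        # .dump emits SQL that recreates the database; piping it into a
        # second sqlite3 builds a fresh copy without us having to close
        # our own connection.
        unless (system("sqlite3 $working .dump | sqlite3 $permanent.new") == 0) {
            warn "snapshot failed: $?";
            return;
        }

        # Swap the fresh copy into place so readers never see a
        # half-written file.
        rename "$permanent.new", $permanent
            or warn "rename failed: $!";
    }

    # e.g. call snapshot_db('/tmp/workingDB', '/actualfilesystem/newpermanentDB')
    # from a timer every few minutes.
    ```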

      I run Slackware on my Pis, and of course it supports tmpfs. I run one Pi with MJPEG-streamer, and with the cam in 'burst' mode it dumps the files to a tmpfs; everything stays in memory and never touches the SD card. It works really well, and it's been running for over a year now with no issues. tmpfs is the way to go with this.


Re: Putting an SQLite DB file into memory
by Laurent_R (Canon) on Sep 26, 2016 at 06:16 UTC
    Not a direct answer to your question, but I am wondering what the lifetime of your SD card will be if you write to it every 3-10 seconds. A hard disk might be a more reliable and more robust solution.

      In general, modern SD cards can be fully re-written over 100,000 times before failure, and some up to 1,000,000 times. That's a minimum of a decade of hourly full-overwrites before you start getting problems.

      Buy a decent quality SD card and you should get a lifetime warranty (see "Short Answer From Lexar").

        Well, the OP mentioned:
        I have an SQLite database that will be used to store data every 3-10 seconds.
        If we take a mean value of 6 seconds, that's 10 times per minute, or 600 times per hour, or 14,400 times per day, so that you reach 100,000 writes within a week or 1,000,000 writes in less than three months. That's not quite a decade.
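        The arithmetic can be checked in a couple of lines (using the 6-second mean from above):

        ```perl
        use strict;
        use warnings;

        # One write every 6 seconds, around the clock.
        my $writes_per_day = 24 * 60 * 60 / 6;              # 14,400
        my $days_to_100k   = 100_000   / $writes_per_day;   # about 7 days
        my $days_to_1m     = 1_000_000 / $writes_per_day;   # about 69 days

        printf "%d writes/day; 100k in %.1f days; 1M in %.1f days\n",
            $writes_per_day, $days_to_100k, $days_to_1m;
        ```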

        I have duly noticed that you say "fully re-written", and I don't know the exact significance of "fully" here, but I suspect that a database file is completely rewritten each time it is updated (which of course doesn't mean that the card is fully rewritten, but the database file may have a size which is significant compared to the full card size).

        Or am I missing something?