http://www.perlmonks.org?node_id=819537

iaw4 has asked for the wisdom of the Perl Monks concerning the following question:

Could someone please post a Perl skeleton for fast insert/lookup in a really simple but persistent database? Something like:
my $numofwords = 50000000;
my %randomword;
for (my $i=0; $i<$numofwords; ++$i) {
    my @chars = ( "A" .. "Z", "a" .. "z", 0 .. 9, qw(! @ $ % ^ & *) );
    my $rin = join("", @chars[ map { rand @chars } ( 1 .. 5 ) ]);
    if (defined($randomword{$rin})) {
        $randomword{$rin} .= ",$i";
    } else {
        $randomword{$rin} = $i;
    }
}
So, my database can be very large (50 million words), but its structure is very simple. A key can have multiple records, but they are not complex. Insertion and lookup speed are important.

I have tried a version with DBI and SQLite, but insertion was very, very slow. Then again, this is the first time I am using a database, so my attempts were rather naive.

Advice appreciated. /iaw

Replies are listed 'Best First'.
Re: fast simple DB (sqlite?) skeleton?
by talexb (Chancellor) on Jan 25, 2010 at 17:55 UTC

    To follow up on the answer you have already, try Googling for 'bulk load database'.

    Database designers understand that it may be necessary to load a database with thousands, millions or billions of records during setup. The INSERT command is meant for singleton INSERTs, but there are certainly other options for a bulk load. I used bcp (bulk copy) with Ingres going on about 12 years ago, and MySQL has a LOAD DATA INFILE statement; a rough sketch of driving it from Perl follows.
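
    A minimal sketch, assuming DBD::mysql with local infile enabled, a tab-separated words.txt, and an existing allwords table (the file and table names are illustrative, not from this thread):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # mysql_local_infile=1 allows the client to send a local file to the server
    my $dbh = DBI->connect(
        'dbi:mysql:database=test;mysql_local_infile=1',
        'user', 'password', { RaiseError => 1 },
    );

    # one statement loads the whole file, instead of millions of INSERTs
    $dbh->do( q{
        LOAD DATA LOCAL INFILE 'words.txt'
        INTO TABLE allwords
        FIELDS TERMINATED BY '\t'
        LINES TERMINATED BY '\n'
        ( word, record )
    } );

    $dbh->disconnect;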

    Just do a little research and you'll find a solution that works for you.

    Alex / talexb / Toronto

    Team website: Forex Chart Monkey, Forex Technical Analysis and Pickpocket Prevention

    "Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

Re: fast simple DB (sqlite?) skeleton?
by WizardOfUz (Friar) on Jan 25, 2010 at 17:24 UTC
    I have tried a version with DBI and SQLite, but insertion was very, very slow.

    Have you read this?

      Yes. I added a

      $dbh->do( "PRAGMA synchronous=OFF" );

      at the start (just after connecting), but it still isn't very fast. Maybe I added it in the wrong spot, or something else went wrong?

      I should add that I can estimate how big my database will be -- is there a way to tell SQLite to preallocate?

      Would DBM be better and faster for this sort of application? (I am on OSX.)

      /iaw

        This doesn't look too bad:

        #!/usr/bin/perl

        use strict;
        use warnings;

        use Benchmark ':hireswallclock';
        use DBI ();

        my $DBH;

        setup_db();
        Benchmark::timethis( 1, \&db_benchmark );

        sub db_benchmark {
            my $sth = $DBH->prepare( 'INSERT INTO benchmark ( id ) VALUES ( ? )' );
            $DBH->begin_work;
            my $i = 0;
            while ( $i < 1_000_000 ) {
                $sth->execute( $i );
                $i++;
            }
            $DBH->commit;
            return;
        }

        sub setup_db {
            $DBH = DBI->connect(
                "dbi:SQLite:dbname=benchmark.sqlite", "", "",
                { 'RaiseError' => 1 }
            );
            if ( ! $DBH->tables( undef, undef, 'benchmark', 'TABLE' ) ) {
                $DBH->do( 'CREATE TABLE benchmark ( id INTEGER )' );
            }
            return;
        }

        Results:

        #1 timethis 1: 10.4524 wallclock secs ( 9.35 usr + 0.16 sys = 9.51 CPU) @ 0.11/s (n=1)
        #2 timethis 1: 9.9961 wallclock secs ( 9.31 usr + 0.18 sys = 9.49 CPU) @ 0.11/s (n=1)
        #3 timethis 1: 9.85224 wallclock secs ( 9.29 usr + 0.17 sys = 9.46 CPU) @ 0.11/s (n=1)

        System: AMD Athlon(tm) Dual Core Processor 4850e, HDD WD10EADS

Re: fast simple DB (sqlite?) skeleton? (PostgreSQL)
by erix (Prior) on Jan 25, 2010 at 20:40 UTC

    Speed is going to depend heavily on the disk(s).

    Using SQLite, what do you call slow? Could you please mention your hard disk type along with the achieved inserts per second?

    PostgreSQL might be an option, even if it isn't as delightfully simple as SQLite.

    For bulk loads, Postgres has COPY, and I used it on a version of your program:

    $ cat ./generate_words.pl
    #!/bin/env perl
    use strict;

    my $numofwords = 50_000_000;

    for (my $i=0; $i<$numofwords; ++$i) {
        my @chars = ( "A" .. "Z", "a" .. "z", 0 .. 9, qw(! @ $ % ^ & *) );
        my $rin = join("", @chars[ map { rand @chars } ( 1 .. 5 ) ]);
        print $rin, "\n";
    }
    $ time perl ./generate_words.pl > words.txt

    real    7m54.686s

    $ time ( < words.txt psql -d test -c "
        drop table if exists words;
        create table words (word text, id serial);
        copy words(word) from stdin csv;
    " )

    real    1m16.223s

    So, generating the data took 8 minutes; loading the 50M rows took just over a minute, and adding an index took another 9 minutes (not shown). (This was on a raid10, 8 disks, I think; just SATA.)
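
    The same COPY protocol can also be driven from inside Perl. A minimal sketch, assuming DBD::Pg and the words table above (make_word() is a hypothetical stand-in for the word generator):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:Pg:dbname=test', '', '', { RaiseError => 1 } );

    # hypothetical generator: returns a word, or undef when finished
    my $n = 0;
    sub make_word { return $n++ < 1_000 ? sprintf( 'word%06d', $n ) : undef }

    # start a COPY ... FROM STDIN, then stream rows through the handle
    $dbh->do( 'COPY words(word) FROM STDIN' );
    while ( defined( my $word = make_word() ) ) {
        $dbh->pg_putcopydata( "$word\n" );
    }
    $dbh->pg_putcopyend;    # finish the COPY
    $dbh->disconnect;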

Re: fast simple DB (sqlite?) skeleton?
by repellent (Priest) on Jan 26, 2010 at 05:27 UTC
    I'm surprised no one mentioned Berkeley DB when you wanted a fast, simple, and hash-like DB.

    See On-disk hash/array data structures for db_ref().
    use warnings;
    use strict;

    my $randomword = db_ref("HASH");

    my $numofwords = 50000000;
    my @chars = ("A" .. "Z", "a" .. "z", 0 .. 9, qw(! @ $ % ^ & *));

    for my $i (0 .. $numofwords - 1) {
        my $rin = join("" => @chars[map { rand @chars } (1 .. 5)]);

        if (defined $randomword->{$rin}) {
            $randomword->{$rin} .= ",$i";
        } else {
            $randomword->{$rin} = $i;
        }
    }

    ## do some stuff with $randomword

    untie %{ $randomword };    # when you're done
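
    If you would rather not pull in the db_ref() helper, a minimal sketch of the same tied-hash idea using the core DB_File module directly (the explicit flags shown here are also DB_File's defaults; the file name is illustrative):

    use warnings;
    use strict;
    use Fcntl;      # for O_RDWR and O_CREAT
    use DB_File;

    # tie a hash to an on-disk Berkeley DB file; writes go straight to disk
    my %randomword;
    tie %randomword, 'DB_File', 'words.db', O_RDWR|O_CREAT, 0666, $DB_HASH
        or die "cannot open words.db: $!";

    $randomword{hello} = '42';
    print $randomword{hello}, "\n";

    untie %randomword;    # flush and close when you're done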
Re: fast simple DB (sqlite?) skeleton?
by bart (Canon) on Jan 26, 2010 at 12:10 UTC
    I have tried a version with DBI and SQLite, but insertion was very, very slow.

    Perhaps you're committing for every insert, perhaps through autocommit? If so, turn autocommit off, keep a bit of bookkeeping, and commit only every N inserts; a sketch of that bookkeeping follows.
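
    A minimal sketch, assuming the allwords table from elsewhere in this thread (next_record() is a hypothetical data source, stubbed out here):

    use strict;
    use warnings;
    use DBI;

    # AutoCommit => 0: nothing is written until we commit explicitly
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=words.db', '', '',
                            { RaiseError => 1, AutoCommit => 0 } );

    my $sth   = $dbh->prepare( 'INSERT INTO allwords ( word, record ) VALUES ( ?, ? )' );
    my $count = 0;

    # hypothetical data source: returns a (word, record) pair, empty list when done
    my @queue = ( [ 'aB3x!', 1 ], [ 'Qw9$z', 2 ] );
    sub next_record { my $r = shift @queue; return $r ? @$r : () }

    while ( my ( $word, $record ) = next_record() ) {
        $sth->execute( $word, $record );
        $dbh->commit unless ++$count % 10_000;    # commit every 10,000 inserts
    }
    $dbh->commit;       # don't forget the final partial batch
    $dbh->disconnect;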

      I had to deal with the same problem a few years ago using SQLite. It's a lot faster if you insert all of the data within a single transaction. Use:
      BEGIN TRANSACTION;
      INSERT .....
      COMMIT;
      This should speed up the process. Good luck.
Re: fast simple DB (sqlite?) skeleton?
by iaw4 (Monk) on Jan 26, 2010 at 21:35 UTC

    Thank you, everyone. I tried to put together a couple of examples of how to do this simply. (Bulk insert may or may not be appropriate, because if a record already exists, I want to add to it. This means I need to look up first, and then decide.)

    The results are actually quite interesting:
    method                        File Size    Speed
    DBM::Deep                    20,191,941    83 seconds
    Berkeley                      1,323,008    1 second
    DBI SQLite Synchronous=Off      950,272    21 seconds
    DBI SQLite                      950,272    118 seconds

    Of course, there may be better ways to tweak the routines -- I am an amateur at this. If I have made mistakes, or if I should try something simpler, please correct me. For now, these are nice examples that should work as simple skeletons and guides for others, showing how to read and write embedded databases under Perl. (The timings in the table of course do not reflect changing records; that is rare. I also have no idea how well or poorly these databases scale.)

    #!/usr/bin/perl -w
    use strict;
    use warnings FATAL => qw{ uninitialized };

    ################################################################
    my $numofwords = 50000;
    my @randomword;

    ################################################################
    ## not really a database: an in-memory hash, as a baseline
    ################################################################
    sub memhash {
        my %randomword;
        for (my $i=0; $i<$numofwords; ++$i) {
            if (defined($randomword{ $randomword[$i] })) {
                $randomword{ $randomword[$i] } .= ",$i";
            } else {
                $randomword{ $randomword[$i] } = "$i";
            }
        }
    }

    ################################################################
    ## the perl module
    ################################################################
    sub dbmdeep {
        use DBM::Deep;
        my $db = DBM::Deep->new( "dbm.db" );
        for my $i (0 .. $numofwords - 1) {
            my $rin = $randomword[$i];
            if (defined $db->{$rin}) { $db->{$rin} .= ",$i"; }
            else                     { $db->{$rin} = $i; }
        }
    }

    ################################################################
    ## the standard berkeley database
    ################################################################
    sub berkeley {
        use DB_File;
        my $filename = "berkeley.db";
        my %randomword;
        tie %randomword, "DB_File", $filename
            or die "cannot open $filename: $!\n";
        for my $i (0 .. $numofwords - 1) {
            my $rin = $randomword[$i];
            if (defined $randomword{$rin}) { $randomword{$rin} .= ",$i"; }
            else                           { $randomword{$rin} = $i; }
        }
        untie %randomword;
    }

    ################################################################
    ## sql is treated differently: it allows multiple rows per key,
    ## so we won't combine just yet, to give it an advantage.
    ################################################################
    sub dbisqlite_nopragma {
        use DBI;
        my $dbfile = 'sqlite_nopragma.db';   # your database file
        system("rm -rf $dbfile");
        my $dbh = DBI->connect(              # connect to your database, create if needed
            "dbi:SQLite:dbname=$dbfile",     # DSN: dbi, driver, database file
            "",                              # no user
            "",                              # no password
            { RaiseError => 1 },             # complain if something goes wrong
        ) or die $DBI::errstr;
        $dbh->do( "CREATE TABLE allwords ( word, record )" );
        # $dbh->do( "PRAGMA synchronous=OFF" );
        for (my $i=0; $i<$numofwords; ++$i) {
            $dbh->do( "INSERT INTO allwords VALUES ( '$randomword[$i]', '$i' )" );
        }
        $dbh->disconnect();
    }

    ################################################################
    sub dbisqlite_pragma {
        use DBI;
        my $dbfile = 'sqlite_pragma.db';     # your database file
        system("rm -rf $dbfile");
        my $dbh = DBI->connect(              # connect to your database, create if needed
            "dbi:SQLite:dbname=$dbfile",     # DSN: dbi, driver, database file
            "",                              # no user
            "",                              # no password
            { RaiseError => 1 },             # complain if something goes wrong
        ) or die $DBI::errstr;
        $dbh->do( "CREATE TABLE allwords ( word, record )" );
        $dbh->do( "PRAGMA synchronous=OFF" );
        for (my $i=0; $i<$numofwords; ++$i) {
            $dbh->do( "INSERT INTO allwords VALUES ( '$randomword[$i]', '$i' )" );
        }
        $dbh->disconnect();
    }

    ################################################################
    for (my $i=0; $i<$numofwords; ++$i) {
        my @chars = ( "A" .. "Z", "a" .. "z", 0 .. 9, qw(! @ $ % ^ & *) );
        my $rin = join("", @chars[ map { rand @chars } ( 1 .. 5 ) ]);
        push(@randomword, $rin);
    }
    print STDERR "[$0: created $numofwords words for testing]\n";

    system("rm -rf *.db");   # kill all prior test files

    use Benchmark ':hireswallclock';   ## unfortunately, this fails under OSX Mac Pro. always gives 0 secs.

    print time()."\t".localtime()."\n";
    print "memhash\n";
    Benchmark::timethis( 1, \&memhash );      # note: \&memhash (a code ref), not &memhash

    print time()."\t".localtime()."\n";
    print "dbmdeep\n";
    Benchmark::timethis( 1, \&dbmdeep );

    print time()."\t".localtime()."\n";
    print "berkeley\n";
    Benchmark::timethis( 1, \&berkeley );

    print time()."\t".localtime()."\n";
    print "sqlite nopragma\n";
    Benchmark::timethis( 1, \&dbisqlite_nopragma );

    print time()."\t".localtime()."\n";
    print "sqlite pragma\n";
    Benchmark::timethis( 1, \&dbisqlite_pragma );

    print time()."\t".localtime()."\n";

    PS: Under OSX, the timer seems broken.

    I thus tried to install DBM::Deep under Ubuntu Linux, but there I got:

    cpan[1]> install DBM::Deep
    CPAN: Storable loaded ok (v2.21)
    Going to read '/home/ivo/.cpan/Metadata'
      Database was generated on Tue, 26 Jan 2010 12:38:07 GMT
    CPAN: YAML loaded ok (v0.71)
    Going to read 122 yaml files from /home/ivo/.cpan/build/
    CPAN: Time::HiRes loaded ok (v1.9719)
    DONE
    Restored the state of none (in 1.2153 secs)
    Running install for module 'DBM::Deep'
    Running Build for R/RK/RKINYON/DBM-Deep-1.0014.tar.gz
      Has already been unwrapped into directory /home/ivo/.cpan/build/DBM-Deep-1.0014-7f188H
      Has already been made
    Running Build test
    * WARNING: Configuration was initially created with Module::Build
      version '0.280801' but we are now using version '0.3603'.
      If errors occur, you must re-run the Build.PL or Makefile.PL script.
    default not defined in test_types! at ./Build line 61
      RKINYON/DBM-Deep-1.0014.tar.gz
      ./Build test -- NOT OK
    //hint// to see the cpan-testers results for installing this module, try:
      reports RKINYON/DBM-Deep-1.0014.tar.gz
    Running Build install
      make test had returned bad status, won't install without force
    Failed during this command:
      RKINYON/DBM-Deep-1.0014.tar.gz               : make_test NO
    So I ended up resorting to timing it with printed seconds under OSX. Not pretty.

      Your SQLite benchmark is seriously flawed. You really, really should prepare() your INSERT statement and wrap everything into a single transaction. And please, don't use "synchronous=OFF".

      #!/usr/bin/perl

      use strict;
      use warnings;

      use Benchmark ':hireswallclock';
      use DB_File ();
      use DBI ();

      my $DB_FILE;
      my $DBH_SQLITE;

      my @CHARS = ( 'A' .. 'Z', 'a' .. 'z', 0 .. 9, qw( ! @ $ % ^ & * ) );
      my $NUM_RECORDS = 50_000;

      setup_dbfile();
      setup_sqlite();

      Benchmark::cmpthese( 10, {
          'berkeley' => \&benchmark_dbfile,
          'sqlite'   => \&benchmark_sqlite,
      } );

      untie %$DB_FILE;

      sub benchmark_dbfile {
          my $value = 0;
          while ( $value < $NUM_RECORDS ) {
              my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
              if ( exists $DB_FILE->{$key} ) {
                  $DB_FILE->{$key} .= ",$value";
              }
              else {
                  $DB_FILE->{$key} = $value;
              }
              $value++;
          }
          return;
      }

      sub benchmark_sqlite {
          my $sth = $DBH_SQLITE->prepare( 'INSERT INTO benchmark ( key, value ) VALUES ( ?, ? )' );
          $DBH_SQLITE->begin_work;
          my $value = 0;
          while ( $value < $NUM_RECORDS ) {
              my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
              $sth->execute( $key, $value );
              $value++;
          }
          $DBH_SQLITE->commit;
          return;
      }

      sub setup_sqlite {
          $DBH_SQLITE = DBI->connect(
              'dbi:SQLite:dbname=benchmark.sqlite', '', '',
              { 'RaiseError' => 1 }
          );
          if ( ! $DBH_SQLITE->tables( undef, undef, 'benchmark', 'TABLE' ) ) {
              $DBH_SQLITE->do( 'CREATE TABLE benchmark ( key VARCHAR, value INTEGER )' );
          }
          return;
      }

      sub setup_dbfile {
          my %data;
          tie %data, 'DB_File', 'berkeley.db' or die "$!";
          $DB_FILE = \%data;
          return;
      }

      Results:

               s/iter berkeley sqlite
      berkeley   4.59       --   -81%
      sqlite    0.877     423%     --

      This is not surprising, because the DB_File version has to perform an extra lookup (effectively a "SELECT") for every "INSERT".

        Thanks a lot, WizardOfUz.

        Let's presume that I wanted to mimic the code of my Berkeley database version, though, in which I first look up whether the key already exists and then combine values into the same key if it is already there. In this case, was my original assessment of SQLite's speed correct? That is, I first need to insert every record so that I can look up the next one.

        If this is true, then it seems to me that the Berkeley database would be a lot faster in situations where "bulk inserts" are not possible, while SQLite is much faster in situations where they are. (One possible workaround is sketched below.)
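
        One way to keep the append-to-existing-key behaviour in SQLite without a separate SELECT per row might be to try an UPDATE first and INSERT only when no row matched. A minimal sketch, assuming the allwords ( word, record ) table from the benchmarks above (an index on word would help the UPDATE; the stand-in data is illustrative):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:SQLite:dbname=words.db', '', '',
                                { RaiseError => 1 } );

        my @randomword = ( 'aB3x!', 'Qw9$z' );    # stand-in data; the real script fills this

        # try to append to an existing row first; insert only when nothing matched
        my $update = $dbh->prepare(
            q{UPDATE allwords SET record = record || ',' || ? WHERE word = ?} );
        my $insert = $dbh->prepare(
            q{INSERT INTO allwords ( word, record ) VALUES ( ?, ? )} );

        $dbh->begin_work;
        for my $i ( 0 .. $#randomword ) {
            my $rin = $randomword[$i];
            # execute() returns the number of rows changed ("0E0", i.e. zero, when none)
            my $rows = $update->execute( $i, $rin );
            $insert->execute( $rin, $i ) if $rows == 0;
        }
        $dbh->commit;
        $dbh->disconnect;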

        I have not installed MySQL, but I wonder if someone could tell me how fast or slow the same code would be in MySQL.

Re: fast simple DB (sqlite?) skeleton?
by tokpela (Chaplain) on Jan 26, 2010 at 11:16 UTC