
Re: DBI::SQLite slowness

by Cristoforo (Curate)
on Sep 20, 2013 at 02:19 UTC ( #1054932=note )

in reply to DBI::SQLite slowness

It is slow because you have AutoCommit set to 1, so SQLite commits a separate transaction for every insert. Change it to 0 and call $dbh->commit; after the foreach loop.
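A minimal sketch of the idea (the table and column names here are made up for illustration, and it assumes DBD::SQLite is installed): with AutoCommit => 0, all the inserts land in one transaction that is committed once after the loop.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# One transaction for the whole batch instead of one per INSERT.
my $dbh = DBI->connect( "dbi:SQLite:dbname=:memory:", "", "",
    { RaiseError => 1, AutoCommit => 0 } );

$dbh->do("CREATE TABLE seen (md5 TEXT PRIMARY KEY)");

# INSERT OR IGNORE silently skips rows that violate the PRIMARY KEY,
# so duplicates are filtered by the database itself.
my $sth = $dbh->prepare("INSERT OR IGNORE INTO seen (md5) VALUES (?)");
$sth->execute($_) for qw(aaa bbb ccc aaa);

$dbh->commit;    # single commit after the loop

my ($count) = $dbh->selectrow_array("SELECT COUNT(*) FROM seen");
print "rows: $count\n";    # 3 distinct values survive
$dbh->disconnect;
```

The prepared statement is reused for every row, which also avoids re-parsing the SQL on each iteration.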

Re^2: DBI::SQLite slowness
by Endless (Beadle) on Sep 20, 2013 at 12:23 UTC
    Brilliant! With that little fix, my speed is up to 2022 per second; that's almost workable, and I understand what was happening. Now time to start looking through the other suggestions.
      This is what I get on an Atom eee PC (1.6 GHz), after I removed
      use v5.16.0;
      and changed
      say "Total time: ", (time - $start); # 180 seconds
      to
      print "Total time: ", (time - $start); # 180 seconds
      time perl
      Total time: 5
      real	0m5.348s
      user	0m0.360s
      sys	0m0.820s
        I've never heard reports that 5.16.0 will significantly slow a program, or that say is so much slower. What's going on here?

      Well, 200 million records at a rate of 2000 per second is still 100,000 seconds, or almost 28 hours. That's still pretty long, isn't it? Having said that, you may be able to live with it; a full day of processing is manageable in a number of cases. Beware, though, that the rate might slow down as your database grows larger.

      If you are really only looking for filtering out duplicates, the ideas discussed by BrowserUk are probably much better than using a database.
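For the duplicate-filtering case, a minimal in-memory sketch (this is only an illustration of the general hash-based idea, not BrowserUk's exact code): key a hash on each record (or on a digest of it) and keep only first occurrences, with no database involved.

```perl
use strict;
use warnings;

# %seen maps each record to how many times we've met it; the post-increment
# returns 0 (false) on first sight, so only the first copy is kept.
my %seen;
my @unique;
for my $rec (qw(foo bar foo baz bar)) {
    push @unique, $rec unless $seen{$rec}++;
}
print join( " ", @unique ), "\n";    # foo bar baz
```

For 200 million records the hash keys would need to fit in memory, which is why digesting each record first (e.g. to a fixed-size MD5) helps keep the footprint predictable.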
