Re^5: Threads and DBD::SQLite?

by BrowserUk (Pope)
on Dec 16, 2013 at 10:50 UTC ( #1067308 )


in reply to Re^4: Threads and DBD::SQLite?
in thread Threads and DBD::SQLite?

erix, I can't thank you enough for doing that.

  1. Ouch! I swear I had it in there at one point, but looking back to where this code originated, it was missing or in the wrong place back then.

    Position and frequency of commits is one of the many variables I've been playing with trying to get a handle on how they affect the overall performance. For example, if you uncomment the line:

    # $n % 10 or $dbh->commit

    The insertion takes 4 times as long.

  2. $N is a command line parameter.

    My typical test runs consist of the command: for /L %t in (1,1,4) do @1045292 -N=1e6 -T=%t -Q=100, which is probably self-explanatory.
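The commit-frequency point in item 1 can be sketched in isolation (a minimal sketch, not the posted benchmark; the table name t, the row count, and the interval of 100 are all illustrative):

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite database; AutoCommit off so we control the
# transaction boundaries ourselves.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 0 } );

$dbh->do( 'CREATE TABLE t ( id INTEGER, val TEXT )' );

my $sth = $dbh->prepare( 'INSERT INTO t ( id, val ) VALUES ( ?, ? )' );
for my $n ( 1 .. 1000 ) {
    $sth->execute( $n, "row$n" );
    $n % 100 or $dbh->commit;    # commit every 100 rows, not every row
}
$dbh->commit;    # flush the final partial batch
```

Committing per row forces SQLite to complete a transaction per insert; batching amortises that cost across many rows, which is consistent with the 4x difference in insertion time noted above.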

From around $N = 10_000 and upwards, the threaded version *is* (much) faster.

Weird thing is, varying the number of threads (-T=n) makes no difference to the time required to process all the records:

C:\test>1049236 -N=1e6 -T=1 -Qsize=1000
Took 86.039947 seconds (for Populate DB with 1e6 records)
Took 2.232394 seconds (for Create primary index)
Took 16.897088 seconds (for Retrieve the whole lot)
1000000
Took 0.268509 seconds (for Read all 1e6 records using 1 threads)

C:\test>1049236 -N=1e6 -T=4 -Qsize=1000
Took 85.957917 seconds (for Populate DB with 1e6 records)
Took 3.164024 seconds (for Create primary index)
Took 16.881054 seconds (for Retrieve the whole lot)
1000000
Took 0.251258 seconds (for Read all 1e6 records using 4 threads)

Leastwise, not when reading the records for the second time. Comment out the "Retrieve the whole lot" block and then things get muddy again:

C:\test>1049236 -N=1e6 -T=1 -Qsize=1000
Took 86.307525 seconds (for Populate DB with 1e6 records)
Took 2.363253 seconds (for Create primary index)
1000000
Took 0.978504 seconds (for Read all 1e6 records using 1 threads)

C:\test>1049236 -N=1e6 -T=4 -Qsize=1000
Took 86.182529 seconds (for Populate DB with 1e6 records)
Took 3.017540 seconds (for Create primary index)
1000000
Took 1.925757 seconds (for Read all 1e6 records using 4 threads)

Once the single-threaded, first pass through the data is removed, it takes twice as long for 4 threads to read all the records as it does for 1 thread.

But then, why does the single-threaded read-all take 16 seconds, when it takes only 1 second to start an entirely new thread, make a completely new connection, and then read all the records?

Many questions and little consistency from which to derive answers at this point. If you can see any other obvious cock-ups I'd be grateful to hear of them.

(Ps. Any feel for how Pg would compare in a similar application?)


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.


Re^6: Threads and DBD::SQLite?
by erix (Vicar) on Dec 16, 2013 at 23:41 UTC

    Very interesting: I find that, with threads, Postgres is not even that much slower. Like you, I am a bit suspicious of the results, but I see no faults in the code.

    (All this is with fsync = off (in postgresql.conf), but that (as expected) only seems to influence the writing, not the reading)

    So, the comparison is again between an SQLite memory table and a normal PostgreSQL table.

    UPDATE: I believe there is an oversight in the threading block. ISTM that it doesn't retrieve enough. But I have to admit that I do not quite understand it at the moment, and I'll have to leave it at that for now (bedtime); more later, I hope...

    SQLite -  5.96526885 s  Populate db with 100000 records
    SQLite -  0.14969087 s  Create primary index
    SQLite -  0.54066706 s  Retrieve the whole lot
    SQLite -  0.06327486 s  Read all 100000 records using 2 threads
    SQLite -  6.06698895 s  Populate db with 100000 records
    SQLite -  0.14992595 s  Create primary index
    SQLite -  0.53164196 s  Retrieve the whole lot
    SQLite -  0.12085700 s  Read all 100000 records using 4 threads
    SQLite -  6.14582801 s  Populate db with 100000 records
    SQLite -  0.15506721 s  Create primary index
    SQLite -  0.55244803 s  Retrieve the whole lot
    SQLite -  0.17814016 s  Read all 100000 records using 8 threads
    Pg     - 29.35742998 s  Populate db with 100000 records
    Pg     -  0.53696704 s  Create primary index
    Pg     - 13.75696707 s  Retrieve the whole lot
    Pg     -  0.14590406 s  Read all 100000 records using 2 threads
    Pg     - 22.90790606 s  Populate db with 100000 records
    Pg     -  0.56523991 s  Create primary index
    Pg     - 13.84565902 s  Retrieve the whole lot
    Pg     -  0.13429594 s  Read all 100000 records using 4 threads
    Pg     - 19.43180919 s  Populate db with 100000 records
    Pg     -  0.53746796 s  Create primary index
    Pg     - 13.00171399 s  Retrieve the whole lot
    Pg     -  0.19895601 s  Read all 100000 records using 8 threads

    (This is PostgreSQL 9.4devel, btw)

      Thanks again erix. Those are some intriguing numbers.

      ISTM that it doesn't retrieve enough.

      Have you seen/disabled this in the posted code?:

      $Q->nq( $_ ) for (shuffle 1 .. $N)[ 0 .. $R ];
      #................................^^^^^^^^^^^


        OK, so my numbers for "Read all 100000 records using 4 threads" were really

        " Read ", $R * 100 / $N, " % of all records using 4 threads"

        That means the test was doing 0.1% of what the output line claimed.

        Here is a run without that mistake:

        # The offending code line changed to:
        # $Q->nq( $_ ) for (shuffle 1 .. $N);

        SQLite -  5.90120077 s - Populate db with 100000 rows
        SQLite -  0.15266800 s - Create primary index
        SQLite -  0.53022408 s - Read all 100000 rows, unthreaded
        SQLite -  7.23427510 s - Read all 100000 rows, 4 threads
        Pg     - 23.40106297 s - Populate db with 100000 rows
        Pg     -  0.53076911 s - Create primary index
        Pg     - 13.71553111 s - Read all 100000 rows, unthreaded
        Pg     -  7.40499115 s - Read all 100000 rows, 4 threads
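The effect of that slice can be demonstrated in isolation (a standalone sketch; the values of $N and $R here are illustrative stand-ins for the benchmark's parameters):

```perl
use strict;
use warnings;
use List::Util 'shuffle';

my $N = 1000;    # total number of records
my $R = 0;       # slice upper index: [ 0 .. $R ] keeps only $R + 1 elements

# The shape of the original queueing line: the list slice silently
# discards all but the first $R + 1 of the shuffled ids, so the
# threads end up reading only a fraction of the data.
my @queued = ( shuffle 1 .. $N )[ 0 .. $R ];

printf "queued %d of %d records (%.1f%%)\n",
    scalar @queued, $N, 100 * @queued / $N;
# queued 1 of 1000 records (0.1%)
```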

      According to this, the syntax for using URI filenames has changed.

      Using 'dbi:SQLite:dbname=file:memdb2?mode=memory&cache=shared' still creates a file; although the file is empty and the cache=shared appears to work.

      The new syntax is apparently: 'dbi:SQLite:uri=file:memdb2?mode=memory&cache=shared'. However, now the shared cache no longer seems to work.
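For reference, the two DSN forms being compared look like this (a sketch based on the description above; whether cache=shared actually works across connections appears to depend on the DBD::SQLite and SQLite versions in play):

```perl
use strict;
use warnings;
use DBI;

# Old style: everything after 'dbname=' is taken as a literal filename,
# so a file literally named 'file:memdb2?mode=memory&cache=shared'
# appears on disk (empty, as observed above).
my $dsn_old = 'dbi:SQLite:dbname=file:memdb2?mode=memory&cache=shared';

# URI style: the 'uri=' form asks DBD::SQLite to treat the string as an
# SQLite URI filename, so mode=memory and cache=shared are interpreted
# as query parameters rather than as part of the name.
my $dsn_new = 'dbi:SQLite:uri=file:memdb2?mode=memory&cache=shared';

my $dbh = DBI->connect( $dsn_new, '', '', { RaiseError => 1 } );
```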

      I'm bored with fighting dumb reasoning, so I'm putting this back in the deep freeze.


Re^6: Threads and DBD::SQLite?
by BerntB (Deacon) on Dec 17, 2013 at 02:35 UTC
    I must thank you again, BrowserUk: the question really was found by the right person! If you see me around, I'll buy you a beer/coffee/tea. :-)

    On one hand, these questions taught me to recommend the MySQL MEMORY storage engine at work. On the other, I'm intrigued and will look seriously at Perl threading sometime.
