http://www.perlmonks.org?node_id=682612


in reply to Re^5: Multiple write locking for BerkeleyDB
in thread Multiple write locking for BerkeleyDB

I did a second run with just updates, using placeholders and separate tables for in/out/cross. The rate went up, but only to about 45k updates/min. The server is running other MySQL jobs, so that might be the reason. Thanks for your suggestions anyway.
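
For reference, the update loop is roughly along these lines (DBI with one prepared statement per table; the table, column, and host-key names are simplified stand-ins for the real schema):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=traffic', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 1 });

    # One prepared UPDATE per table; with placeholders the SQL is parsed
    # once and only the bound values change on each execute().
    my %update = map {
        $_ => $dbh->prepare(
            "UPDATE counts_$_ SET count = count + ? WHERE host = ?")
    } qw(in out cross);

    # Sample records; in practice these come from the data being processed.
    my @records = (
        [ 'hostA', 'in',    1200 ],
        [ 'hostA', 'out',    800 ],
        [ 'hostB', 'cross',  300 ],
    );

    for my $rec (@records) {
        my ($host, $direction, $bytes) = @$rec;
        $update{$direction}->execute($bytes, $host);
    }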

Re^7: Multiple write locking for BerkeleyDB
by sgifford (Prior) on Apr 24, 2008 at 14:33 UTC
    I suspect you will get much faster results if you update just one table with all three pieces of data in it. Also, as others have said, make sure you have an index on whatever you use in your WHERE clause for the UPDATE. And, if your count_cross column is the sum of count_in and count_out, just leave it out and calculate it when you need it; this will help (at least a little) if reads are much less frequent than writes.
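
    In rough outline, and assuming count_cross really is just count_in + count_out, that would look something like this (the schema and names are only guesses at yours):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:mysql:database=traffic', 'user', 'pass',
                               { RaiseError => 1 });

        # One table instead of three, keyed (and therefore indexed) on the
        # column the WHERE clause uses:
        #
        #   CREATE TABLE counts (
        #       host      VARCHAR(64) NOT NULL PRIMARY KEY,
        #       count_in  BIGINT NOT NULL DEFAULT 0,
        #       count_out BIGINT NOT NULL DEFAULT 0
        #   );

        # A single prepared UPDATE touches one row per record:
        my $upd = $dbh->prepare(
            'UPDATE counts
                SET count_in = count_in + ?, count_out = count_out + ?
              WHERE host = ?');

        my ($host, $in_bytes, $out_bytes) = ('hostA', 1200, 800);   # sample values
        $upd->execute($in_bytes, $out_bytes, $host);

        # count_cross is derived at read time instead of being stored:
        my $totals = $dbh->selectall_arrayref(
            'SELECT host, count_in, count_out,
                    count_in + count_out AS count_cross
               FROM counts');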

    In my tests elsewhere in this thread, I was able to get about 132K updates/min out of MySQL on a 2-CPU 1GHz Pentium III, so unless your machine is much slower, there is probably some performance still to be found.