
Re: DBI and DBD::Pg: Assessing Performance

by moritz (Cardinal)
on Mar 15, 2012 at 08:09 UTC ( #959739 )

in reply to DBI and DBD::Pg: Assessing Performance

You can profile the Perl side with Devel::NYTProf (database access should show up under subroutines such as execute or st::execute), and also log the execution time of the statements at the DB level, then compare the two. If the Perl side takes much longer than what Postgres logs, you know the overhead is in DBI or DBD::Pg.
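For the DB-level timing, one option (a sketch, assuming a stock PostgreSQL setup; the config file location varies by install) is to enable statement-duration logging in postgresql.conf and run the script under NYTProf:

```
# postgresql.conf: log every statement together with its execution time
# (a threshold of 0 ms logs all statements; raise it to reduce noise)
log_min_duration_statement = 0

# then, on the shell, profile the Perl side:
#   perl -d:NYTProf yourscript.pl
#   nytprofhtml
```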

How many log entries are typically written? Is the IO load on the server high? Are you caching DB connections? (Reconnecting to the DB for each log event would be rather slow; a single insert on an existing connection should not be.)
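On the connection-caching point, a minimal sketch of what that might look like with DBI (the DSN, credentials, and table name here are made up for illustration):

```perl
use strict;
use warnings;
use DBI;

sub log_packet {
    my ($msg) = @_;
    # connect_cached returns the same handle on repeated calls,
    # so we avoid a fresh PostgreSQL connection per log event
    my $dbh = DBI->connect_cached(
        'dbi:Pg:dbname=radlog', 'radius', 'secret',
        { AutoCommit => 1, RaiseError => 1 },
    );
    # prepare_cached avoids re-preparing the statement each time
    my $sth = $dbh->prepare_cached(
        'INSERT INTO packet_log (message) VALUES (?)'
    );
    $sth->execute($msg);
}
```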

Replies are listed 'Best First'.
Re^2: DBI and DBD::Pg: Assessing Performance
by traiano (Novice) on Mar 15, 2012 at 10:21 UTC
    Hi moritz

    Thanks, I will wrap my head around this module and let you know what I find.

    The PostgreSQL server does an average of between 80 and 120 transactions per second (1:1 correspondence between transactions and logged packets, in this case).

    The IO load on the server doesn't seem to be problematic; for example, here is typical vmstat output during a high-activity period:

    root@rad1 /usr/local/etc/radiator# vmstat 1
     procs    memory        page                     disk   faults        cpu
     r b w   avm    fre    flt  re  pi  po   fr  sr  mf0    in   sy   cs  us sy id
     0 0 0  1787M  624M    247   0   0   0  230   0    2   119 1838 1240   0  0 99
     0 0 0  1787M  624M     10   0   0   0    4   0    7   148 2069 1487   1  0 99
     0 0 0  1787M  624M      1   0   0   0    0   0    2   120 1498 1247   0  0 99
     0 0 0  1794M  622M   2280   0   0   0 1893   0    0    37 1841  759   0  0 100
     0 0 0  1794M  622M   8178   0   0   0 8170   0    9    59 7580 1470   1  0 99
     0 0 0  1794M  622M   4931   0   0   0 4689   0    0    62 4527  928   0  1 99

      {Perl DBI, DBD} --> {PostgreSQL} --> {underlying server OS and hardware}
      If you can emulate your 80-120 transactions per second at this layer, you can at least find out whether Postgres is the bottleneck; a test script on this machine will show you something.

      On my old notebook PC with PostgreSQL 9.0.3, my test script shows that 120 transactions take more than one second. This is with the default Postgres configuration.

      count=120: with AutoCommit OFF, time=0.339s; with AutoCommit ON, time=1.266s
      And the test code is below.
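The test code did not survive in this copy of the thread; a rough sketch of what such a benchmark might look like (the DSN, credentials, and table name are placeholders, not from the original post):

```perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

my $count = 120;

for my $autocommit (0, 1) {
    my $dbh = DBI->connect(
        'dbi:Pg:dbname=test', 'user', 'pass',
        { AutoCommit => $autocommit, RaiseError => 1 },
    );
    my $sth = $dbh->prepare('INSERT INTO t (n) VALUES (?)');

    my $t0 = [gettimeofday];
    $sth->execute($_) for 1 .. $count;
    $dbh->commit unless $autocommit;    # one commit for the whole batch

    printf "AutoCommit %s: %.3fs\n",
        $autocommit ? 'ON' : 'OFF', tv_interval($t0);
    $dbh->disconnect;
}
```

With AutoCommit on, each insert is its own transaction and must be flushed to disk individually, which is why the ON case is so much slower on default settings.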

      If you tune postgresql.conf and optimize your PostgreSQL server, the results will change. For instance, turning the fsync option to "off" gives a different result.

      With AutoCommit OFF, time=0.224s; with AutoCommit ON, time=0.259s
      *THIS OPTION SHOULD BE HANDLED CAREFULLY*. It is very dangerous to carelessly turn fsync off, as I did here. It is not suitable for accounting data, but it may show you where the problem is. PostgreSQL has many options for proper tuning. I hope this helps you find the problem.

      What is the disk configuration on your database server? A single 7200 rpm hard disk drive can on average commit only about 100 transactions per second.
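The ~100 figure follows from the rotation rate: with fsync on, each synchronous commit must wait for the write to reach the platter, which in the worst case takes a full disk rotation.

```
7200 rotations/min / 60 s/min = 120 rotations/s
=> at most ~120 synchronous commits/s on a single spindle
   (absent a battery-backed write cache or commit batching)
```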
