PerlMonks
Re: Load/stress-testing PostgreSQL queries

by erix (Vicar)
on Mar 02, 2013 at 08:40 UTC


in reply to Load/stress-testing PostgreSQL queries

I'd start with the benchmark tool that comes with postgresql: pgbench.

pgbench can be used immediately with several pre-set scenarios (e.g. write-heavy, readonly), but it can also be given custom SQL to test your own data and queries.
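For instance, a custom script can be fed to pgbench with -f. A minimal sketch (the filename and parameter range here are hypothetical; \setrandom is the 9.x-era syntax current at the time of this post):

```shell
# my_query.sql -- a hypothetical custom pgbench script.
# \setrandom draws a random id per transaction (pgbench 9.x syntax;
# later pgbench versions use \set aid random(1, 100000) instead).
cat > my_query.sql <<'EOF'
\setrandom aid 1 100000
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
EOF

# Run only that script: -n skips the vacuum step,
# 20 concurrent clients for 60 seconds.
pgbench -h /tmp -n -f my_query.sql -c 20 -T 60
```

Replace the SELECT with your own query, and the \setrandom line with whatever parameters it needs.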

Or perhaps start right away with Greg Smith's pgbench-tools, which is built on top of pgbench. It produces useful output as text, graphs, and HTML. Very nice.
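Roughly, getting pgbench-tools going looks like this (a sketch from memory, not from the post; the repository URL, results-database setup, and script names are assumptions — check its README):

```shell
# Hypothetical pgbench-tools quickstart -- verify every step
# against the project's README before relying on it.
git clone https://github.com/gregs1104/pgbench-tools.git
cd pgbench-tools
createdb results            # database that stores test results
$EDITOR config              # set the test database, scales, client counts
./runset                    # run the whole configured series of tests
```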

I've heard good things about Tsung but never used it. pgbench-tools seems the obvious first thing to run: it requires almost no work to get going, and will give you a good first idea of your system's performance.

There is lots of tuning information in the mailing lists, especially pgsql-performance (and perhaps pgsql-hackers). The search interface on the postgresql website isn't very good, but the mbox files are available too.

http://www.postgresql.org/list/pgsql-performance/

http://www.postgresql.org/list/

(Here is a simple pgbench example: small-scale, readonly. Add command-line connection details if necessary. This output is from version 9.3devel; 9.2 output is similar, slightly more verbose.)

#
# initialise small table...
#
$ pgbench -h /tmp -i -s 10
creating tables...
100000 of 1000000 tuples (10%) done (elapsed 0.10 s, remaining 0.94 s).
200000 of 1000000 tuples (20%) done (elapsed 0.21 s, remaining 0.85 s).
300000 of 1000000 tuples (30%) done (elapsed 0.31 s, remaining 0.71 s).
400000 of 1000000 tuples (40%) done (elapsed 0.40 s, remaining 0.59 s).
500000 of 1000000 tuples (50%) done (elapsed 0.49 s, remaining 0.49 s).
600000 of 1000000 tuples (60%) done (elapsed 0.59 s, remaining 0.39 s).
700000 of 1000000 tuples (70%) done (elapsed 0.68 s, remaining 0.29 s).
800000 of 1000000 tuples (80%) done (elapsed 0.77 s, remaining 0.19 s).
900000 of 1000000 tuples (90%) done (elapsed 0.86 s, remaining 0.10 s).
1000000 of 1000000 tuples (100%) done (elapsed 0.96 s, remaining 0.00 s).
vacuum...
set primary keys...
done.

#
# readonly, 90 concurrent clients, 5 minutes run
#
$ pgbench -h /tmp -n -S -T 300 -c 90
transaction type: SELECT only
scaling factor: 10
query mode: simple
number of clients: 90
number of threads: 1
duration: 300 s
number of transactions actually processed: 6905843
tps = 23012.221067 (including connections establishing)
tps = 23044.131737 (excluding connections establishing)

edit: Of course, you may well have to use a much larger initialisation value (the 'scale', -s, in the init invocation); and performance obviously gets *much* worse when the tables no longer fit in memory, or when the workload is write-mostly instead of readonly as above.
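As a rough sizing rule (an assumption on my part, not measured here): each scale unit adds 100,000 rows to pgbench_accounts, on the order of 16 MB on disk, so you can estimate the scale needed to push the dataset past RAM with a bit of shell arithmetic:

```shell
# Rough estimate: pgbench scale needed before the dataset outgrows RAM.
# Assumption: ~16 MB per scale unit (pgbench_accounts dominates the size).
ram_mb=32768                            # machine RAM: 32 GB, in MB
mb_per_scale=16                         # approximate MB per scale unit
scale=$(( ram_mb / mb_per_scale + 1 ))  # +1 so we clearly exceed RAM
echo "pgbench -h /tmp -i -s $scale"     # prints: pgbench -h /tmp -i -s 2049
```

Initialising at such scales takes a while, so plan the run accordingly.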

