Stress testing a webserver

by jmo (Sexton)
on Apr 29, 2008 at 11:34 UTC

jmo has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I'm trying to build a stress test and so far I haven't been successful in finding a single tool that can do what I need. Unfortunately, I haven't found a nice Perl module to use as the base for building such a tool myself either. So far I've had a look at LWP::Parallel::UserAgent and HTTP::Async, but neither seems to be up to the task. What I want to measure and get stats about is:
  • Number of requests
  • Number of concurrent requests
  • Return codes
  • Requests per second
  • Page sizes
  • Matching regexps against the result
  • Response times
  • Split-up stats, i.e. if I make 10,000 requests I might want to see the stats split in three, to see where the server started degrading


So far I've tried making a module that @ISAs LWP::Parallel::UserAgent. I'm able to override on_connect and on_return to store stats on each request: the time it took, the size of the data returned, and the return code.
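
Stripped down, the subclass looks something like this (following the callback signatures from the LWP::Parallel::UserAgent docs; the real code keeps tallies instead of printing):

    package StressUA;
    use strict;
    use warnings;
    use base 'LWP::Parallel::UserAgent';
    use Time::HiRes qw(time);

    my %start;  # per-request start times, keyed by stringified request

    # Called when the connection for a request is opened.
    sub on_connect {
        my ($self, $request, $response, $entry) = @_;
        $start{$request} = time;
    }

    # Called once a response has arrived completely.
    sub on_return {
        my ($self, $request, $response, $entry) = @_;
        my $elapsed = time - ($start{$request} || time);
        printf "%d %6d bytes %.3f s\n",
            $response->code, length($response->content), $elapsed;
        return;
    }

    1;

It gets driven the usual Parallel way: $ua->register(HTTP::Request->new(GET => $url)) for each wanted request, then $ua->wait($timeout).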

However, Parallel has no feature where you can say "I want to make n requests per second", so it's not possible for me to use it (short of rewriting it).

Now, to the question. Does anyone know of a tool, or a Perl module I can build around, which can do all these things? ab and httperf can do parts of it, but not enough.

Replies are listed 'Best First'.
Re: Stress testing a webserver
by stiller (Friar) on Apr 29, 2008 at 11:59 UTC
    Put up a link on /. and do a post-mortem on the server logs... :o)

    More seriously, take a look at POE. It will enable you to make an orchestrated increase in traffic.
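
    Something along these lines (untested; the URL and ramp parameters are made up) would ramp up the request rate each second using POE::Component::Client::HTTP:

    use strict;
    use warnings;
    use POE qw(Component::Client::HTTP);
    use HTTP::Request::Common qw(GET);

    my $url  = 'http://myhost/mypage';  # made-up target
    my $rate = 1;                       # starting requests per second
    my $step = 1;                       # added to the rate each second
    my $max  = 50;                      # stop ramping here

    POE::Component::Client::HTTP->spawn(Alias => 'ua', Timeout => 30);

    POE::Session->create(
        inline_states => {
            _start => sub { $_[KERNEL]->delay(tick => 1) },
            tick   => sub {
                # Fire $rate requests this second, then ramp up.
                $_[KERNEL]->post(ua => request => 'response', GET $url)
                    for 1 .. $rate;
                $rate += $step if $rate < $max;
                $_[KERNEL]->delay(tick => 1);  # runs until interrupted
            },
            response => sub {
                my ($request_packet, $response_packet) = @_[ARG0, ARG1];
                my $response = $response_packet->[0];
                printf "%s %d bytes\n",
                    $response->code, length($response->content);
            },
        },
    );

    POE::Kernel->run;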

Re: Stress testing a webserver
by CountZero (Bishop) on Apr 29, 2008 at 12:32 UTC
    It is not Perl, but I had some success in using the tools mentioned in this Microsoft Knowledgebase article.

    CountZero

    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

Re: Stress testing a webserver
by perrin (Chancellor) on Apr 29, 2008 at 13:25 UTC
    Can you elaborate on why ab or httperf won't work for you?
      Sure

      In the case of ab:

      It takes the first page (or some random one?), decides that page's size is the correct one, and treats anything that differs as an error page. In my case I'm testing a dynamic page whose size varies a lot, so ab falsely reports tons of errors due to size. Also, I can't specify requests per second, just the number of concurrent requests.

      The specific case I'm trying to stress test is a broken Java/Grails application that always returns code 200; the only way of seeing that it's broken is by the size of the page, so I need a nice report of the count of each size (or, as in the LWP::Parallel::UserAgent subclass I made, size intervals).

      With httperf I can specify the number of req/s, but I can't see when things start to go bad, and I can't see the number of pages of different sizes, or the spread of times it takes to serve a page (which ab can).

      I realize that my need to see the sizes is very specific to the bad code I'm testing (which I lack the power to correct), but I think the basic question I'm seeking an answer to is very general: "How many requests per second can page x handle before it starts taking too long to reply, or starts to break?"

      Currently my script outputs the report below. If it weren't that Parallel doesn't seem to honour "max parallel", and that I can't specify the number of requests per second, I'd be pretty satisfied with the data it gathers; it's just that those two things are critical for the tests to be of any value to me:
      Configuration
        Max parallel:    10
        Wanted requests: 1000
        Max total time:  100000000s
      Stats general
        Requested url: http://myhost/mypage
        Made requests: 1000
        Start time: Fri Apr 25 16:06:22 2008 (+621671 micro s)
        End time:   Fri Apr 25 16:09:49 2008 (+838640 micro s)
        Return codes:
          200: 1000
        Content lengths:
          <=  12000 bytes: 245
          <=  24000 bytes: 0
          <=  32000 bytes: 755
          <= 100000 bytes: 0
        Time limits:
          <= 0.5 s: 238
          <=   1 s: 300
          <=   3 s: 454
          <=   5 s: 0
          <=  30 s: 8
      Split stats
        Request 1 to 333
          Return codes:
            200: 333
          Content lengths:
            <= 12000 bytes: 106
            <= 32000 bytes: 227
          Time limits:
            <= 0.5 s: 104
            <=   1 s: 130
            <=   3 s: 96
            <=  30 s: 3
        Request 334 to 666
          ...
        Are you sure httperf won't save the output for you?

        If I were going to roll my own, I'd skip LWP::Parallel and go with forking. There are some HTTP modules that have good performance, like HTTP::GHTTP and HTTP::MHTTP. Put those together with Parallel::ForkManager and you have a good start.
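
        For instance, something like this (a rough sketch; it uses plain LWP::UserAgent for the fetch so you could swap in HTTP::GHTTP, and the URL and counts are made up):

        use strict;
        use warnings;
        use Parallel::ForkManager;  # 0.7.6+ passes data back to run_on_finish
        use LWP::UserAgent;
        use POSIX qw(ceil);

        my $url      = 'http://myhost/mypage';  # made-up target
        my $requests = 1000;
        my $workers  = 10;

        my $pm = Parallel::ForkManager->new($workers);
        my (%codes, %sizes);

        # Collect the per-request stats each child sends back on finish.
        $pm->run_on_finish(sub {
            my ($pid, $exit, $ident, $signal, $core, $data) = @_;
            return unless $data;
            $codes{ $data->{code} }++;
            $sizes{ 12000 * ceil($data->{size} / 12000) }++;  # 12k buckets
        });

        for my $i (1 .. $requests) {
            $pm->start and next;  # parent keeps looping; child carries on

            my $ua  = LWP::UserAgent->new(timeout => 30);
            my $res = $ua->get($url);

            # Response times could be passed back here the same way.
            $pm->finish(0, { code => $res->code,
                             size => length($res->content) });
        }
        $pm->wait_all_children;

        printf "code %s: %d\n", $_, $codes{$_} for sort keys %codes;
        printf "<= %6d bytes: %d\n", $_, $sizes{$_}
            for sort { $a <=> $b } keys %sizes;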

Re: Stress testing a webserver
by stvn (Monsignor) on Apr 29, 2008 at 14:16 UTC

    Take a look at Siege; we use it a lot for pounding on servers, and it gives some decent stats too. I am not sure it will give you everything you want, but it's worth a look at least.

    -stvn
      Thanks!

      I'm having a look at it now. It seems to be in the direction of what I'm after, since in verbose mode it describes every result, and that output is parseable...
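
      For instance, a quick tally script over the verbose output (the exact line format varies between siege versions, so the regexp is a guess based on mine):

      #!/usr/bin/perl
      # Tally status codes and size buckets from "siege -v" output on STDIN.
      # Assumed line format (check your siege version):
      #   HTTP/1.1 200   0.12 secs:   12345 bytes ==> /mypage
      use strict;
      use warnings;
      use POSIX qw(ceil);

      my (%codes, %sizes);
      while (my $line = <>) {
          next unless $line =~
              m{^HTTP/[\d.]+\s+(\d{3})\s+[\d.]+\s+secs:\s+(\d+)\s+bytes};
          my ($code, $bytes) = ($1, $2);
          $codes{$code}++;
          $sizes{ 12000 * ceil($bytes / 12000) }++;  # 12k size buckets
      }
      printf "code %s: %d\n", $_, $codes{$_} for sort keys %codes;
      printf "<= %6d bytes: %d\n", $_, $sizes{$_}
          for sort { $a <=> $b } keys %sizes;

      Run as something like "siege -v -c 10 -r 100 http://myhost/mypage | perl tally.pl" (check where your siege version sends the verbose lines).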
