Stress testing a web server
by ibanix (Hermit) on Jan 04, 2003 at 22:44 UTC
ibanix has asked for the wisdom of the Perl Monks concerning the following question:
A few days ago I needed to do some very basic testing on a web server: simulate a number of simultaneous connections to it. The server was configured with a hard maximum on active connections, and I needed to verify that it would enforce that limit.
Here's the 10-minute script I wrote:
As you can see, the script forks a number of children and calls out to wget to handle the dirty work (this was needed in a hurry, ok?).
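The original script didn't survive the cut-and-paste, but a minimal sketch of the approach described above might look like this (the URL, the $run count, and the wget flags are my assumptions, not the original code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $url = 'http://webserver.example.com/';   # placeholder target
    my $run = 100;                               # number of children to fork

    for (1 .. $run) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: hand the request off to wget, discarding the output.
            exec 'wget', '-q', '-O', '/dev/null', $url
                or die "exec wget failed: $!";
        }
    }

    # Parent: reap every child before exiting.
    1 while wait() != -1;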
I noticed that the script would never produce more than 40 connections/s to my web server, no matter what I set the $run variable to.
So I'm wondering where my bottleneck is. Calling out to wget? The time it takes to fork each process? The network bandwidth? The OS's speed in creating sockets?
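One way to separate those suspects: open raw TCP connections in-process with IO::Socket::INET and hold them open, skipping fork and wget entirely. If a single process can pile up connections much faster than 40/s, the fork-and-exec overhead is the likely culprit. This is a hedged sketch, not part of the original test; the host, port, and connection count are placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $host = 'webserver.example.com';   # placeholder target
    my $port = 80;
    my $want = 200;                       # connections to attempt
    my @conns;

    for my $i (1 .. $want) {
        my $sock = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => $port,
            Proto    => 'tcp',
            Timeout  => 5,
        );
        unless ($sock) {
            warn "connection $i failed: $!\n";
            last;
        }

        # A minimal request so the server treats this as an active hit.
        print $sock "GET / HTTP/1.0\r\nHost: $host\r\n\r\n";
        push @conns, $sock;               # hold the socket open
    }

    printf "held %d simultaneous connections\n", scalar @conns;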
This is a bit more than a Perl problem, and I would love any feedback. For reference, the server running this script is single-CPU, 400 MHz, with 384 MB of RAM and 10 Mbit of bandwidth, running FreeBSD. The server it is attempting connections to is a dual-CPU 2.8 GHz Xeon with 2 GB of RAM, on a 100 Mbit network. Peak bandwidth used on the script server was ~80 KB/s -- well below its maximum.
$ echo '$0 & $0 &' > foo; chmod a+x foo; foo;
edited: Sun Jan 5 00:26:29 2003 by jeffa - title change (was: Limitation: perl, my code, or something else?)