http://www.perlmonks.org?node_id=933048


in reply to Re^8: aXML vs TT2
in thread aXML vs TT2

1000req/s is good, though what's the backend doing then? Also, what kind of concurrency have you got going with that benchmark? I've developed a few proprietary high-availability/high-throughput web servers, and you learn very quickly that what you thought was great performance can fall down around you when you start passing -c (the concurrency flag) to ab.

I'm testing one such server I wrote right now; it's been in production with months of uptime (i.e. it's very stable and feature-complete), and it hooks into a variety of networks and network types.

At 1000 concurrent requests, it's serving about 18,000req/s on my not-too-impressive Core i5-450M (notebook) -- and that's because it's doing nothing but serving a very simple static page (and because it's written in C). The most complicated thing it's doing is reading the request, dispatching it to the correct endpoint, and writing the response into a buffer to be shunted onto the network.

My point (apples-and-oranges comparison aside) is that, as the underlying framework, you can never be too fast, because you don't know what someone's going to put on top of it. Serving tens of thousands of requests per second is a nice baseline. Frankly, so is serving nearly a thousand, but this is just a reminder not to settle for "good enough".

Anne

Replies are listed 'Best First'.
Re^10: aXML vs TT2
by Logicus (Initiate) on Oct 22, 2011 at 10:31 UTC

    The benchmark script looks like this:

    use LWP::Simple;
    use Bench;

    fork(); # 2 processes
    fork(); # 4
    fork(); # 8
    fork(); # 16

    &Bench::Start;
    for (1..1000) {
        $content = get("http://localhost:5000/");
        print "Couldn't get it!" unless defined $content;
    }
    print &Bench::EndReport;
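    Bench appears to be a local module whose source isn't shown, so its internals are an assumption; a minimal standalone sketch of the same fork-and-time pattern, using the core Time::HiRes module (with the HTTP get stubbed out so it runs without a server), might look like this:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hypothetical stand-in for get("http://localhost:5000/");
# swap in LWP::Simple::get() to benchmark a real server.
sub fetch { return "ok" }

my $procs    = 4;     # number of worker processes
my $requests = 1000;  # requests per worker

my $t0 = [gettimeofday];

my @pids;
for (1 .. $procs) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: run its share of the requests, then exit
        for (1 .. $requests) {
            my $content = fetch();
            warn "Couldn't get it!\n" unless defined $content;
        }
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;   # parent waits for all workers

my $elapsed = tv_interval($t0);
my $total   = $procs * $requests;
printf "%d requests in %.3fs (%.0f req/s)\n",
    $total, $elapsed, $total / $elapsed;
```

    Note that sequential bare fork() calls double the process count each time, so the original script's four forks yield 16 workers each timing its own loop; this sketch tracks child PIDs explicitly so the parent can wait for all of them and report an aggregate rate.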

    The number of requests per second varies depending on things like the page length and complexity, how deep the tag nesting goes and how much data is in the database, with the database lookup apparently being the largest of these factors.

    The slowest page I have renders at roughly 500-600 per second, whilst the fastest, which is almost a static page (it's based on the same template but with only a handful of active tags and one relatively small lookup), renders well in excess of 1000 per second.

    Today's "good enough" is "good enough x 2" 18 months from now, "good enough x 4" in 3.5 years, and so on. Considering that I originally wrote aXML nearly 5 years ago, I had already seen significant improvements in its processing speed prior to my recent breakthrough using Plack.

    Also consider that we are on the verge of seeing servers with 100+ processor cores on a single chip. Granted, they will only run at around 700-800MHz each, but that is still going to be fast enough that the current setup will be more than efficient enough for just about anything short of the next eBay or YouTube, with, once again, the database I/O being the weakest link.

    P.S. The version of aXML I am running on my server (probably around version 4, whilst the new one would be version 7) runs for months and months without downtime on a tiny little slice server with 256MB of RAM. aXML used to be processor-inefficient, but it has always been light on memory usage.