How fast is fast?
by Logicus on Aug 06, 2011 at 20:05 UTC
This is not so much a question as a meditation. But then again maybe it is a question but one using some clever obfuscation, who knows?
Anyhoo... grab a cuppa and settle back...
How fast is fast?
Let's say you want to set up a site and your aim is to be sure that it will handle up to 1 million hits per hour. So you shop around and you get yourself a dedicated server with 24 processor cores, 12GB RAM, a 100Mbps pipe and 15TB of bandwidth for about $259/month. (Yeah, I know a place...)
Anyway, so you do a little maths and divide 3,600 seconds by 1,000,000 hits and get the answer of 0.0036. So here's your target: get your code to build a response in that length of time, and you can do 1,000,000 in an hour. That is of course neglecting the multiple cores you just ordered, each of which can be simultaneously producing a result, so your server is apparently capable of 24,000,000 hits per hour. Less, of course, processor time for the O/S, DB, and other sundry odds and sods. So let's call it 20,000,000 hits per hour.
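The arithmetic above can be sketched in a few lines of Perl. The figures (1 million hits/hour target, 20 cores left free after the O/S and DB take their cut) are the rough assumptions of this meditation, not measurements:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Time budget per hit to sustain 1,000,000 hits/hour on a single core
my $target_hits_per_hour = 1_000_000;
my $budget_s = 3600 / $target_hits_per_hour;       # 0.0036 s per hit

# 24 cores, minus ~4 lost to O/S, DB and sundry odds and sods
my $free_cores = 20;
my $capacity   = $free_cores * 3600 / $budget_s;   # ~20,000,000 hits/hour

printf "Budget per hit:  %.4f s\n", $budget_s;
printf "Rough capacity:  %.0f hits/hour\n", $capacity;
```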
But how big is a hit?
With broadband now the de facto standard, the days of optimising sites so they load in less than 10 seconds on a standard 56K modem are becoming a distant memory. Let's say you have about 50KB for the headers, cookies and HTML, plus the high quality JPEG images, maybe some Flash animations, and video content, all of which cost bandwidth to send out.
So let's be conservative with the estimate of average KB per hit, and suggest a value of about 200KB total.
20,000,000 * 200KB = 4,000 GB (4TB)
Oh dear... running at full tilt, your super fast, optimised-down-to-the-bone program has just served up so many hits so fast that you have run out of bandwidth in a mere 3 hours and 45 minutes.
And that was assuming you had a pipe wide enough to fit all that data through it at that speed!
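Here's that burn rate as a quick sketch, using the round figures from above (200KB per hit, 20 million hits/hour, a 15TB cap, decimal units throughout):

```perl
use strict;
use warnings;

my $kb_per_hit    = 200;
my $hits_per_hour = 20_000_000;
my $gb_per_hour   = $hits_per_hour * $kb_per_hit / 1_000_000;  # 4,000 GB/hour
my $cap_gb        = 15 * 1000;                                 # 15TB cap
my $hours_to_cap  = $cap_gb / $gb_per_hour;                    # 3.75 hours

printf "Burn rate: %d GB/hour; cap gone in %.2f hours\n",
       $gb_per_hour, $hours_to_cap;
```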
So how big was the pipe again?
The server comes with a 100Mbps pipe with a throughput capability of around or just under 12.5 MB/s.
12,500 KB/s / 200KB = 62.5 average hits per second
More / wider pipes!!
So having worked out that the pipe can only sustain a throughput of around 62.5 hits / second * 60 * 60 = 225,000 / hour, you realise you need a bigger pipe!
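The pipe maths, sketched the same way (100 megabits/s converted to kilobytes/s in decimal units, as in the figures above):

```perl
use strict;
use warnings;

my $mbps          = 100;
my $kb_per_s      = $mbps * 1000 / 8;     # 12,500 KB/s
my $hits_per_s    = $kb_per_s / 200;      # 62.5 hits/s at 200KB each
my $hits_per_hour = $hits_per_s * 3600;   # 225,000 hits/hour

printf "%.1f hits/s, %d hits/hour\n", $hits_per_s, $hits_per_hour;
```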
So you plump for the 1Gbps pipe: 10x the bandwidth, with a maximum theoretical speed of 2,250,000 average hits per hour.
But wait a minute, the code is capable of running 20,000,000 hits per hour because we optimised it so heavily. So even at maximum throughput on the pipe on a dedicated server with unlimited bandwidth, the CPU(s) is/are nearly 90% idle!
So how fast does it need to be?
So we have just determined that a server with even a 1Gbps pipe can only do around 625 hits per second before it simply hits the bandwidth limit and can go no faster.
So how many cores have you got?
In the example above the server had 24 cores, of which I suggested 4 would be busy with odds and sods leaving the other 20 cores free for rendering activity.
625 requests / 20 cores = 31.25 per core per second
1 / 31.25 = 0.032 seconds or about 10x longer than our hyper-optimised version.
625 requests * 200KB * 60 * 60 = 450,000,000 KB = 450GB / hour
With a 15TB bandwidth cap you're going to run out of bandwidth again in about 33 hours and 20 minutes.
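Putting the 1Gbps case together in one sketch, again using the round figures above (20 free cores, 200KB per hit, 15TB cap):

```perl
use strict;
use warnings;

my $kb_per_s     = 1000 * 1000 / 8;                       # 1Gbps = 125,000 KB/s
my $hits_per_s   = $kb_per_s / 200;                       # 625 hits/s
my $per_core     = $hits_per_s / 20;                      # 31.25 hits/core/s
my $budget_s     = 1 / $per_core;                         # 0.032 s per hit
my $gb_per_hour  = $hits_per_s * 200 * 3600 / 1_000_000;  # 450 GB/hour
my $hours_to_cap = 15_000 / $gb_per_hour;                 # ~33.33 hours on 15TB

printf "Per-hit budget: %.3f s; cap gone in %.2f hours\n",
       $budget_s, $hours_to_cap;
```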
But recall our spec only called for 1,000,000 hits per hour, which was around 277 per second.
277 * 200 * 60 * 60 = 199.44GB / hour
15TB / 199.44GB = 75.21 hours.
Clearly more bandwidth is needed, or the server will run out just 3 days into the month!
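To see how much more, here's the spec'd workload run against the cap; the 30-day month is my assumption for the sake of the sketch, and using a flat 1,000,000 hits/hour gives 200 GB/hour rather than the 199.44 above:

```perl
use strict;
use warnings;

my $hits_per_hour = 1_000_000;
my $gb_per_hour   = $hits_per_hour * 200 / 1_000_000;  # 200 GB/hour
my $cap_hours     = 15_000 / $gb_per_hour;             # 75 hours on 15TB
my $tb_per_month  = $gb_per_hour * 24 * 30 / 1000;     # 144 TB for 30 days

printf "Cap lasts %.0f hours; a full month needs ~%d TB\n",
       $cap_hours, $tb_per_month;
```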
Therefore, in this day and age, bandwidth availability is far more important than server speed and available cores, a trend which is only likely to increase as web apps get more complex and graphically polished and include high-bandwidth content like streaming video and audio.
The processing power available today is huge! Also, I recently saw a video of Larry talking about Perl 6 in which he mentioned utilising CUDA graphics hardware to do extra processing. Things like regexes, which have traditionally been considered expensive on CPU time since each character must be tested against the matching rules, can be done in parallel on the graphics card at super fast speed.
Nvidia Tesla cards (and tesla enabled servers), can have as many as 512 processing cores available!
With that much processor power backed up behind the bottleneck of available bandwidth, questions like "how fast is fast?" and "is my code fast enough?" are soon set to sit alongside questions like "will my database of 100,000 records fit onto the server's 10MB hard disc, or do I need to drop the first two characters of the 'Year' column to save space?"
In short, don't worry about it too much. If you're using mod_perl2 (or similar) and your code is producing results in less than 0.3 seconds, you're pretty much good to go! Remember, your time is several orders of magnitude more valuable and expensive than extra server power, and considering the scale to which servers can be built now (think Deep Blue, Tianhe-1A, etc.), the sky really is the limit. Remember the Cray? Would you believe it only ran at 80MHz? I'm pretty sure I've got a wristwatch somewhere around here that can outperform that now.
Yesterday's supercomputer is tomorrow's laptop! And yesterday's code, which absolutely must be optimised to the bone for the sake of efficiency, is tomorrow's day off having fun. (Unless of course you enjoy debugging and code optimisation; each to their own, I guess.)