I think the best way to estimate the page-load time of a web page is through simulation, or even deduction, not experiment. Experimental results are too heavily biased by the properties of the network you’re on ... which, in the case of in-house testing, is much too fast. (Developers always have the fastest machines.) The behavior of modern AJAX-driven web sites also makes it hard to produce truly useful experimental results.
One of the best and cheapest performance improvements I ever pulled off came from observing (with Firebug) that a lot of pages on one site had originally been generated with Microsoft Word, and that this particular version of Word had generated a separate (but identical) image for every bullet and even for every horizontal rule. Even though the image content was identical, a separate file name had been generated for each one, so there were many dozens of downloads of the same information ... and many duplicate copies of this data in the database(!) that served them. It also “flooded out” the client-side cache, which wound up full of copies of these images that would never be referred to again. A Perl script to locate and consolidate the identical images, then pass through the HTML to substitute file names, had an enormous positive impact on the whole shebang.
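The idea is simple enough to sketch. Something like the following captures the approach (this is not the original script; the directory names, file extensions, and the choice of MD5 for content hashing are illustrative assumptions):

    #!/usr/bin/perl
    # Sketch only: consolidate byte-identical images and fix up the HTML.
    use strict;
    use warnings;
    use File::Find;
    use Digest::MD5;

    my $image_dir = 'site/images';   # assumed location of the exported images
    my $html_dir  = 'site/pages';    # assumed location of the exported HTML

    # 1. Hash every image; the first file seen with a given digest becomes
    #    the canonical copy, later identical files are recorded as duplicates.
    my (%canonical, %duplicate_of);
    find(sub {
        return unless -f $_ && /\.(?:gif|png|jpe?g)$/i;
        open my $fh, '<', $_ or die "$File::Find::name: $!";
        binmode $fh;
        my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
        close $fh;
        if (exists $canonical{$digest}) {
            $duplicate_of{$_} = $canonical{$digest};  # duplicate name -> canonical name
        } else {
            $canonical{$digest} = $_;
        }
    }, $image_dir);

    # 2. Rewrite every HTML page so image references use the canonical names.
    find(sub {
        return unless -f $_ && /\.html?$/i;
        local $/;                                     # slurp the whole file
        open my $in, '<', $_ or die "$File::Find::name: $!";
        my $html = <$in>;
        close $in;
        for my $dup (keys %duplicate_of) {
            my $keep = $duplicate_of{$dup};
            $html =~ s/\Q$dup\E/$keep/g;
        }
        open my $out, '>', $_ or die "$File::Find::name: $!";
        print $out $html;
        close $out;
    }, $html_dir);

    # 3. The duplicate image files (the keys of %duplicate_of) can now be
    #    deleted, and the matching rows purged from the database.

The same consolidation then has to be applied to the duplicate rows in the database, which is a separate (and site-specific) cleanup step.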