
Re^2: PM CSS and markup optimizations (compression++)

by cavac (Deacon)
on Jul 15, 2012 at 19:17 UTC ( #981917=note )

in reply to Re: PM CSS and markup optimizations (compression++)
in thread PM CSS and markup optimizations

Now, compression could actually make a difference

I did some fairly large tests a year or two ago when I implemented it in my Maplat framework. Depending on the page size (I sometimes have fairly large data tables that I use with jQuery), compression even speeds up page loads within a 100 MBit LAN.

If the user works through a slow link (in this test, me accessing the company network from home via an encrypted VPN tunnel), even small pages load quite a lot faster. Server performance degrades only slightly (i.e., the time the server needs to send out a page to the client increases marginally), and on DSL links the pages still load faster overall.

CPU load isn't greatly increased on my server. Granted, I only get about 50,000-80,000 requests per hour, only about 10,000 of those are from clients that support compression, and the workload is very database-heavy.
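The negotiation described above can be sketched roughly as follows. This is a minimal illustration in Python, not Maplat's actual Perl code; the `maybe_compress` helper and the sample payload are invented for this example. The key points are that compression is applied only when the client advertises support via `Accept-Encoding`, and that repetitive HTML such as large data tables shrinks dramatically:

```python
import gzip

def maybe_compress(body: bytes, accept_encoding: str) -> tuple[bytes, dict]:
    """Gzip the response body only when the client advertises gzip support."""
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}  # client gets the uncompressed original

# A large, repetitive data table -- the kind of page that benefits most.
html = b"<tr><td>row</td></tr>" * 5000

compressed, headers = maybe_compress(html, "gzip, deflate")
plain, no_headers = maybe_compress(html, "identity")

print(len(html), len(compressed))  # compressed size is far smaller
```

On a slow link the transfer-time savings from the smaller body easily outweigh the compression cost, which matches the VPN observation above.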

Sorry for any bad spelling, broken formatting and missing code examples. During a slight disagreement with my bicycle (which I lost), I broke my left forearm near the elbow. I'm doing the best I can here...

Replies are listed 'Best First'.
Re^3: PM CSS and markup optimizations (compression++)
by tye (Sage) on Jul 15, 2012 at 19:44 UTC

    Just to offer some minor clarifications: the reason I don't think compression will make a big difference on typical page load times is that I expect most slow page loads at PerlMonks are due to slow server response, not to large download sizes. But, yes, for people on slow links, compression could make a big difference even when PerlMonks is being slow to respond.

    I also don't expect compression to be a source of large CPU consumption. It was just that, at a time when the web servers were often running out of CPU, spending CPU to compress the pages was likely to make the server response enough slower that the net result was not an improvement. Even more important, when a web server became overloaded (ran out of CPU), adding to the CPU load of every page delivered would likely make the overload condition linger longer (being slow leads to a build-up of requests, which makes things slower still...).

    Adding compression should be just a "win" at this point.

    - tye        
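The overload concern above suggests capping compression's CPU cost rather than disabling it. One common mitigation, sketched here as an illustrative Python example (not PerlMonks' actual code; the `MIN_SIZE` threshold and `cheap_compress` name are invented for illustration), is to use a fast compression level and skip tiny responses where the savings can't repay the overhead:

```python
import gzip
import time

MIN_SIZE = 1024  # don't bother compressing sub-1 KiB responses

def cheap_compress(body: bytes) -> bytes:
    """Favor speed over ratio: level-1 gzip, and only for larger bodies."""
    if len(body) < MIN_SIZE:
        return body
    return gzip.compress(body, compresslevel=1)

page = b"<p>monastery</p>" * 4096  # ~64 KiB of repetitive HTML

t0 = time.process_time()
out = cheap_compress(page)
cpu = time.process_time() - t0

print(len(page), len(out), f"{cpu:.4f}s CPU")
```

Level 1 gives up some compression ratio but keeps the per-request CPU cost small, which matters most exactly when the server is near saturation.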
