PerlMonks
Connection: Keep-Alive and Perl

by dws (Chancellor)
on May 29, 2001 at 01:30 UTC ( #83791=perlmeditation )

While rooting about in a TCP packet trace, I discovered something that I had not heard mentioned before in discussions about building Web Applications using Perl: The "standard wisdom" on building CGIs prevents webapps from taking advantage of an HTTP/1.1 optimization.

Connection: Keep-Alive

HTTP/1.1 supports a network optimization. If a client (e.g., a browser) sends "Connection: Keep-Alive" in a request header, the web server can keep the connection persistent, so that the underlying socket can be reused to service subsequent requests. This avoids having to set up and tear down a socket every time a client needs to request something from the server. To avoid exhausting the socket pool, an HTTP/1.1-compliant server will eventually time-out a persistent connection and close the socket.
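Concretely, a persistent exchange looks something like this (headers abbreviated, hostname and lengths hypothetical):

```
GET /index.html HTTP/1.1
Host: www.example.com
Connection: Keep-Alive

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1234
Connection: Keep-Alive

...exactly 1234 bytes of body; the socket then stays open for the next request...
```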

Content-Length

For connection reuse to work, a browser needs to completely digest a prior response. To do this, the browser relies on the "Content-Length" in the response header. After reading the header, the browser extracts the Content-Length, and then reads the socket until it has consumed exactly that many bytes. The very next byte will be the beginning of the next response header.

What if there isn't a Content-Length in the response header? In HTTP/1.1, the final fallback method of determining the message body length is to read bytes from the socket until the socket closes. This doesn't cause a problem for the browser -- it will merely reopen a new socket if and when it needs to.

What does this have to do with Perl?

Just this: the standard wisdom on how to code Perl CGI scripts prevents web applications from taking advantage of the HTTP/1.1 Keep-Alive optimization.

The standard advice says to unbuffer STDOUT, then immediately print

    print "Content-Type: text/html\n\n";

or, when using CGI.pm,

    print header();

followed by the HTML. This has the effect of sending a response that omits a Content-Length, which means that even if the browser has sent a Keep-Alive request, the socket will be closed after the response is sent, and the browser will need to open a new socket for subsequent requests. If multiple script invocations are needed to render a page (e.g., if a page is framed, and each frame's contents are generated dynamically), the effect is multiplied.

Losing connection persistence isn't an issue during development, where the benefit of visibility onto script behavior far outweighs the barely measurable socket setup/teardown overhead. And in most low-volume situations, the socket setup/teardown overhead is relatively minimal. But in a high-latency situation, the difference in behavior can be noticeable. To understand why, we need to dig a bit.

Counting Packets

When a browser requests a page from a web server, the transaction takes a minimum of two TCP packets: one to carry the HTTP GET request, and one to carry the response from the server. (A large response gets split across multiple packets.) But this doesn't count the packet overhead for opening (and later closing) the socket. Establishing a TCP connection takes 3 packets; closing the connection takes 4 packets (consult Stevens for the grisly details). Each request packet requires an acknowledgement, but this is typically piggybacked on the reply (data) packet. (There are some other tricks for boxcarring ACKs. I'm going to beg forgiveness and ignore them, which will throw the math below off just a bit.)

A Web Application Scenario

To see why Keep-Alive might matter, consider a simple web application that consists of a frameset and two frames, all of which are generated dynamically. One of the frames includes an image. It takes 4 HTTP requests to get all of the pieces into a browser.

Without Keep-Alive, these 4 HTTP requests (on the same socket) require a minimum of (3 + 2 + 4) * 4 = 36 packets. With Keep-Alive, 3 + (2 * 4) + 4 = 15 packets are needed, the final 4 of which are deferred until the connection either times out or is closed by the browser.
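The arithmetic above can be double-checked with a tiny script, using the per-connection costs from the packet-counting section:

```perl
use strict;
use warnings;

# Packet counts from the discussion above: 3 to open a TCP
# connection, 4 to close it, 2 per request/response pair.
my ($open, $close, $per_req) = (3, 4, 2);
my $requests = 4;

# A fresh socket per request vs. one persistent socket:
my $without_keepalive = ($open + $per_req + $close) * $requests;
my $with_keepalive    = $open + $per_req * $requests + $close;

print "without Keep-Alive: $without_keepalive packets\n";   # 36
print "with Keep-Alive:    $with_keepalive packets\n";      # 15
```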

In reality, the math doesn't work out quite this way, in part because browsers keep multiple sockets open so that HTTP requests can be made in parallel. (IE uses 2 sockets.) But the effect is the same. If the response to an HTTP GET doesn't include a Content-Length, then the socket gets closed, and a new one will be opened.

Now consider the impact of a hundred browsers running a more complicated web application that periodically polls the server. Are you going to want to keep those connections alive?

The Moral

If you're building a web application that might be deployed in a high network latency situation, consider taking advantage of HTTP/1.1 Keep-Alive. This requires that you build up the HTML that your CGI will emit, and then emit the HTML in one piece, with a Content-Length prepended. Something along the lines of

    binmode(STDOUT);
    $html = ...;
    print "Content-Type: text/html\r\n",
          "Content-Length: ", length($html),
          "\r\n\r\n",
          $html;
or, if using CGI,
    $html = ...;
    print header(-content_length => length($html)), $html;
will do the trick.
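Fleshed out, a minimal CGI in this style might look like the following. The page content is a placeholder; note also that length() counts characters, which equals bytes only for byte-oriented strings, so wide-character data should be encoded before measuring it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the whole page first, then send it with a Content-Length
# so an HTTP/1.1 server can keep the connection open.
my $html = "<html><body><p>Hello, monks.</p></body></html>";

my $response = "Content-Type: text/html\r\n"
             . "Content-Length: " . length($html) . "\r\n\r\n"
             . $html;

binmode STDOUT;
print $response;
```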

Or, at least do a packet trace so that you can see what's really going on under the covers.

Corrections to any of the above will be appreciated.

Re: Connection: Keep-Alive and Perl
by shotgunefx (Parson) on May 29, 2001 at 12:07 UTC
    A very interesting post. I never dug deep enough into the RFC to see why this might be a problem. Our servers deal with a lot of small transactions (usually a single request, with no additional connections), so it hasn't been a problem.

    For a quick fix, what do you think about a module that would use IO::Scalar or Tie::Handle or something similar to redirect STDOUT to a scalar and output the Content-Length followed by the scalar (presumably containing HTML) on CLOSE?
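Something along those lines can be sketched with a plain tie. The package and method names below are made up for illustration; a real version would tie *STDOUT itself (after saving a copy of it for the final print) and do the emitting in CLOSE or an END block.

```perl
use strict;
use warnings;

# A handle tied to this package accumulates everything printed to it.
package CaptureHandle;   # hypothetical name

sub TIEHANDLE { my ($class) = @_; my $buf = ''; return bless \$buf, $class }
sub PRINT     { my $self = shift; $$self .= join('', @_); return 1 }
sub PRINTF    { my $self = shift; my $fmt = shift; $$self .= sprintf($fmt, @_); return 1 }
sub buffer    { my ($self) = @_; return $$self }

package main;

tie *CAPTURE, 'CaptureHandle';
print CAPTURE "<html><body>generated piecemeal, as usual</body></html>";

# At "close" time, emit the buffered HTML in one piece, with a Content-Length:
my $html = tied(*CAPTURE)->buffer;
print "Content-Type: text/html\r\n",
      "Content-Length: ", length($html), "\r\n\r\n",
      $html;
```

The appeal of this approach is that the script body keeps printing as it always did; only the plumbing around it changes.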

    -Lee

    "To be civilized is to deny one's nature."
      Seems like this would be good for people skipping quickly through many dynamic pages, but I wonder at what user volume or click speed you would start to feel it.

      To me this is also interesting in that I have been plagued with a browser closing before a long log file was displayed as running output of a C++ process on Unix. I thought I had tried keep-alive correctly (as a header in the HTML page), and the only answer I could find was to use a meta refresh, which would reload the page periodically.

      So what I have is a Perl program called as a CGI, which does a lot of processing and calls various C/C++ programs. It and those processes write a detailed log file, and simultaneously the CGI program writes a much terser description of what is going on to the user's browser. The terseness of the CGI output would cause a browser timeout, and the user would think the C process had died and try to launch it again from the CGI. (Ouch.)

      This causes problems, of course: what if it takes longer to load the page than the meta refresh period? (It disappears while you are trying to read it, is what happens.)

      I wonder if writing a Content-Length header with a huge number would keep the browser open indefinitely (i.e. the browser logo would keep spinning forever)?

        I wonder if writing a Content-Length header with a huge number would keep the browser open indefinitely (i.e. the browser logo would keep spinning forever)?

        Wouldn't that be the same as not specifying a Content-Length?

        I have been plagued with a browser closing before a long log file was displayed as running output of a C++ process in unix.

        Keeping a browser open indefinitely is a trick that you'll need to get your web server to go along with. Web servers have differing strategies for how they handle what they believe to be "runaway" CGIs.

        One scheme you might try is to fork a separate process that uses HTTP::Daemon to set up a mini web server, then send back a redirect to the browser to point it to the new process. merlyn described this technique in a recent article (which isn't yet on-line, but the listing is here). This bypasses any of the web server's "runaway" detection. From the HTTP::Daemon process, you can dribble out output to your heart's content.

        Oh. You said C++. Never mind.

        Keep-Alive, in my understanding, is really for keeping the server listening on the connection for a time after the initial transaction (so it doesn't have to spawn a new server process for the next connection; say, when downloading a bunch of images from a web page).

        For keeping the browser around a long time, I find that you have to keep talking to it. I have a database converter for a client that can take upwards of 30 minutes to run. What I do to keep the connection alive is disable buffering, and before the long process I output a message like

        "Processing, please wait...
        (This will take approximately 30 minutes. Go get a cup of coffee.)"

        Then as I process the data, I print a "." or something after every couple of hundred records (like sftp does), so they know it is still going (and hopefully don't get impatient and hit refresh, as the process is HUGE and I don't want to start another one for no reason).

        There are probably a lot of ways to accomplish this. If you are forking, I would do something similar and have the parent use sleep to time the printing. Keep in mind this solution is a bad idea if you expect a lot of people to be using these programs, as you're tying up a lot of resources. For my problem it's fine, as only one person runs said program once a day.
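The dribble-a-dot approach described above can be sketched like this. The record loop is a stand-in for the real work, and $dots exists only so the example can count what it emitted:

```perl
use strict;
use warnings;

$| = 1;   # unbuffer STDOUT so each dot goes out immediately
print "Content-Type: text/html\r\n\r\n";
print "Processing, please wait...<br>\n";

my $dots = 0;
for my $record (1 .. 1000) {
    # ... process $record here (stand-in for the real work) ...
    if ($record % 200 == 0) {
        print ".";      # sign of life every couple hundred records
        $dots++;
    }
}
print "<br>Done.\n";
```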

        -Lee

        "To be civilized is to deny one's nature."
Chunked encoding permits HTTP keep-alive
by blssu (Pilgrim) on Sep 12, 2002 at 21:13 UTC

    Good article -- I've never done the packet trace, but the difference doesn't surprise me. It's not usually necessary to build everything in memory in order to calculate Content-Length; that's only needed for browsers that don't understand chunked encoding. Chunked encoding also allows trailing headers to be sent at the end of the message, very similar to PostScript's "at end" headers.

    mod_perl can transparently generate chunked encoding on recent versions of Apache. Doing chunked encoding from plain CGI would be more difficult, but the protocol is fairly simple.
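The chunked framing itself is simple: each chunk is its byte count in hex, a CRLF, the bytes, and a CRLF, with a zero-length chunk terminating the message. Whether a plain CGI can emit this directly depends on the server not imposing its own transfer encoding, so treat the following as a format sketch rather than a deployable script:

```perl
use strict;
use warnings;

# Frame one chunk: hex length, CRLF, data, CRLF.
sub chunk {
    my ($data) = @_;
    return sprintf("%x\r\n%s\r\n", length($data), $data);
}

my $body = chunk("<html><body>")
         . chunk("Hello, monks.")
         . chunk("</body></html>")
         . "0\r\n\r\n";   # zero-length chunk ends the message

print "Content-Type: text/html\r\n",
      "Transfer-Encoding: chunked\r\n\r\n",
      $body;
```

Because each piece carries its own length, the response can be streamed as it is generated while still letting the browser know exactly where the message ends, which is what preserves the persistent connection.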

    For more info, see:
