PerlMonks
HTTP::Response and LWP gzip decompression not very efficient

by bulk88 (Priest)
on Sep 02, 2010 at 12:37 UTC ( #858560 )
bulk88 has asked for the wisdom of the Perl Monks concerning the following question:

I was using nytprof to look at LWP spidering some pages. I noticed that the decode method on the HTTP::Response objects LWP returns was taking a ridiculous amount of time, so I thought about ways to fix it. We all know LWP is "blocking", but it does have a socket loop in it. So I decided to put "work" into the content handler callback, namely uncompressing the gzipped content. I THINK that will use the CPU during the time otherwise wasted blocking on the socket read, and just maybe something from the server will buffer in the OS while the ungzip function runs, so that ultimately an LWP GET plus decompression costs about the same as an LWP GET with no decompression.

I've made some examples to show this pretty bad inefficiency.

There are 2 Perl server scripts. One runs at full speed; the other is throttled to a theoretical 16000 kbit/s. They are otherwise identical. The scripts are simple HTTP::Daemon servers listening on loopback on port 60181 (which nobody is likely to be using). Each server script gets its test data from a ~100KB __DATA__ segment at the end of the script. The HTTP response was run through Data::Dumper to make a string that is hopefully immune to corruption (copy/paste, line endings, high/control ASCII, Unicode, etc.). The HTTP response headers the servers deliver are canned in the DATA segment. The gzipped page is ~100KB; uncompressed it is ~320KB. The page in the __DATA__ segment is a copy of news.google.com from today. It's a good sample page IMHO since it includes a lot of embedded data URI JPEG images, which are ultimately incompressible.
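The actual throttled server script is only on pastebin, so here is a minimal sketch (my own assumption of how it could work, not the original code) of pacing writes to ~16000 kbit/s: send fixed-size chunks and usleep between them for chunk_size / bytes_per_second. The send_throttled helper and the in-memory filehandle demo are hypothetical.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(usleep);

# 16000 kbit/s -> 2_000_000 bytes/s (assumed interpretation of the rate).
my $rate_bytes_per_s = 16_000 * 1000 / 8;
my $chunk_size       = 4096;
# Delay between chunks so the average rate hits the target: 4096/2e6 s = 2048 us.
my $delay_us         = int($chunk_size / $rate_bytes_per_s * 1_000_000);

# Hypothetical helper: write $data to $fh in paced chunks.
sub send_throttled {
    my ($fh, $data) = @_;
    for (my $off = 0; $off < length $data; $off += $chunk_size) {
        print {$fh} substr($data, $off, $chunk_size);
        usleep($delay_us);    # pace writes to the target bitrate
    }
}

# Demonstrate against an in-memory filehandle instead of a client socket.
my $buf = '';
open my $fh, '>', \$buf or die $!;
send_throttled($fh, 'x' x 10_000);
close $fh;
print length($buf), "\n";    # all 10000 bytes arrive, just slower
```

In a real HTTP::Daemon server the filehandle would be the client connection returned by accept().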

There is 1 LWP test script. It runs 2 loops of 21 cycles each (for(0..20)). One loop uses the "proper" way of doing things with the decode method; the other uses my callback, which block-feeds data into Compress::Raw::Zlib. It can be run against either server script.
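The block-feeding idea can be shown in isolation with a round-trip sketch: gzip a string in memory, then feed it to a Compress::Raw::Zlib::Inflate stream in small chunks, the way an LWP :content_cb handler receives the body. This is my own standalone illustration, not the test script itself; the chunk size is arbitrary.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Compress::Gzip qw(gzip $GzipError);
use Compress::Raw::Zlib;    # exports WANT_GZIP, Z_OK, Z_STREAM_END

# Some gzipped data in memory, standing in for the HTTP response body.
my $original = "example payload\n" x 1000;
my $gzipped;
gzip(\$original => \$gzipped) or die "gzip failed: $GzipError";

# Streaming inflater; -WindowBits => WANT_GZIP makes it accept the gzip header.
my ($inflater, $status) =
    Compress::Raw::Zlib::Inflate->new(-WindowBits => WANT_GZIP);
die "inflater init failed" unless $status == Z_OK;

# Feed the compressed stream in chunks, as a :content_cb callback would.
my $decoded = '';
for (my $off = 0; $off < length $gzipped; $off += 4096) {
    my $chunk = substr($gzipped, $off, 4096);
    my $out;
    $status = $inflater->inflate($chunk, $out);
    die "inflate error: $status"
        unless $status == Z_OK || $status == Z_STREAM_END;
    $decoded .= $out;
}
die "round-trip mismatch" unless $decoded eq $original;
print "OK\n";
```

Each inflate call returns whatever output is ready so far, so decompression overlaps with whatever else is happening between chunks, which is the whole point of doing it in the callback.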

Here is the command output of the lwp test against the full speed server. ~3.8x faster. Yikes.
C:\Documents and Settings\Owner\Desktop>perl -d:nytprof lwpcomptest.pl
Using HTTP::Response decode() 1.390625
Using callback with block feeding to Compress::Raw::Zlib 0.367088079452515
C:\Documents and Settings\Owner\Desktop>
Here is the output of running the lwp test against the throttled server. 15% faster. That's not margin-of-error noise.
C:\Documents and Settings\Owner\Desktop>perl -d:nytprof lwpcomptest.pl
Using HTTP::Response decode() 14.703125
Using callback with block feeding to Compress::Raw::Zlib 12.5601830482483
C:\Documents and Settings\Owner\Desktop>nytprofhtml
Here is the lwp test script that is run against the servers.
#!/usr/bin/perl -w
#use strict;
use LWP::UserAgent;
use Compress::Raw::Zlib;
use Time::HiRes qw( usleep time );

sub callback {
    my $output;
    $z->inflate($_[0], $output);
    $_[1]->add_content($output);
}

$ua = LWP::UserAgent->new;
$t = time;
for(0..20){
    $rs = $ua->get('http://127.0.0.1:60181', 'Accept-Encoding' => 'gzip');
    $rs->decode();
}
print 'Using HTTP::Response decode() '.(time-$t)."\n";

$z = new Compress::Raw::Zlib::Inflate( -WindowBits => WANT_GZIP, -ConsumeInput => 0 );
$t = time;
for(0..20) {
    $rs = $ua->get('http://127.0.0.1:60181', 'Accept-Encoding' => 'gzip',
                   ':content_cb' => \&callback);
}
print 'Using callback with block feeding to Compress::Raw::Zlib '.(time-$t);
In nytprof, against the full speed server
Line     Time on line  Calls 	 	      Code
   Statements              Time in subs
8					
# spent 15.6ms (0s+15.6ms) within main::callback which was called 532 times, avg 29µs/call: 
# 532 times (0s+15.6ms) by LWP::Protocol::__ANON__C:/Perl/lib/LWP/Protocol.pm:125 at line 123 of LWP/Protocol.pm, avg 29µs/call
					sub callback {
9	532	0s			my $output;
10	532	0s	532	0s	$z->inflate($_[0], $output) ;
# spent 0s making 532 calls to Compress::Raw::Zlib::inflateStream::inflate, avg 0s/call
11	532	0s	532	15.6ms	$_[1]->add_content($output);
# spent 15.6ms making 532 calls to HTTP::Message::add_content, avg 29µs/call
12					}
........................................................
15	1	0s			for(0..20){
16	21	0s	21	250ms	$rs = $ua->get('http://127.0.0.1:60181', 'Accept-Encoding' => 'gzip');
# spent 250ms making 21 calls to LWP::UserAgent::get, avg 11.9ms/call
17	21	0s	21	422ms	$rs->decode();
# spent 422ms making 21 calls to HTTP::Message::decode, avg 20.1ms/call
18					}
..........................................
22	1	0s			for(0..20)
23					{
24	21	0s	21	156ms	$rs = $ua->get('http://127.0.0.1:60181', 'Accept-Encoding' => 'gzip', ':content_cb' => \&callback);
# spent 156ms making 21 calls to LWP::UserAgent::get, avg 7.44ms/call
25					}
In nytprof, against the throttled server. Why did decode() take LONGER to do the exact same work? Same page, same data, same server, same number of cycles. Maybe I'm not seeing something about how LWP works under the hood. Maybe the UserAgent is still downloading data?
Line     Time on line  Calls 	 	      Code
   Statements              Time in subs
8					
# spent 62.5ms (0s+62.5ms) within main::callback which was called 2099 times, avg 30µs/call: 
# 2099 times (0s+62.5ms) by LWP::Protocol::__ANON__C:/Perl/lib/LWP/Protocol.pm:125 at line 123 of LWP/Protocol.pm, avg 30µs/call
					sub callback {
9	2099	0s			my $output;
10	2099	0s	2099	0s	$z->inflate($_[0], $output) ;
# spent 0s making 2099 calls to Compress::Raw::Zlib::inflateStream::inflate, avg 0s/call
11	2099	0s	2099	62.5ms	$_[1]->add_content($output);
# spent 62.5ms making 2099 calls to HTTP::Message::add_content, avg 30µs/call
12					}
............................................
15	1	0s			for(0..20){
16	21	0s	21	11.9s	$rs = $ua->get('http://127.0.0.1:60181', 'Accept-Encoding' => 'gzip');
# spent 11.9s making 21 calls to LWP::UserAgent::get, avg 568ms/call
17	21	0s	21	1.28s	$rs->decode();
# spent 1.28s making 21 calls to HTTP::Message::decode, avg 61.0ms/call
18					}
......................................................
22	1	0s			for(0..20)
23					{
24	21	0s	21	11.6s	$rs = $ua->get('http://127.0.0.1:60181', 'Accept-Encoding' => 'gzip', ':content_cb' => \&callback);
# spent 11.6s making 21 calls to LWP::UserAgent::get, avg 554ms/call
25					}
I also don't get how, WITH my callback, the UserAgent get method takes a SHORTER time than WITHOUT my callback. I added code and EXTRA work (decompression) and it got faster? How on earth? Shouldn't it be LWP GET no CB < LWP GET CB < LWP GET no CB + decode?

Since I'm new to PerlMonks, I'm not sure if and how I should post the full HTML output of nytprof, or just the nytprof.out files, or whether I shouldn't post them at all since they might contain confidential system info (passwords, etc.) from Config.pm or something similar, and you can all regenerate your own profiler data anyway.

Here is the full speed server script.

http://pastebin.com/73yWEFd8

Here is the throttled server script.

http://pastebin.com/6yRwsieZ

Node Type: perlquestion [id://858560]
Approved by hok_si_la