http://www.perlmonks.org?node_id=850366


in reply to web performance 2010

So, dear monks, what tricks, tips, or links do you use to squeeze the most out of your perl web apps ?

Profiling, to find bottlenecks. And caching, caching, caching. Did I mention memcached?
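For readers who haven't used it: a read-through cache with memcached is only a few lines of Perl. This is a minimal sketch, assuming Cache::Memcached is installed and a memcached daemon is listening on 127.0.0.1:11211; the key scheme and expensive_query() are invented for illustration.

```perl
use strict;
use warnings;
use Cache::Memcached;

my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

sub expensive_query {            # stand-in for a slow DB call or computation
    my ($id) = @_;
    return "result for $id";
}

sub cached_query {
    my ($id)  = @_;
    my $key   = "query:$id";
    my $value = $memd->get($key);         # cache hit?
    unless (defined $value) {
        $value = expensive_query($id);    # cache miss: compute ...
        $memd->set($key, $value, 300);    # ... and store for 5 minutes
    }
    return $value;
}
```

The expiry time (300 seconds here) is the knob to tune per data set: too short and you keep recomputing, too long and you serve stale data.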

Perl 6 - links to (nearly) everything that is Perl 6.

Replies are listed 'Best First'.
Re^2: web performance 2010
by afoken (Chancellor) on Jul 20, 2010 at 14:09 UTC
    Profiling, to find bottlenecks.

    I can only second that. About three weeks ago, I got a factor 30 (3000%!) speedup after a little benchmarking with Devel::NYTProf and nytprofhtml. One of the first actions, after removing some of my really stupid code(1), was to get rid of XML::Twig and use XML::LibXML instead.
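    For anyone who hasn't tried it, the basic NYTProf workflow is just two commands (the script name below is a placeholder):

```shell
perl -d:NYTProf yourscript.pl   # run under the profiler; writes ./nytprof.out
nytprofhtml --open              # convert nytprof.out to HTML and open the report
```

    The HTML report shows per-line and per-subroutine timings, which is what makes "fix, re-profile, repeat" practical.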

    Alexander

    (1) Don't code when your brain is in power save mode: Traversing a large tree to find the root node at least once for every node (900.000 times per script run) doesn't make your code faster. Using the root node that is already stored in $self does. ;-)
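    In case the footnote is too terse, here is a contrived sketch of the difference; the Node class and root_of() are invented for illustration, not my actual code.

```perl
use strict;
use warnings;

package Node;
sub new    { my ($class, $parent) = @_; bless { parent => $parent }, $class }
sub parent { $_[0]{parent} }

package main;

# Anti-pattern: climb up to the root again for every node -- O(depth) per call
sub root_of {
    my ($node) = @_;
    $node = $node->parent while $node->parent;
    return $node;
}

my $root = Node->new(undef);
my $leaf = $root;
$leaf = Node->new($leaf) for 1 .. 1000;    # a deep chain of nodes

# Fix: the root is already known, so store it once (e.g. in $self) and reuse it
my $self = { root => $root };              # O(1) lookup from here on
print +($self->{root} == root_of($leaf) ? "same root\n" : "bug\n");
```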

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

      Yes, replacing incorrect use of one module with correct use of another can do wonders for the speed of your scripts, irrespective of the relative efficiency of the modules in question.

      Jenda
      Enoch was right!
      Enjoy the last years of Rome.

        It seems you have misunderstood my posting; perhaps it was a little bit short on information.

        I did not run NYTProf once or twice to see that factor 30. I ran NYTProf after every little change, to see whether that change accelerated my script or not, and to find the next hot spot in the code. Some changes had little or no effect, some even slowed things down. Most changes gained only a few percent. The two big winners were the traversing code and XML::Twig.

        Removing the traversing code already accelerated my script. Some smaller steps followed, but then the next problem became quite obvious: XML::Twig burned a lot of time when creating new elements and attributes.

        I replaced XML::Twig with XML::LibXML instead of hacking XML::Twig simply because it was easier than creating a private branch of XML::Twig fine-tuned for my special case. It was a reversible test, because I had the latest version of the script using XML::Twig in SVN. If XML::LibXML had turned out to be as "slow" as XML::Twig, I could simply have issued a "svn revert" and continued by optimizing a private branch of XML::Twig. The replacement was mostly a simple search-and-replace operation for the different class and method names, but it gave an instant performance boost of several hundred percent. So I committed that version and never thought about going back to XML::Twig. The one thing I lost was the really useful "csv" indent style available in XML::Twig output, but I added that back later with some Perl code.

        After replacing XML::Twig with XML::LibXML, I found and reworked some other routines that wasted time, but compared to the tree traversing code and XML::Twig, they were nearly insignificant. Now the script is sufficiently fast and spends most of its time in code that I cannot optimize further (Perl opcodes and libxml XSUBs).
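        To give an idea of what the search-and-replace amounted to, creating elements and attributes with XML::LibXML looks roughly like this (a minimal sketch, assuming XML::LibXML is installed; the element names are invented):

```perl
use strict;
use warnings;
use XML::LibXML;

my $doc  = XML::LibXML::Document->new('1.0', 'UTF-8');
my $root = $doc->createElement('items');
$doc->setDocumentElement($root);

my $item = $doc->createElement('item');
$item->setAttribute(id => 1);                 # attributes via setAttribute
$item->appendTextChild(name => 'example');    # shorthand for <name>example</name>
$root->appendChild($item);

print $doc->toString(1);    # 1 = indented output
```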

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re^2: web performance 2010
by cutlass2006 (Pilgrim) on Jul 20, 2010 at 07:32 UTC

    Any experience with using the Memoize module? I have always been leery of using this module in production (not because of its quality); I would prefer to 'hand-roll' caching.

      Hand-rolled caching can't get you much better than Memoize, and using Memoize is a one-line change to your code - so there is very little cost to trying it, supposing that you have a benchmark/profile to target.
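      For the record, the one-line change looks like this; Memoize ships with the Perl core, and the naive fib() is just a toy example of an expensive pure function:

```perl
use strict;
use warnings;
use Memoize;

sub fib {
    my ($n) = @_;
    return $n < 2 ? $n : fib($n - 1) + fib($n - 2);
}

memoize('fib');    # the one-line change: wrap fib() in a transparent cache

print fib(30), "\n";    # prints 832040, almost instantly once memoized
```

      Memoize replaces the symbol-table entry, so even the recursive calls inside fib() hit the cache - which is exactly why the exponential blowup disappears.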