
Re: global source filters

by Beatnik (Parson)
on Jun 22, 2003 at 20:52 UTC

in reply to global source filters

Using source filters to speed things up is not a good idea. Source filters are slow by definition. You CAN use them just fine for the kind of thing you're doing... you could also just overload operators, use ties, or use a plain debugger ;)
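To illustrate the "overload operators" alternative mentioned above: overloading lets you intercept operations (say, to log or count them) without rewriting source text. A minimal sketch with a hypothetical Tracked class (not from the original post); `overload` is core Perl:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical class that intercepts '+' -- a debugging wrapper could
# log or count each addition here instead of filtering the source.
package Tracked;
use overload
    '+'  => \&add,
    '""' => sub { $_[0]->{v} };

sub new { my ($class, $v) = @_; bless { v => $v }, $class }

sub add {
    my ($self, $other) = @_;
    my $rhs = ref $other ? $other->{v} : $other;
    # a real wrapper might emit a trace message here
    return Tracked->new( $self->{v} + $rhs );
}

package main;
my $sum = Tracked->new(2) + 3;
print "$sum\n";   # 5
```

Note that every overloaded operation goes through a full sub call, which is why (as tilly points out below in effect) overload and tie are flexibility tools, not performance tools.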

Just my € 0.02

... I'm belgian but I don't play one on TV.

Replies are listed 'Best First'.
Re: Re: global source filters
by tilly (Archbishop) on Jun 23, 2003 at 06:41 UTC
    Recommending overload and tie in Perl for performance reasons is...a novel concept.
Re: Re: global source filters
by steves (Curate) on Jun 23, 2003 at 01:23 UTC

    I wondered about this. My theory was this: source filters would slow the compilation phase, but that should be offset by greater run-time savings. The tools in question are mostly large batch data manipulation packages. Some read hundreds of thousands or millions of rows from databases and other sources and then feed them to a set of filters and outputs. We've built a nice framework around this. The tradeoff is that we can plug in a new data transform very quickly, but run-time performance is sometimes poor. When I profile the code, there are several areas where I see potential savings -- nothing native to base Perl -- it's all layers we built on top to make coding easier/faster. Among those areas are interfaces for debug and verbose messages, and a few other assert-type checkers.

    But here's the thing: when I profile Carp::Assert I seem to get mixed results. I see the assert calls disappear from the profile when I set PERL_NDEBUG, but run-time is sometimes about even or a bit slower. I'm not sure if this is due to using too small a sample set; I've found that profiling the million-row extracts takes too long, so I usually start with 1,000 or 10,000 rows. I've considered what you said, but the source for Carp::Assert looks like it's just replacing the functions with no-ops -- not filtering. That should be relatively fast, so I haven't figured this one out yet.

    But that leads to a meta-question: what is the best way to do this? Maybe filtering isn't the right way ... The basic problem is that you have many existing lines of code with known patterns you'd like to remove from the code for performance. Also, unlike Carp::Assert, some of these are class/object methods inherited from a base class. Not sure if that matters for a no-op replacement approach, which I'm considering now ... Carp::Assert appears to do it with constant functions, which would then be inlined. Need to examine that code some more ...
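The inlining in question can be seen with a plain constant function: a sub with an empty prototype returning a constant value is inlined by perl, so a statement guarded by it is folded away at compile time. A small sketch of the idea (this is the general mechanism, not Carp::Assert's actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# An empty prototype plus a constant return value makes DEBUG an
# inlinable constant; "if DEBUG" with DEBUG == 0 lets perl discard
# the whole guarded statement at compile time.
sub DEBUG () { 0 }

my $ran = 0;
sub costly_check { $ran = 1; return "expensive diagnostics" }

warn costly_check() if DEBUG;   # never compiled into the op tree

print $ran ? "costly_check ran\n" : "costly_check never ran\n";
```

You can confirm the folding with `perl -MO=Deparse` on a script like this: the guarded `warn` line simply disappears from the deparsed output when DEBUG is 0.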

    Update: Here's what Carp::Assert does. Its import method looks at the environment. If one of the NDEBUG indicators is set, it exports a DEBUG constant function that evaluates to zero and sets the debugging methods for the calling package to no-ops. If debugging is not disabled, it exports a DEBUG that evaluates to one and sets the debugging methods to the real thing. So no code is removed. Instead, it relies on that DEBUG constant to stop any call or argument evaluation from being performed. That's why it relies on the if DEBUG guard: just no-op'ing the methods would mean the arguments would still get evaluated.
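The distinction described above is easy to demonstrate: a no-op'd sub still pays for evaluating its argument list, while the constant guard removes even that. A sketch with hypothetical names (assert_noop and build_message are illustrations, not Carp::Assert's internals):

```perl
#!/usr/bin/perl
use strict;
use warnings;

use constant DEBUG => 0;

my $evals = 0;
sub build_message { $evals++; return join ' ', @_ }

sub assert_noop { }   # the "replaced with a no-op" version

# The no-op itself is cheap, but its argument list is still evaluated:
assert_noop( build_message('unguarded', 1) );

# The constant guard stops even the argument evaluation -- perl folds
# the whole statement out at compile time because DEBUG is 0:
assert_noop( build_message('guarded', 2) ) if DEBUG;

print "arguments evaluated: $evals\n";   # 1, not 2
```

This is why removing the `if DEBUG` guards from existing code would cost you the savings even after no-op'ing the methods themselves.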
