http://www.perlmonks.org?node_id=923026


in reply to Re: The Eternal "filter.pl"
in thread The Eternal "filter.pl"

There was a time when I would have set out to do exactly that. But the problem I find with that approach is that it very quickly races past the point of diminishing returns, eventually becoming like J2EE, where you have a two-line program and 50k of 'configuration'. But it does force me to think about what I do and don't want to abstract away.

I definitely want to abstract the simple "parse to records" logic into something that I can iterate across, because those semantics are already there and clean in Perl.
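Roughly, I'm picturing something like this (just a sketch: the filename, the tab-separated layout, and the field names are stand-ins for whatever the real data turns out to be):

use strict;
use warnings;

# A minimal sketch of the "parse to records" abstraction: an iterator
# that hands back one parsed record per call.  The tab-separated layout
# and the field names (id, name, value) are placeholders.
sub make_record_iterator {
    my ($filename) = @_;
    open my $fh, '<', $filename or die "Can't open $filename: $!";
    return sub {
        my $line = <$fh>;
        return unless defined $line;    # undef signals end of input
        chomp $line;
        my %record;
        @record{qw(id name value)} = split /\t/, $line;
        return \%record;
    };
}

# Then the driver just iterates, without caring how records were parsed:
my $next_record = make_record_iterator('left.txt');
while (my $rec = $next_record->()) {
    # ... hand $rec to whatever filter applies
}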

But the iteration driver itself can be tricky and I'm not sure it shouldn't just have a few different permutations:

Nah, see that's already messy as hell. Well ok. Maybe there's a reasonable way to make that work; or perhaps it's just not as bad as the description. (Actually I suspect that they're all degenerate cases of the same construct.) I'll have to see once I get everything else out of the way.

Having thought about this for some time now, I'm pretty sure I want the actual "filtering function" to be a plain Perl function that takes a pair of records. Perl actually does code well, and there's no reason for me to abstract everything SO far that I end up having to write a programming language.
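So the filter itself could be as dumb as this (a sketch; the fields are invented for the example):

# A plain named sub that takes two record hashrefs and returns true
# if the pair is "interesting".  The fields used here (id, amount)
# are made up for illustration.
sub filter_pair {
    my ($left, $right) = @_;
    return $left->{id} eq $right->{id}
        && $left->{amount} != $right->{amount};
}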

Because of that, I'm even fighting with the idea of whether or not a record-parsing construct should provide column/datatype information. What good would it REALLY do? The filter function is likely to be very specific to the task at hand, so documentation (in the form of well-named variables, etc.) can very effectively live there, and operating on the data would require that I re-interpret it anyway. That also sounds a lot like writing a programming language (which I'm sure is fun, but which I haven't come up with a good reason to do yet).

I think I may have thought my way as far as I'm going to think without writing more code.

Me

Re: Thinking out loud (was: Re^2: The Eternal "filter.pl")
by RichardK (Parson) on Aug 30, 2011 at 11:18 UTC

    Hi. I'm glad to see it's not just me that's been thinking along these lines :)

    Not that I've come to any real conclusions, but I think it's all just set theory, and the way to describe it may be in those terms, i.e. sets, unions, intersections and complements.

    If all the records fit in memory in a hash, it's reasonably easy to describe a set that passes some test function:

    my @set_1 = grep { func($_) } keys %records_1;

    then you can describe the relationship you're looking for in those terms.

    So you might end up with something like this:-

    set1 = set of all records_1 that pass func()
    set2 = set of all records_2 that fail func()
    results = intersection of set1 and set2
    set3 = ... etc
    and so you can describe any arbitrary combination of sets.

    We will need some support functions, but that's 'just a simple matter of programming' ;)
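    Something along these lines, maybe (a rough sketch, treating a "set" as a reference to a list of record keys; the names are only illustrative):

    sub set_intersection {
        my ($set1, $set2) = @_;
        my %in_1 = map { $_ => 1 } @$set1;
        return grep { $in_1{$_} } @$set2;
    }

    sub set_union {
        my ($set1, $set2) = @_;
        my %seen;
        return grep { !$seen{$_}++ } @$set1, @$set2;
    }

    sub set_complement {    # keys in $set1 that are not in $set2
        my ($set1, $set2) = @_;
        my %in_2 = map { $_ => 1 } @$set2;
        return grep { !$in_2{$_} } @$set1;
    }

    # e.g.  my @results = set_intersection(\@set_1, \@set_2);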

    The main problem, as I see it, is how to deal with data sets that are too big to fit into memory. My only thought is to keep a hash of (key, file offset) pairs and re-parse each record on demand. Or maybe Tie::File could do the job?
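    Something like this, perhaps (a sketch that assumes one record per line, with the key in the first tab-separated field):

    # Remember each key's byte offset, then seek back and re-parse the
    # record only when it's asked for.
    open my $fh, '<', 'records.txt' or die "Can't open records.txt: $!";
    my %offset_for;
    while (1) {
        my $pos  = tell $fh;
        my $line = <$fh>;
        last unless defined $line;
        my ($key) = split /\t/, $line, 2;
        $offset_for{$key} = $pos;
    }

    sub fetch_record {
        my ($key) = @_;
        seek $fh, $offset_for{$key}, 0 or die "seek failed: $!";
        my $line = <$fh>;
        chomp $line;
        return [ split /\t/, $line ];
    }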

    Arrgh! - you're making me want to try coding this again :)

    R.

      Well it won't fit in memory, but that's neither here nor there really. Plus I don't need separate pre-qualifiers for source record sets, which is nice.

      Hmm... closing in on an idea. Gonna go code some tests. I'll transmogrify this into a CuFP yet!

      Me