Re^6: Serializing a large object

by daverave (Scribe)
on Sep 25, 2010 at 21:47 UTC ( #862010 )


in reply to Re^5: Serializing a large object
in thread Serializing a large object

You're right, that's a bug. I think replacing the grep with one that distinguishes between the two cases solves it. Here is the updated package:

use strict;
use warnings;

package FastRanges;

sub new($$$) {
    my $class      = shift;
    my $max_length = shift;
    my $ranges_a   = shift;

    my @lookup;
    for ( @{$ranges_a} ) {
        my ( $start, $end ) = @$_;

        # a simple range covers $start .. $end; an inverted range wraps
        # around past $max_length and back to the beginning
        my @idx = $end >= $start
            ? $start .. $end
            : ( $start .. $max_length, 1 .. $end );

        for my $i (@idx) { $lookup[$i] .= pack 'L', $end }
    }

    bless \@lookup, $class;
}

sub num_ranges_containing($$$) {
    my $self = shift;
    my ( $start, $end ) = @_;    # query range coordinates

    # no ranges overlap the start position of the query
    return 0 unless defined $self->[$start];

    if ( $end >= $start ) {
        # query range is simple:
        # any inverted range in $self->[$start] must contain it,
        # and so does any simple range which ends at or after $end
        return 0 + grep { $_ < $start or $end <= $_ }
            unpack 'L*', $self->[$start];
    }
    else {
        # query range is inverted:
        # only inverted ranges in $self->[$start] which also end
        # at or after $end contain it; simple ranges can't contain
        # the query range
        return 0 + grep { $_ < $start and $end <= $_ }
            unpack 'L*', $self->[$start];
    }
}

1;
Perhaps this could be done a bit more efficiently; I'm not sure.
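
For what it's worth, here is a small usage sketch. The max length, ranges and queries below are made up purely to show the calling convention, and it assumes the package above has been saved as FastRanges.pm (or pasted above the calling code):

use strict;
use warnings;
use FastRanges;

# Axis positions run 1 .. 100; the range [90, 10] is "inverted",
# i.e. it wraps past the end back to the beginning.
my $ranges = FastRanges->new( 100, [ [ 5, 20 ], [ 10, 30 ], [ 90, 10 ] ] );

# How many stored ranges fully contain the simple query range 10 .. 15?
print $ranges->num_ranges_containing( 10, 15 ), "\n";   # 2  ([5,20] and [10,30])

# And the inverted query range 95 .. 5 (wrapping past 100)?
print $ranges->num_ranges_containing( 95, 5 ), "\n";    # 1  ([90,10])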

Thanks for noting the bug, BrowserUK! I'm sure you saved me a lot of future frustration.

And if you think it now works and is pretty much optimized, I'd be happy to get some ideas for the original question.


Re^7: Serializing a large object
by BrowserUk (Pope) on Sep 27, 2010 at 16:58 UTC

    I've spent a couple of days thinking about this and playing with alternative solutions, but nothing comes close to being as fast as what you're doing now.

    The problem, of course, is that what you're doing now trades truly huge amounts of space for its speed. And so we come back to the problem of how to either:

    • Save and load the huge lookup table quickly.
    • Or, generate it from the more compact representation more quickly.

    And given the huge limits you are working with--30,000 ranges over 0 .. 15,000,000--I don't see a solution. The presence of "inverted" ranges, in concert with the 15e6 overall range, defies all my attempts at a more compact representation. And I don't believe you'll beat Storable easily either.

    In short, I have nothing to offer on your original question beyond one possibility.

    There is a data structure, QuadTree, that might come close to achieving the performance of your current solution but use far less memory. The problem is that writing such a solution in terms of Perl's existing data structures, to get a pure Perl solution, isn't effective.

    You would need to obtain a C library and write a wrapper for it. And there is no guarantee that the solution would perform as well.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Thank you, BrowserUk. I greatly appreciate your kind attention.

      I guess I will stick with the original, space-consuming implementation. Does it make sense to compress the store files? If so, can you think of any particular module/method that is most appropriate for compressing this kind of data? (I usually use IO::Compress::Gzip, but I know there are other options too.)

      Thanks again for your help,
      Dave.

        Does it make sense to compress the store files?

        Yes & no. :(

        • Yes.

          I generated a random set of 3,000 overlaps--positive & negative--with a maximum range of 10,000.

          The nstore'd file on disk was: 26/09/2010  15:26        60,783,878 fred.bin.

          gzipping that resulted in:     26/09/2010  15:26           423,984 fred.bin.gz.

          It'll certainly save you large amounts of disk space. But that's not your aim.

        • No.

          The problem is that whilst you save time reading from disk, you spend time decompressing. And in the end, much of the time spent retrieve()ing the data is the time required to allocate the memory and reconstruct it.

        It would certainly be worth investigating the idea with your real-world datasets; and it will absolutely save huge amounts of disk space. But whether it will actually load faster will depend on many factors; you'll have to try for yourself with real data.
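
        To make that workflow concrete, here is a minimal sketch of one way to combine Storable with IO::Compress::Gzip. The tiny input data and the filename fred.bin.gz are just examples, not anything from the thread's actual code:

        use strict;
        use warnings;
        use Storable qw(nfreeze thaw);
        use IO::Compress::Gzip     qw(gzip   $GzipError);
        use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
        use FastRanges;    # the package from earlier in the thread

        # Tiny stand-in data; the real object is of course much larger.
        my $fast_ranges = FastRanges->new( 100, [ [ 5, 20 ], [ 90, 10 ] ] );

        # Serialize in memory, then compress the frozen image to disk.
        my $frozen = nfreeze($fast_ranges);
        gzip \$frozen => 'fred.bin.gz'
            or die "gzip failed: $GzipError";

        # Later: decompress into memory, then rebuild the blessed object.
        my $image;
        gunzip 'fred.bin.gz' => \$image
            or die "gunzip failed: $GunzipError";
        my $restored = thaw($image);

        print $restored->num_ranges_containing( 10, 15 ), "\n";    # 1

        The gzip step is where the disk-space win comes from; the thaw/reconstruction step is still where most of the load time goes, which is the "No" half above.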


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
