http://www.perlmonks.org?node_id=440815


in reply to Re^3: Challenge: Another Infinite Lazy List
in thread Challenge: Another Infinite Lazy List

You're right. I misread your original post. What threw me off was the mention of streams. In the solution you present (which is indeed entirely different from merging lists), using lazy lists seems forced to me, since their properties don't get you anything that you couldn't get very easily and directly with grep and the '..' operator:

use strict;
# use Stream;
use Math::Pari qw(gcd);

# We can test all the factors at once using the greatest common divisor algorithm.
my @factors = @ARGV;
my $test_number = 1;
$test_number *= $_ for @factors;
die "Must provide a list of non-zero factors\n" unless $test_number > 1;

sub has_common_factor { gcd($_[0], $test_number) != 1 }

# my $integers = Stream::tabulate( sub { $_[0] }, 1);
# my $challenge = $integers->filter(\&has_common_factor);
# $challenge->show(50);

print "@{[grep has_common_factor($_), 1..50]}\n";
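For example (assuming the script above is saved as challenge.pl, a name I'm making up here), running it with the factors 2 and 3 should print the integers from 1 to 50 that share a factor with 6, i.e. those divisible by 2 or 3:

$ perl challenge.pl 2 3
2 3 4 6 8 9 10 12 14 15 16 18 20 21 22 24 26 27 28 30 32 33 34 36 38 39 40 42 44 45 46 48 50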
Thanks for pointing out the Stream module. It does some very nice things that I think I should incorporate into my own stab at implementing lazy lists. It doesn't use memoization, however, which I think is absolutely essential to a workable implementation of lazy lists, at least in Perl 5. What's most curious about this is that Dominus is the author of the Memoize module! Then again, the Stream module is not available from CPAN. I suspect that both of these facts stem from the realization that this sort of object requires a little more syntactic support from Perl to be worth doing.


Update: Oh, yes. There's memoization in Stream.pm, all right; it's just done very discreetly (and elegantly, IMO):

sub tail {
    my $t = $_[0]{t};
    if (ref $t eq CODE) { # It is a promise
        $_[0]{t} = &$t;   # <--- there!
    }
    $_[0]{t};
}
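To make the trick concrete, here is a stripped-down sketch of a lazy list that does the same thing: the tail starts out as a promise (a code ref), and the first call to tail forces it and caches the result, so it is never recomputed. (This is just an illustration of the pattern, not Stream.pm and not my merged-lists code.)

use strict;
use warnings;

# A node holds a head value and either a computed tail or a promise for one.
sub node {
    my ($head, $tail_promise) = @_;
    return { h => $head, t => $tail_promise };
}

sub head { $_[0]{h} }

sub tail {
    my $node = shift;
    my $t = $node->{t};
    if (ref $t eq 'CODE') {      # still a promise
        $node->{t} = $t->();     # force it once; the cached value is reused from now on
    }
    return $node->{t};
}

# An infinite stream of integers starting at $n.
sub integers_from {
    my $n = shift;
    return node($n, sub { integers_from($n + 1) });
}

# Print the first ten integers: 1 2 3 4 5 6 7 8 9 10
my $s = integers_from(1);
for (1 .. 10) {
    print head($s), " ";
    $s = tail($s);
}
print "\n";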


BTW, the execution time of my merged-lists solution is not growing anywhere near as fast as the formula you gave predicts (not that it matters a whole lot, but I give the stats below in case anyone's interested). Still, the gcd-based solution is noticeably faster; whether that is because most of its heavy lifting (the gcd calculation) is done by a C routine, or because of something more fundamental, is not yet clear to me.
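One rough way to get a feel for how much of the difference is due to the C-level gcd would be to benchmark Math::Pari's gcd against a pure-Perl Euclidean gcd on the same filtering job. Something like the sketch below (my own quick illustration, not the harness I used for the stats that follow):

use strict;
use warnings;
use Benchmark qw(cmpthese);
use Math::Pari qw(gcd);

# Pure-Perl Euclidean gcd, for comparison with Math::Pari's C implementation.
sub perl_gcd {
    my ($x, $y) = @_;
    ($x, $y) = ($y, $x % $y) while $y;
    return $x;
}

my $test_number = 2 * 3 * 5 * 7 * 11 * 13;   # product of the first six primes

cmpthese(-2, {
    pari_gcd => sub { my @hits = grep { gcd($_, $test_number)      != 1 } 1 .. 2000 },
    perl_gcd => sub { my @hits = grep { perl_gcd($_, $test_number) != 1 } 1 .. 2000 },
});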

the lowliest monk

OK, here are the stats for the merged-lists solution. All the runs were performed with arguments consisting of a prefix of the list of primes from 2 to 37, with N=2000 throughout. The formula mentioned below is the one given by tall man here.

R:  number of primes in the input
c1: number of times merge was called
c2: number of times a < comparison was executed in merge
c3: time factor predicted by the formula
r1: c1/c3
r2: c2/c3

 R    c1     c2     c3     r1     r2
 2   2002   3002   1667   1.201  1.801
 3   3276   5095   4133   0.793  1.233
 4   4227   6821   7057   0.599  0.967
 5   4989   8203  10137   0.492  0.809
 6   5673   9534  13440   0.422  0.709
 7   6303  10744  16834   0.374  0.638
 8   6921  11966  20377   0.340  0.587
 9   7499  13098  23983   0.313  0.546
10   8015  14108  27602   0.290  0.511
11   8538  15149  31314   0.273  0.484
12   9025  16107  35040   0.258  0.460
R: number of primes in the input c1: number of times merge was called c2: number of times a < comparison was executed in merge c3: time factor predicted by formula r1: c1/c3 r2: c2/c3 R c1 c2 c3 r1 r2 2 2002 3002 1667 1.201 1.801 3 3276 5095 4133 0.793 1.233 4 4227 6821 7057 0.599 0.967 5 4989 8203 10137 0.492 0.809 6 5673 9534 13440 0.422 0.709 7 6303 10744 16834 0.374 0.638 8 6921 11966 20377 0.340 0.587 9 7499 13098 23983 0.313 0.546 10 8015 14108 27602 0.290 0.511 11 8538 15149 31314 0.273 0.484 12 9025 16107 35040 0.258 0.460