Re^4: Random data generation.

by BrowserUk (Patriarch)
on Jun 27, 2010 at 18:59 UTC [id://846799]


in reply to Re^3: Random data generation.
in thread Random data generation.

I wonder if you've overlooked my solution, or if there's anything wrong with it.

Indeed I did, as it was posted as a reply to someone else. And no, there is nothing wrong with it.

It beats out salva's (just), and is a simple, clean, and direct implementation of the requirements. I have my winner. Thank you.

Now I've seen it, I'm somewhat chagrined that I didn't see it for myself. It would still be my winner even if it were slightly slower than the flags & resets versions.

Yours is xA below:

[19:48:48.65] c:\test>junk -I=3 ABCDEF 12
        Rate    x0    x3    x9    x8    x5    x4    x6    x7    xA
x0    9794/s    --   -8%  -13%  -19%  -35%  -36%  -38%  -43%  -51%
x3   10606/s    8%    --   -6%  -12%  -30%  -31%  -33%  -39%  -46%
x9   11286/s   15%    6%    --   -6%  -25%  -26%  -29%  -35%  -43%
x8   12063/s   23%   14%    7%    --  -20%  -21%  -24%  -30%  -39%
x5   15122/s   54%   43%   34%   25%    --   -1%   -5%  -12%  -24%
x4   15340/s   57%   45%   36%   27%    1%    --   -3%  -11%  -23%
x6   15863/s   62%   50%   41%   32%    5%    3%    --   -8%  -20%
x7   17248/s   76%   63%   53%   43%   14%   12%    9%    --  -13%
xA   19822/s  102%   87%   76%   64%   31%   29%   25%   15%    --

[19:49:50.63] c:\test>junk -I=3 ABCDEF 120
        Rate    x0    x3    x4    x8    x9    x6    x5    x7    xA
x0     275/s    --  -20%  -21%  -80%  -80%  -86%  -86%  -91%  -92%
x3     341/s   24%    --   -2%  -75%  -75%  -82%  -83%  -89%  -90%
x4     349/s   27%    2%    --  -75%  -75%  -82%  -82%  -89%  -90%
x8    1383/s  404%  305%  296%    --   -0%  -29%  -30%  -57%  -59%
x9    1386/s  405%  306%  297%    0%    --  -29%  -30%  -57%  -59%
x6    1945/s  608%  470%  457%   41%   40%    --   -1%  -40%  -42%
x5    1971/s  618%  477%  465%   43%   42%    1%    --  -39%  -41%
x7    3216/s 1071%  842%  821%  133%  132%   65%   63%    --   -4%
xA    3346/s 1119%  880%  859%  142%  141%   72%   70%    4%    --

[19:50:43.49] c:\test>junk -I=3 ABCDEF 1200
        Rate     x4     x0     x3    x8    x9    x6    x5    x7    xA
x4    3.33/s     --    -0%    -8%  -98%  -98%  -98%  -98%  -99%  -99%
x0    3.33/s     0%     --    -8%  -98%  -98%  -98%  -98%  -99%  -99%
x3    3.61/s     8%     8%     --  -97%  -98%  -98%  -98%  -99%  -99%
x8     137/s  4019%  4019%  3700%    --   -8%  -25%  -27%  -61%  -63%
x9     149/s  4366%  4366%  4020%    8%    --  -19%  -21%  -58%  -60%
x6     183/s  5392%  5392%  4966%   33%   23%    --   -3%  -48%  -50%
x5     189/s  5578%  5578%  5138%   38%   27%    3%    --  -46%  -49%
x7     351/s 10434% 10434%  9618%  156%  136%   92%   86%    --   -5%
xA     368/s 10960% 10960% 10104%  169%  148%  101%   95%    5%    --

Re^5: Random data generation.
by salva (Canon) on Jun 28, 2010 at 10:16 UTC
    The length of the generated data is a parameter that does not modify the ranking, because the costs of all the given solutions are linear in it.

    If you change the set size or the maximum number of consecutive repetitions instead, then the ranking is not so stable. The benchmarks below show it.

    I have also added a new solution, salva2 (much faster on average), and some variations of almut's.

    The code:

    And the benchmarking results:

    Note that the salva2 algorithm has some bias. It is able to generate all the valid solutions but not all of them have the same probability.

      Actually, the number of reps allowed is the one parameter that doesn't vary for my scenario. Also, the length of the string will never be less than the size of the set.

      The approach you use in salva2 is very interesting, and it does come out first in every benchmark I've run. One of my own alternatives used a similar approach: generate a big string completely at random, then pick out a compliant N-char lump with a regex and return it. Your innovations of a) making the whole string compliant with a regex and b) retaining the buffer across calls combine to make the approach viable.

      The funny thing is that I'm already effectively using the latter part in my code (though not in the benchmarks): I need a bunch of short strings, but I'm calling the generator for one long string and then chopping it into bits using unpack "(A$N)*", gen( $N * 100, @set ). That negates a part of salva2's advantage, and also trades the cost of unpack against the cost of multiple calls to the generator.
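      A minimal sketch of that chop-into-bits step. gen() here is a hypothetical stand-in (not any of the benchmarked subs): a simple rejection generator over the set with at most two consecutive repeats.

```perl
# Sketch of the "one long string, then chop" approach described above.
# gen() is a hypothetical stand-in for the real generator.
my @set = ('A' .. 'F');

sub gen {
    my $n   = shift;
    my $out = '';
    while ( length $out < $n ) {
        my $c = $set[ rand @set ];
        # Reject $c if it would create a third consecutive repeat.
        $out .= $c unless $out =~ /\Q$c\E\Q$c\E\z/;
    }
    return $out;
}

my $N = 12;
# One call for a long string, then split it into $N-char pieces:
my @strings = unpack "(A$N)*", gen( $N * 100 );
```

      Every piece remains compliant, since any substring of a string with no triple repeats also has no triple repeats.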

      Now I have to consider whether your "generate a big string and regex it into compliance" approach would still be quicker in situ. It's not an easy thing to benchmark. Neither is considering the effect it has on the randomness of the result.

      It's quite amazing how many variations can result from an ostensibly simple spec. Many thanks for your input.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        Neither is considering the effect it has on the randomness of the result

        Well, actually it is!

        Given the $size of the set, the probability of salva2 generating a string starting with two equal characters is 1/($size+1) instead of the 1/$size of the other methods.

        The funny thing is that salva2 is unbiased; it is the other methods that are actually biased!!!

        The following code counts the number of repetitions at any given position in the generated strings:

        @set = (1..3);
        $max_reps = 2;
        $len = 10;

        my @acu = ([], []);
        my @gen = (\&gen_salva, \&gen_salva2);
        my ($total, $ds) = (0, 0);
        my $n = 10000000;
        for (1..$n) {
            for (0, 1) {
                my $out = $gen[$_]->();
                while ($out =~ /(.)\1/g) {
                    $acu[$_]->[pos($out) - 2]++;
                }
            }
        }
        use Data::Dumper;
        print Dumper \@acu;
        And this is the result:
        salva  => [ 3333044, 2222094, 2591285, 2469706, 2509595,
                    2496496, 2501039, 2501529, 2497584 ]
        salva2 => [ 2500639, 2498220, 2499044, 2501878, 2500880,
                    2500962, 2498685, 2499675, 2508077 ]

        So, with salva2 the probability of finding a repetition at any position is 1/($size+1), while with the rest of the methods the probability varies between the maximum 1/$size (at position 0) and the minimum ($size-1)/$size**2 (at position 1).
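        The stated probabilities can be cross-checked against the empirical counts above ($size = 3, 10_000_000 draws per method):

```perl
# Arithmetic check of the claimed repetition probabilities against the
# empirical counts from the Data::Dumper output above.
my ( $size, $n ) = ( 3, 10_000_000 );

my $p_max    = 1 / $size;                # other methods, position 0
my $p_min    = ($size - 1) / $size**2;   # other methods, position 1
my $p_salva2 = 1 / ($size + 1);          # salva2, every position

printf "pos 0:  expect ~%d (saw 3333044)\n", $p_max * $n;
printf "pos 1:  expect ~%d (saw 2222094)\n", $p_min * $n;
printf "salva2: expect ~%d (saw ~2.5e6)\n",  $p_salva2 * $n;
```

        The expected counts (~3333333, ~2222222 and 2500000) match the observed ones within sampling noise.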

      I think there might be a little bug in salva2. I copied the code from Re^5: Random data generation to try my own solution (middle of the pack, not worth posting), but while studying the code, I noticed a potential problem. my $buf is declared just before gen_salva2.

      my $buf;
      sub gen_salva2 {
          while (length $buf < $len) {
              $buf .= $set[rand @set] for (1..$len*100);
              $buf =~ s/(.)\1{$max_reps,}/$1 x $max_reps/ge;
          }
          substr($buf, -$len, $len, '');
      }

      So I put it inside sub gen_salva2{} and ran it again. I benchmarked only for $elements = 6, $max_reps = 2, to match BrowserUk's original parameters. That and the change to gen_salva2 are the only changes. Notice how the rankings change.

      elements: 6  max_reps: 2
                     Rate salva2  salva almut_mod3 almut_mod4 almut_mod2  almut
      salva2       1077/s     --   -98%       -98%       -98%       -98%   -98%
      salva      54086/s  4922%     --       -10%       -13%       -13%   -17%
      almut_mod3 59766/s  5450%    11%         --        -4%        -4%    -8%
      almut_mod4 61953/s  5653%    15%         4%         --        -1%    -5%
      almut_mod2 62270/s  5682%    15%         4%         1%         --    -5%
      almut      65222/s  5956%    21%         9%         5%         5%     --

      I thought gen_salva2 got its amazing speed boost by using an already populated $buf. After running the benchmark, I reduced the count to 100 and added a print line before creating $buf to test this theory.

      my $buf;
      sub gen_salva2 {
          print "$buf\n";
          while (length $buf < $len) {
              $buf .= $set[rand @set] for (1..$len*100);
              $buf =~ s/(.)\1{$max_reps,}/$1 x $max_reps/ge;
          }
          substr($buf, -$len, $len, '');
      }

      This printed a string 1164 chars long: 4235621656212414423622661645561656223662351124233115.... Each call shrinks the buffer by 12 characters, and the remainder is what gets printed; only when the buffer runs too short is a new one generated. The while() loop body is simply not executed very often -- about 1 call in 96 to 98 on average.
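      A back-of-envelope check of that refill rate, assuming a uniform random buffer (set size and $len taken from the post): a position is the third-or-later character of a run with probability 1/$k**2, and those are exactly the characters the s///ge squeeze deletes.

```perl
# Rough estimate of how long one refill of $buf lasts; numbers taken
# from the post (set size 6, $len = 12, $len*100 chars per refill).
my ( $k, $len ) = ( 6, 12 );
my $generated = $len * 100;                     # 1200 chars appended
my $kept      = $generated * ( 1 - 1/$k**2 );   # ~1167 survive the squeeze
my $calls     = int( $kept / $len );            # ~97 calls per refill
printf "kept ~%d chars => one refill lasts ~%d calls\n", $kept, $calls;
```

      That estimate (~1167 kept, ~97 calls) is close to both the 1164-char string printed and the observed 1-in-96-to-98 refill rate.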

      --marmot
        It is not a bug, gen_salva2 is designed to work that way!

        $buf is populated by gen_salva2, but instead of generating data for just one output string it generates data for 100 of them (a few less, actually). Subsequent calls to gen_salva2 do not need to fill $buf again, as it already contains data (until exhausted, when it is refilled).

        That's also the reason my previous comment said it was faster on average!
