Re^3: Faster regex to split a string into runs of similar characters?

by Eily (Monsignor)
on Nov 21, 2016 at 14:50 UTC


in reply to Re^2: Faster regex to split a string into runs of similar characters?
in thread Faster regex to split a string into runs of similar characters?

Well, if instead of the length of the strings you save the pos, getting the information back from the original string is straightforward (and O(1) if you only need one character). It removes the need to copy the substrings altogether, since you can access them directly in the input string. It does look like you still get a significant gain even when copying all the substrings into the array:

    cmpthese -1, {
        a_copy => q[
            @array = ();
            push @array, "$1" while $s =~ m[((.)\2*)]sg;
        ],
        a_pos => q[
            @array = ();
            push @array, pos() while $s =~ m[(.)\1*]sg;
        ],
        b_cow => q[
            # There might be a COW mechanism because of the call to substr
            @array = ();
            $s3 = " $s" ^ $s;
            push @array, substr($s, pos(), length $1) while $s3 =~ /(.\o{0}*)/gs
        ],
        b_copy => q[
            # Force copy, to avoid delayed penalty of COW
            @array = ();
            $s3 = " $s" ^ $s;
            push @array, "".substr($s, pos(), length $1) while $s3 =~ /(.\o{0}*)/gs
        ],
        b_pos => q[
            @array = ();
            $s3 = " $s" ^ $s;
            push @array, pos() while $s3 =~ /.\o{0}*/gs
        ],
    };
    __DATA__
             Rate a_copy  a_pos b_copy  b_cow  b_pos
    a_copy  383/s     --   -20%   -49%   -61%   -80%
    a_pos   478/s    25%     --   -36%   -51%   -75%
    b_copy  747/s    95%    56%     --   -23%   -60%
    b_cow   971/s   154%   103%    30%     --   -49%
    b_pos  1888/s   393%   295%   153%    94%     --
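
For illustration, here is a rough sketch (my own, not from the original post) of how the stored positions can be used afterwards: the end of each run is the saved pos(), the start is the previous one, and the text of a run is only copied out of $s if and when it is actually needed:

    # Sketch: recover runs lazily from the saved end positions (a_pos style).
    my @ends;
    push @ends, pos( $s ) while $s =~ m[(.)\1*]sg;

    my $start = 0;
    for my $end ( @ends ) {
        my $char = substr( $s, $start, 1 );              # O(1): first char of the run
        my $run  = substr( $s, $start, $end - $start );  # copy only when really needed
        printf "run of '%s' x %d at offset %d\n", $char, $end - $start, $start;
        $start = $end;
    }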


Replies are listed 'Best First'.
Re^4: Faster regex to split a string into runs of similar characters?
by BrowserUk (Patriarch) on Nov 21, 2016 at 23:09 UTC

    Sorry for the delay in getting back to you. I've adapted your technique somewhat:

        sub eily {
            my $str = shift;
            # One bounding box per possible byte value; min slots start at 1e99 and
            # max slots at 0 so that any real coordinate updates them on first sight.
            my @b = map [ ( 1e99, 0 ) x 2 ], 1 .. 256;
            for my $y ( 0 .. $HEIGHT-1 ) {
                my $s = \substr( $$str, $y * $WIDTH, $WIDTH );
                # Xor the row against itself shifted one byte: the result is non-zero
                # only where the character changes, so every run becomes a single
                # non-zero byte followed by \0s.
                my $t = ' ' . $$s ^ $$s;
                chop $t;
                while( $t =~ m[[^\0]\0*]g ) {
                    my $c = ord substr $$s, $-[0], 1; #, $-[0], $+[0];
                    $b[ $c ][ LEFT   ] = $-[0]     if $-[0]     < $b[ $c ][ LEFT   ];
                    $b[ $c ][ RIGHT  ] = $+[0] - 1 if $+[0] - 1 > $b[ $c ][ RIGHT  ];
                    $b[ $c ][ TOP    ] = $y        if $y        < $b[ $c ][ TOP    ];
                    $b[ $c ][ BOTTOM ] = $y        if $y        > $b[ $c ][ BOTTOM ];
                }
            }
            return \@b;
        }
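
    (LEFT, RIGHT, TOP and BOTTOM are constants defined elsewhere in the benchmark script; one definition consistent with the initialisation above -- my guess, not the original -- would be:)

        use constant { LEFT => 0, RIGHT => 1, TOP => 2, BOTTOM => 3 };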

    and the results are impressive:

    C:\test>\perl22\bin\perl 1176081.pl -WIDTH=10000 -HEIGHT=10000
    yr()   took 12.478694
    buk3() took  8.195635
    dave() took  1.876719
    eily   took  0.888511

    C:\test>\perl22\bin\perl 1176081.pl -WIDTH=20000 -HEIGHT=20000
    yr()   took 18.774189
    buk3() took 32.063918
    dave() took  5.701784
    eily   took  2.409520


      I'm not getting any speed gain with this method for a 'real' image. I'm not sure why to expect a gain if, instead of matching a sequence of non-zero bytes, we just try to match a non-zero byte followed by a sequence of zeroes. And there's additional work to do: xor-ing, chopping, etc.

      Plus, what if object "32" happens to be in the 1st column?

      $s = join '', map { chr } qw/ 32 0 1 1 0 0 2 /;
      $t = ' ' . $s ^ $s;
      chop $t;
      while ( $t =~ m[[^\0]\0*]g ) {
          printf "%d\t%d\t%d\n", ord( substr $s, $-[0], 1 ), $-[0], $+[0] - 1;
      }

      It looks like we have to prepend a zero byte instead (because, it seems, Eily's original solution was strictly for alphabetical strings). But then the speed drops below that of "buk3": the loop spends a lot of time finding the bbox of the background. Inserting "next unless $c" (see the sketch below) leads, again, to the same performance as "buk3".
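
      (For illustration, roughly where that guard sits in the adapted loop above -- my sketch, not code from the thread:)

          while ( $t =~ m[[^\0]\0*]g ) {
              my $c = ord substr $$s, $-[0], 1;
              next unless $c;    # skip runs of background (\0) bytes entirely
              # ... bounding-box updates as before ...
          }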

      Well, for now, "buk3"'s speed is "good enough" (amazingly good compared to the 'elegant' PDL-only solution). Thank you to everyone who contributed.

        I'm not sure why to expect a gain if, instead of matching a sequence of non-zero bytes, we just try to match a non-zero byte followed by a sequence of zeroes.

        Mostly because it avoids the need for capturing parens (required for the back reference, which in turn is needed if objects can abut rather than always being separated by 0 bytes as you described). My crude image generator doesn't guarantee at least one zero byte between objects.

        That is probably the difference between your profiling on your real images and mine against my simulated ones.

        And there's additional work to do: xor-ing, chopping, etc.

        xoring strings is a very fast operation; it forms the basis of many of my own faster algorithms operating on strings. chop simply decrements a single internal integer.
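
        As a rough illustration (my own sketch, not from the thread) of what that xor/chop transform does to a string of runs:

            use strict;
            use warnings;

            my $s = "aaabccccd";
            my $t = ' ' . $s ^ $s;   # byte i of $t is $s[i-1] xor $s[i]; byte 0 is ' ' xor $s[0]
            chop $t;                 # drop the surplus trailing byte so length($t) == length($s)

            # $t is non-zero exactly where a new run starts, so every run of identical
            # characters in $s shows up as one non-zero byte followed by \0s.
            while ( $t =~ m[[^\0]\0*]g ) {
                printf "run '%s' from %d to %d\n", substr( $s, $-[0], 1 ), $-[0], $+[0] - 1;
            }
            # Prints runs of 'a' (0-2), 'b' (3-3), 'c' (4-7), 'd' (8-8).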

        Plus, what if object "32" happens to be in the 1st column?

        Good call. For your data, using a null byte instead of a space would be important.
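
        For example (a sketch under that assumption, not code from the thread), swapping the prefix keeps a leading chr(32) run visible; only a leading run of \0 background bytes becomes invisible, and those are skipped anyway:

            my $t = "\0" . $s ^ $s;   # prefix "\0" instead of ' '
            chop $t;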

        Well, for now, "buk3"'s speed is "good enough" (amazingly good compared to the 'elegant' PDL-only solution)

        In the end, you choose what works for your application. For me, this took on a life of its own because it is so similar to several applications I've written in the past, and will undoubtedly need to revisit in the future. I've made something of a career out of finding ways to speed up pure Perl solutions to problems over the last few years, and each time something like this comes up here on PM, I like to throw the guts of the problem out to the room, because it invariably turns up new approaches and new twists on old ones that become useful down the line, to myself and others alike.

        The basis of the gain of the buk3 approach (over your original) is simply to allow the regex engine to skip over runs of similar characters at each iteration, rather than discovering them one byte at a time. But the need to use a back reference to make that happen -- in my slight redefinition of the problem, where null bytes between objects aren't guaranteed -- forces the use of capturing brackets, which imposes a significant cost.

        eily's approach uses the fast xor operation to transmogrify the data such that (even without the null bytes) no back references or capturing parens are required.
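
        A rough side-by-side of the two patterns (my sketch, not benchmark code from the thread):

            my $s = "aaab\0\0ccd";
            my ( @runs_buk3, @runs_eily );

            # buk3-style: the back reference forces capturing parens on every match.
            push @runs_buk3, [ $-[1], $+[1] - 1 ] while $s =~ m[((.)\2*)]sg;

            # eily-style: after the xor transform a fixed character class suffices,
            # so no captures and no back reference are needed.
            my $t = ' ' . $s ^ $s;
            chop $t;
            push @runs_eily, [ $-[0], $+[0] - 1 ] while $t =~ m[[^\0]\0*]g;

            # Both collect the same (start, end) pairs for this input.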


