PerlMonks  

Efficient string tokenization and substitution

by jpfarmer (Pilgrim)
on Jan 12, 2005 at 14:05 UTC ( [id://421711]=perlquestion )


jpfarmer has asked for the wisdom of the Perl Monks concerning the following question:

Fellow Monks,

Recently, I was involved in a discussion where I posted a Perl technique that I've used for quite a while for various things. I was under the impression that it was a reasonably well-designed way to tokenize a string, look up each token in a hash, and replace the token with its translation when one exists. The code I have used is this:

$string =~ s/([^\s.\]\[]+)/{exists($tokens_to_match{lc $1}) ? "$tokens_to_match{lc $1}" : "$1"}/gei;

My reasoning behind this code was as follows: since hash lookup is O(1) on average, a single s///ge pass that looks each token up in a hash should scale well, because it avoids compiling and running a separate regex for every token. However, it was countered that this solution was not efficient at all. The person replying claimed that the performance of this algorithm is worse than O(n^2) because the way I'm using s/// is inefficient.

I am hoping that some of you can provide guidance on this problem. Is there a better way to approach this problem than my current method?
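For concreteness, here is a self-contained sketch of the approach above; the token table and input string are made up purely for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical token table and input.
my %tokens_to_match = ( foo => 'bar', perl => 'Perl' );
my $string = "foo stays foo, but [perl] is special.";

# One pass: each run of non-delimiter characters ([^\s.\]\[]+) is a
# token; if its lowercased form is a hash key, substitute the value,
# otherwise put the token back unchanged.
$string =~ s/([^\s.\]\[]+)/exists $tokens_to_match{lc $1} ? $tokens_to_match{lc $1} : $1/ge;

print $string, "\n";   # bar stays foo, but [Perl] is special.
```

Note that "foo," (with the trailing comma) is a single token here and misses the lookup, since the comma is not in the delimiter class.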

Replies are listed 'Best First'.
Re: Efficient string tokenization and substitution
by borisz (Canon) on Jan 12, 2005 at 14:56 UTC
    I think your way is _not_ inefficient! I would do it nearly the same way. The /i modifier is not needed in your first solution, since the lookups already use lc. I also removed the "" around the hash lookups in the replacement.
    $string =~ s/([^\s\.\]\[]+)/exists($tokens_to_match{lc $1}) ? $tokens_to_match{lc $1} : $1/ge;
    or ( if you have a lot of tokens this might be faster )
    my $str = join '|', sort { length $b <=> length $a || $a cmp $b } keys %tokens_to_match;
    $string =~ s/(\b|\s|\.|\[|\])($str)(?>(\b|\s|\.|\[|\]))/ $1 . ( exists($tokens_to_match{lc $2}) ? $tokens_to_match{lc $2} : $2 ) /gei;
    Boris
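A runnable sketch of the alternation idea, with made-up data; quotemeta is added here as a safety measure in case any key contains regex metacharacters, and \b is used as a simpler delimiter than the explicit class:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical token table.
my %tokens_to_match = ( foo => 'bar', perl => 'Perl' );

# One alternation from the keys, longest first so longer tokens win
# over their prefixes; quotemeta escapes any regex metacharacters.
my $alt = join '|',
          map  { quotemeta }
          sort { length $b <=> length $a || $a cmp $b }
          keys %tokens_to_match;

my $string = "foo and [perl] but not food";

# No exists() needed: the alternation can only match known keys.
$string =~ s/\b($alt)\b/$tokens_to_match{lc $1}/gie;

print $string, "\n";   # bar and [Perl] but not food
```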
Re: Efficient string tokenization and substitution
by Aristotle (Chancellor) on Jan 12, 2005 at 23:04 UTC

    I can vaguely imagine where that claim came from, but I don't know enough about the implementation of s/// to make any statements.

    You might want to benchmark against solutions which first tokenize the string, then look up translations for the token, and finally assemble a new string. The two approaches which suggest themselves are splitting the string and iterating over the list, and walking across the string with a regex to collect match offsets and lengths.

    Make sure you benchmark on greatly varied sets of data (long and short input strings, long and short tokens, many or few successful translations, lots or little data in the hash); there are a lot of combinations to consider.
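    A minimal Benchmark sketch of such a comparison follows. The data set is invented, and the split-based variant assumes tokens are separated by single spaces, so it is a simplified stand-in rather than a drop-in replacement:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical data; vary the sizes along the axes mentioned above.
my %tokens_to_match = map { ( "tok$_" => "sub$_" ) } 1 .. 100;
my $input = join ' ', map { 'tok' . int rand 200 } 1 .. 1000;

cmpthese( -1, {
    # The single-pass s///ge approach from the original question.
    one_pass_s => sub {
        my $s = $input;
        $s =~ s/([^\s.\]\[]+)/exists $tokens_to_match{lc $1} ? $tokens_to_match{lc $1} : $1/ge;
    },
    # Tokenize first, translate, then reassemble (single-space input only).
    split_join => sub {
        my $s = join ' ',
                map { exists $tokens_to_match{lc $_} ? $tokens_to_match{lc $_} : $_ }
                split / /, $input;
    },
} );
```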

    Makeshifts last the longest.

Node Type: perlquestion [id://421711]
Approved by Limbic~Region