| PerlMonks |
Re: Efficient string tokenization and substitution
by Aristotle (Chancellor)
on Jan 12, 2005 at 23:04 UTC ( [id://421867] )
I can vaguely imagine where that claim came from, but I don't know enough about the implementation of s/// to make any statements. You might want to benchmark against solutions which first tokenize the string, then look up translations for each token, and finally assemble a new string. Two approaches suggest themselves: splitting the string and iterating over the resulting list, and walking across the string with a regex to collect match offsets and lengths.

Make sure you benchmark on greatly varied sets of data (long and short input strings, long and short tokens, many or few successful translations, lots or little data in the hash; there are a lot of constellations to consider).

Makeshifts last the longest.
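A minimal sketch of such a benchmark might look like the following. The translation table, input string, and iteration count here are placeholders I made up for illustration; substitute your own data. The three subs implement, respectively, a plain s///e substitution, the split-and-iterate approach, and the offset-walking approach using the @- and @+ match-offset arrays.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical translation table and input -- replace with your real data.
my %trans = ( foo => 'bar', baz => 'quux' );
my $input = join ' ', ( 'foo', 'baz', 'other' ) x 1000;

# Alternation of all tokens, longest first so longer tokens win.
my $alt = join '|', map quotemeta, sort { length $b <=> length $a } keys %trans;
my $re  = qr/($alt)/;

sub by_subst {
    # Single global substitution with a hash lookup on the right side.
    my $s = $input;
    $s =~ s/$re/$trans{$1}/g;
    return $s;
}

sub by_split {
    # Capturing parens in split keep the matched tokens in the list.
    return join '', map { exists $trans{$_} ? $trans{$_} : $_ }
        split /($alt)/, $input;
}

sub by_walk {
    # Walk the string with //g, using @-/@+ for match offsets and lengths.
    my ( $out, $last ) = ( '', 0 );
    while ( $input =~ /$re/g ) {
        $out .= substr( $input, $last, $-[0] - $last ) . $trans{$1};
        $last = $+[0];
    }
    return $out . substr( $input, $last );
}

cmpthese( 500, {
    subst => \&by_subst,
    split => \&by_split,
    walk  => \&by_walk,
} );
```

The relative ranking will shift with the shape of the data, which is exactly why the varied benchmark sets mentioned above matter.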
In Section: Seekers of Perl Wisdom