PerlMonks
Efficient string tokenization and substitution
by jpfarmer (Pilgrim)
on Jan 12, 2005 at 14:05 UTC ( [id://421711] )
jpfarmer has asked for the wisdom of the Perl Monks concerning the following question:

Fellow Monks,

Recently, I was involved in a discussion in which I posted a Perl algorithm I have used for quite a while for various things. I was under the impression that it was a reasonably well-designed way to tokenize a string, look up each token in a hash, and replace it with another token when a match exists. The code I have used is this:

    $string =~ s/([^\s.\]\[]+)/{exists($tokens_to_match{lc $1}) ? "$tokens_to_match{lc $1}" : "$1"}/gei;

My reasoning was as follows: hash lookup takes roughly constant time per token, and doing all the replacements in a single s/// pass avoids repeated regexp compilation, so the approach should scale well. However, it was countered that this solution is not efficient at all; the person replying claimed that the algorithm performs worse than O(n^2) because of the way I am using s///.

I am hoping some of you can provide guidance on this problem. Is there a better way to approach this problem than my current method?
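For readers who want to try the technique described above, here is a minimal, runnable sketch of the single-pass tokenize-and-substitute approach. The contents of %tokens_to_match and the sample string are illustrative assumptions, not taken from the original post, and the replacement expression is written without the extra braces and quotes purely for readability.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative token map (keys stored lowercase; values are replacements).
    my %tokens_to_match = (
        foo  => 'bar',
        perl => 'Perl',
    );

    my $string = 'foo and PERL are tokens here';

    # Single pass over the string: capture each run of characters that is not
    # whitespace, '.', '[' or ']', look it up case-insensitively in the hash,
    # and substitute the mapped value only when an entry exists.
    $string =~ s/([^\s.\]\[]+)/exists $tokens_to_match{lc $1} ? $tokens_to_match{lc $1} : $1/ge;

    print "$string\n";    # prints: bar and Perl are tokens here

Because the lookup table is consulted inside the /e replacement, adding more token mappings does not change the regex itself; only the hash grows.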