|laziness, impatience, and hubris|
A seemingly very simple question, but one I've already searched pretty hard for, honest. I'm writing proteomics code in Perl and spend a lot of time scanning down long protein strings.
These can be 50 to 5000 chars long. I have heaps of code that walks down each string one character at a time with an indexed for loop and substr.
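The original code isn't shown, but from the description (an indexed for loop pulling one character at a time with substr) the pattern is presumably something like this sketch; the sequence string here is a made-up example, not real data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical protein string; the real ones run 50 to 5000 chars.
my $protein = "MTEYKLVVVGAGGVGKSALTIQLIQNHFVDE";

# The kind of loop described: index down the string, pulling one
# residue at a time with substr.
my %count;
for my $i (0 .. length($protein) - 1) {
    my $aa = substr($protein, $i, 1);
    $count{$aa}++;          # per-residue work goes here
}
```

Every iteration pays for a fresh substr call, which is where the profiler time goes.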
Profiling the code with Devel::FastProf, I find that a lot of time goes into the "for" loop and the "substr" call, not unexpectedly. Now, I'm pretty efficient with general coding and string handling, and have optimised piles of regexp matches after trying variants under FastProf, etc. But to my surprise I can't find an efficient special "walk down a string" technique. Other languages have "iterators" which understand special cases like a plain walk along a string and give you an optimised interface, i.e. they remember where you are in the string and can hand you the next char in essentially one operation.
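For comparison, the closest thing Perl has built in to such an iterator is a scalar-context m//g match, which remembers its position in the string (queryable via pos) between calls, so each match hands back the next character without re-indexing. A minimal sketch, again on a made-up sequence:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $protein = "MTEYKLVVVGAGGVGKSALTIQLIQNHFVDE";   # hypothetical sequence

# A scalar-context m//g match acts like a string iterator: the regex
# engine stores the current offset in pos($protein) and resumes from
# there on the next iteration.
my $n = 0;
while ($protein =~ /(.)/g) {
    my $aa = $1;
    $n++;                   # per-residue work goes here
}
```

Whether this beats the substr loop in practice is something to measure under the profiler rather than assume; it trades substr overhead for regex-engine overhead.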
I can split the string into an array with @array = split( //, $protein) and use array indexing, but this seems to gain only 20% in the best test case, and about nothing in my real code, i.e. the cost of splitting nullifies the gain. The protein strings are re-used about 20 times, so the split cost is amortised a little, but the gain is small. To give you a tiny flavour of the data sizes: 40,000 proteins, average length 400, each protein scanned 20 times.
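The split-once, scan-many approach described above presumably looks roughly like this (hypothetical sequence and per-residue test, just for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $protein = "MTEYKLVVVGAGGVGKSALTIQLIQNHFVDE";   # hypothetical

# Pay the split cost once per protein...
my @residues = split //, $protein;

# ...then each of the ~20 scans is a plain array walk with no
# substr calls.
my $valines = 0;
for my $aa (@residues) {
    $valines++ if $aa eq 'V';   # example per-residue test
}
```

The catch, as noted, is that the one-off split is itself O(length) and allocates a scalar per character, so unless the array is reused enough times the setup cost eats the per-scan saving.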
Ok enough guff, any suggestions appreciated.
Thanks, Greg E