PerlMonks
Re^18: [OT] The interesting problem of comparing (long) bit-strings.
by salva (Canon) on Apr 03, 2015 at 09:06 UTC [id://1122341]
Answers to questions:
oiskuu says: re: bitstrstr. That looks like Horspool. (Most B-M simplifications have dropped the "good-suffix" shift and kept the "bad-character(s)" shift.)

Yes, it is actually Boyer-Moore-Horspool. I still have to come up with a way to implement the good-suffix tables without incurring a 32*m (or 64*m) memory usage, which I consider unacceptable. Using the delta compression would reduce it to 8*m. Maybe it can be done at the byte level, and then it would come down to 1*m... The thing is, I like the O(1) memory consumption of B-M-H.

Where does the delta compression idea come from? It is my own; the idea is to keep all the delta information in the cache. Currently, the GitHub repo has three variants of the algorithm: the "master" branch, which tries to work at byte boundaries when looking up the bad-character shift; the "simplify" branch, which works at the bit level; and the "caching" branch (implementing the delta compression), which tries to be cache-friendly. I still don't know whether there will be any effect on performance. I think there would be edge cases where having a precise delta would help, but I don't know whether those are likely to appear in real-life data. The same question arises when deciding whether to run the bad-character test at byte boundaries first or just work at the bit level: the former removes a memory load (very likely served from the processor cache) and a shift operation.

I was also considering another variation: working at the byte level, and then performing 8 parallel bitstring comparisons when delta < 8 bits (or even working with uint16_t units, and performing 16 comparisons in parallel).

BrowserUk says: did you start with a (simple byte-wise) Boyer-Moore implementation cribbed from somewhere?

No, I started from scratch.

BrowserUk says: I'm confused as to the difference between needle_offset & needle_prefix?

needle_prefix is just a hack for testing byte-unaligned needles.

BrowserUk says: If you have a brief explanation of the values in the delta table it might help. I tried looking at them in hex & binary to understand their purpose; but nothing leaps off the page at me.

It is (mostly) an exponential succession used to reduce the size of the B-M-H delta table to something that fits into the L1 cache. The script used to generate it is also in the repository. The jump table contains indexes (uint8_t) into the delta table (uint32_t). That way, for a 14-bit window size, the function uses 1*(1<<14) + 256*4 bytes = 17KB of working memory, which fits in the 32KB L1 cache of current x86/x86_64 CPUs.
In Section: Seekers of Perl Wisdom