in reply to Re^3: [OT] The interesting problem of comparing (long) bit-strings.
in thread [OT] The interesting problem of comparing bit-strings.
As for brute-force: the "Boyer-Moore beating assembler versions that utilize microcoded string search instructions" result is a decades-old lesson.
For string search, decades ago (before CPUs acquired high-speed, on-chip caches) Boyer-Moore was the business; but since the 486, circa 1990, on-chip caches have tipped the balance back again.
"Decades-old lessons" were relevant, decades ago.
If you won't take my word for it, and aren't equipped to try it for yourself, take Bjarne Stroustrup's word for it. View http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style and skip ahead to timecode 44:40.
A few seconds in he presents a question (on a slide titled "Vector vs. Linked List"), that goes like this:
The task comes in two parts: first, generate N random integers and insert each into a sequence so that the sequence stays in numerical order; then remove the elements one at a time, each time picking a random position in the sequence and removing the element there.
And the question is: For which N does it become faster to use a linked list than a flat array?
The decades-old wisdom on this is that insertions/deletions into/from an array are O(N²), but for a linked list they are O(N log N) (with slightly higher constant factors), so N only need grow to a couple of hundred or a thousand or so for the linked list to start outperforming the flat array. Right? Wrong!
Watch for a few more seconds (and wonder in amazement at his non-existent graph) as he shows that the answer is "Never". At no point does the linked list become faster than the flat array. In fact the bigger N gets, the bigger the gap between the two becomes, in favour of the flat array.
As counter-intuitive as it may be, the benefits of caching on the sequential accesses made by the flat array far outweigh the costs of moving more data. The non-sequential, scattered-around-memory nature of linked lists means they never catch up with, let alone surpass, the performance of the cache-friendly contiguous memory accesses of the flat array.
Times change, and a lot of the "knowledge" CS students acquired, despite being tortured within an inch of their lives by Knuth, is simply out of date. Whilst the proofs of correctness still hold, the analyses of operational complexity (done in terms of the imaginative, but imaginary, MIX and later MMIX) simply do not consider the effects of caching, pipelining, branch prediction, memory stalls et al.
Stroustrup concludes that there are "3 orders of magnitude performance" to be gained.
And the relevance of string-search analysis to bit-string search has, to my knowledge, never been demonstrated. They are quite different animals.