Re: Re: Re: Re: Re: Re: Re: A (memory) poor man's hash
by tilly (Archbishop) on Nov 25, 2003 at 15:58 UTC
I'm guessing that by BOM you mean Byte-Order Mark. (This is based on a Google search for an unknown acronym.) That presumably refers to the character-escaping suggestion that I gave.
Please define what you mean by "won't work". Won't work as in it doesn't do what you want? Possible; I don't know what you want. Won't work as in it doesn't do what I said it does? That's another story. It does exactly what I said.
If you want your key/value pairs to be able to hold arbitrary binary data in your data structure, pre- and post-process them as I described and you will succeed. The preprocessing gets rid of the character that you are using as a separator. The postprocessing recovers the original string data. Adding any processing takes time, so there will be some performance hit. My guess is that it won't be much of one, since Perl's RE engine is pretty fast and the REs in question are pretty simple.
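The pre/post processing described here can be sketched in a few lines of Perl. This is an illustrative sketch, not code from the thread: the sub names, the choice of `"\0"` as the separator, and the `\b` escape token are all assumptions made for the example.

```perl
use strict;
use warnings;

# Escape so that the data can never contain a raw "\0" separator.
sub escape_field {
    my ($s) = @_;
    $s =~ s/\\/\\b/g;    # escape the escape character first
    $s =~ s/\0/\\0/g;    # then remove the separator from the data
    return $s;
}

# Reverse the escaping in a single pass, so a "\b" escape followed by
# a literal "0" can never be misread as an escaped separator.
sub unescape_field {
    my ($s) = @_;
    $s =~ s/\\(b|0)/$1 eq 'b' ? "\\" : "\0"/ge;
    return $s;
}

# Escaped fields are now safe to join and split on "\0".
my @fields = ("plain", "has\0sep", "has\\backslash");
my $record = join "\0", map { escape_field($_) } @fields;
my @back   = map { unescape_field($_) } split /\0/, $record, -1;
```

Each substitution is a simple global replace, which is why the overhead should be small relative to whatever else the data structure is doing.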
So what you summarize with "Nah. Using BOMs won't work." actually works exactly as I said it did.
As for the rest, please be specific in your complaints. Vague hand-waving doesn't give me much feedback.
Here is a specific example to illustrate what I mean.
I could have taken that particular thread back further, but that is far enough. [...]

Instead, you introduce a subject vaguely related to the original subject matter, open with an obvious counter to a non sequitur not in discussion, and then support that obvious argument at length, with the implication that if you said "it is", then your opponent must have already said it isn't. [...]

[...] I didn't say that I consider cache optimisation unimportant. I do doubt that it is possible, in a meaningful way, for cross-platform development, or even practical for most purposes unless it is performed by compilers or interpreters tuned to the target platform. [...]

[...] With cache optimization, we need to specify our goals first. If your goal is to achieve a universal large win, or to achieve any kind of ideal optimization, then optimizing cache coherency is an impossible ideal. But that isn't my goal. My goal would be to have data structures which will tend to perform better. And that is quite possible. [...]
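The kind of "data structures which will tend to perform better" trade-off under discussion can be sketched in Perl, in the spirit of the thread's "(memory) poor man's hash": fixed-width records packed into one contiguous string carry far less per-value overhead than many individual hash entries, and sequential access touches adjacent memory, which is what tends to help the CPU cache. This is a hypothetical sketch; the record width, sub names, and space-padding scheme are all made up for illustration.

```perl
use strict;
use warnings;

my $RECLEN = 8;     # assumed fixed record width in bytes
my $store  = '';    # one big scalar instead of many hash entries

sub store_rec {
    my ($idx, $value) = @_;
    my $off = $idx * $RECLEN;
    # Grow the string with space padding if writing past the end.
    $store .= ' ' x ($off + $RECLEN - length $store)
        if length($store) < $off + $RECLEN;
    substr($store, $off, $RECLEN) = pack("A$RECLEN", $value);
}

sub fetch_rec {
    my ($idx) = @_;
    my $raw = substr($store, $idx * $RECLEN, $RECLEN);
    $raw =~ s/ +\z//;    # strip pack's space padding (so values must
                         # not end in spaces in this simple scheme)
    return $raw;
}

store_rec(0, 'alpha');
store_rec(2, 'gamma');
```

Nothing here is cross-platform cache *tuning*; it is just a layout that tends to behave better on any machine with a cache, which is the weaker, achievable goal described above.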
From my point of view, your phrase "in a meaningful way" is unclear in the extreme. I don't know what you mean by it. I know what I would mean by it, and that clearly isn't what you mean, because I come to the opposite conclusion. So I took pains to explain exactly how I would understand that phrase, and why my understanding leads me to a different conclusion from yours.
My hope was that by making clear exactly where and why our views differ, we could clarify the difference in our perspectives. But it seems that you have misinterpreted that as a negative argument against you. :-(
I don't think that the basic facts are really in dispute. Let me summarize them. Something like Judy arrays attempts to dynamically optimize itself to account for the cost of cache misses. I think that we agree on the following facts about Judy arrays:
As I see it there are a few possible causes for that disagreement:
But I'm not sure of that guess, and I really don't understand the value system which makes performance that big of a goal.