Re^4: Range overlap (with a lot of ranges) by Marshall (Abbot)
on May 24, 2010 at 16:00 UTC
Well of course two @vars will take less memory than an AoA, because the AoA needs at least one additional array: the top-level array of pointers to the row arrays. So why does the AoA make sense anyway?
One basic reason why data is collected into combined structures like an array of pointers to arrays is so that it is clear what data "goes with" other data. This is a huge thing in software engineering!
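A minimal sketch of the two layouts for the range data under discussion (the variable names are mine, for illustration only): with two parallel @vars the pairing of a start with its end is only implied by index position, while with an AoA each range travels as one unit.

```perl
use strict;
use warnings;

# Two parallel arrays: the pairing is only implied by index position.
my @starts = (100, 250, 400);
my @ends   = (150, 300, 450);

# AoA: each range is a single unit -- a reference to a [start, end] array.
my @ranges = ( [100, 150], [250, 300], [400, 450] );

# With the AoA, a range can be passed around as one "thing":
my $second = $ranges[1];
print "start=$second->[0] end=$second->[1]\n";  # prints "start=250 end=300"
```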
Another reason is that when we, say, sort a complex data structure that involves pointers, we just rearrange the pointer values in a single array instead of copying the actual data or having to modify multiple data structures. We can do manipulations on a single "thing", again a huge plus.
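Here is what that looks like in Perl for range data (again, names are mine): sorting an AoA only shuffles the references in the top-level array; the row arrays themselves are never copied, and there is no second parallel array to keep in sync.

```perl
use strict;
use warnings;

my @ranges = ( [400, 450], [100, 150], [250, 300] );

# Sort by start value. Perl's sort rearranges the references in the
# top-level array; the underlying [start, end] arrays are not copied.
my @sorted = sort { $a->[0] <=> $b->[0] } @ranges;

print join( " ", map { $_->[0] } @sorted ), "\n";  # prints "100 250 400"
```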
I suggested an AoA, but actually the thing most analogous to a 'C' 2-D array uses even more memory: a Perl AoH, not a Perl AoA!
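For comparison, an AoH version might look like this (field names are my own, illustrative choice): each element is a hash reference with named fields, which is self-documenting like a C struct, but each small hash costs more memory than a small array.

```perl
use strict;
use warnings;

# AoH: each range is a hash reference with named fields.
# Readable, but a hash per row is heavier than an array per row.
my @ranges = (
    { start => 100, end => 150 },
    { start => 250, end => 300 },
);

print "$ranges[0]{start}..$ranges[0]{end}\n";  # prints "100..150"
```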
I suggested the AoA instead of an AoH because the OP is "trying to get the job done" and there isn't going to be a huge amount of code. I think this could be just one page of Perl code. I don't see a need to "super optimize" this.
Normally the rough priority of coding, in my mind, is: 1) clarity, 2) clarity, 3) speed, 4) memory usage. There can of course be reasons, even very good reasons, to violate this priority scheme. Speed and memory usage actually get better (become less important) over time, per Moore's Law. The clarity of the code is fixed when you write it: it won't get any clearer even if CPU MIPS and memory prices improve dramatically. If the code will run easily within the memory budget (in this case, yes) and the main objective is clear code that works (in this case, yes), then optimizing the memory usage is not the right thing to do.