This is definitely the kind of discussion I would like to be part of.
People argue with solid facts, insightful ideas, and strong supporting data, and they treat each other, and the facts themselves, with respect.
The other thing I would be interested in is the performance difference between a linked list and a (dynamically growing) array.
With a linked list, you call malloc once for each element, but realloc is never called; with an array, you call malloc once at the beginning and then call realloc each time it grows. I have used both approaches from time to time, but never measured them seriously.
You might also mix the two approaches by using a linked list of sub-arrays: the size of each sub-array is fixed, and the structure grows by attaching more sub-arrays to the list.
With a dynamically growing array, growing by a constant factor (e.g. doubling) gives O(1) amortized appends, with unpredictable performance on any individual append (it may reallocate); growing by a fixed amount instead makes appends O(n) amortized. Accessing any element is O(1). Finding the size is O(1). Splicing a new element into the interior is O(n).
A linked list has a guaranteed O(1) cost to grow, and splicing a new element into the interior is also O(1). Accessing an arbitrary element is O(n). Finding the size is O(n).
Whether the linked list or the array has more overhead is implementation- and dataset-dependent.
As for your mixed approach: you can do that, and some have. See http://www.and.org/vstr/overview.html for an example of a string library that does. (I have not looked at it closely.)