Shift, Pop, Unshift and Push with Impunity!

by lhoward (Vicar)
on Jun 13, 2000 at 19:15 UTC

or: How I Learned to Stop Worrying and Love Perl's Built-in List Operators

I come from a CS background. I've had algorithmic efficiency beaten into me till it was coming out of my ears. After so many classes on data structures and algorithm analysis, I've become wary of built-in, higher-level operators like Perl's push, pop, shift and unshift list manipulation functions. Oh, I've used them plenty, but there has always been this nagging little voice in the back of my head asking, "What if unshift takes longer to run with a larger list?" and "Does it take longer to find the size of a bigger list?"

Questions like these led me on a spiritual quest through the bowels of Perl's source code.... The details of my journey follow.

Now before I dig too deep, I want to introduce the uninitiated to big-O notation. Big-O notation is a method of describing the efficiency of an algorithm as it relates, in a very general way, to the quantity of data the algorithm is operating on. Uri Guttman and Larry Rosler's article A Fresh Look at Efficient Perl Sorting (in addition to being a great article in its own right) has a fantastic overview of big-O notation, much better than what I have cobbled together here. Some common big-O analyses are as follows (in order from best to worst, performance-wise).

  • O(1) - algorithm takes a constant amount of time to run no matter what the dataset size. A good example of this is a hash table lookup.
  • O(log(n)) - algorithm performance is proportional to the logarithm of the dataset size. A binary search is O(log(n)).
  • O(n) - algorithm runtime scales linearly with dataset size. If it took 10 seconds with 200 elements, it will take 20 seconds with 400 elements. Scanning a list for an element with a particular value is O(n). (A short benchmark contrasting this with the O(1) hash lookup follows this list.)
  • O(n*log(n)) - the best general-purpose sorting algorithms (mergesort, and quicksort in the average case) take this long to run.
  • O(n^2) - algorithm performance is proportional to the dataset size squared. Basic sorting algorithms (selection sort, insertion sort, bubble sort) take this long to run.
  • O(2^n) - only the slowest algorithms take this long to run. Traveling salesman problem, etc.
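To make the difference between the O(1) and O(n) cases above concrete, here is a minimal benchmark sketch using the core Benchmark module (the dataset sizes are arbitrary). The hash lookup stays flat as the dataset grows, while the linear scan slows down in proportion:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    for my $size (10_000, 100_000) {
        my @list = (1 .. $size);
        my %hash = map { $_ => 1 } @list;
        my $want = $size;                 # worst case for the linear scan
        print "dataset size: $size\n";
        cmpthese(-1, {
            hash_lookup => sub { my $found = exists $hash{$want} },
            linear_scan => sub {
                my $found;
                for my $e (@list) {
                    if ($e == $want) { $found = 1; last }
                }
            },
        });
    }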

A straightforward list implementation that every introductory CS student learns uses an array and an offset to the last in-use array element to represent a list. This implementation works well for push, pop, determining number of array elements, insertion, fetching, etc. However, performance of shift and unshift with this implementation is horrible: O(n)! This is because there is no room at the beginning of the array and all the elements have to be shifted by one to insert/remove an element from the front of the list.
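To make that O(n) cost concrete, here is a minimal sketch of the introductory implementation in Perl (the package and method names are my own invention, purely for illustration). Note how shift_front has to touch every remaining element:

    package NaiveList;    # hypothetical name, for illustration only

    sub new { bless { data => [], last => -1 }, shift }

    # push: O(1), just write at the next free slot
    sub push_back {
        my ($self, $val) = @_;
        $self->{data}[ ++$self->{last} ] = $val;
    }

    # shift: O(n), every surviving element must slide down one slot
    sub shift_front {
        my $self  = shift;
        my $front = $self->{data}[0];
        $self->{data}[ $_ - 1 ] = $self->{data}[$_] for 1 .. $self->{last};
        $self->{last}--;
        return $front;
    }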

There are many other ways to implement lists (some better, some worse, others with different performance tradeoffs), but without knowing which implementation Perl used (or at least seeing some notes in perlfunc or some other piece of perl documentation about the performance of these list manipulation functions) I was having trouble sleeping at night.

Thanks to some source-code browsing and the Perl Internals chapter of Advanced Perl Programming I am now happy and content to use all of Perl's built-in list operators without any loss of sleep.

Perl implements lists with an array and first/last element offsets. The array is allocated larger than needed with the offsets originally pointing in the middle of the array so there is room to grow in both directions (unshifts and pushes/inserts) before a re-allocation of the underlying array is necessary. The consequence of this implementation is that all of perl's primitive list operators (insertion, fetching, determining array size, push, pop, shift, unshift, etc.) perform in O(1) time.

There are some worst-case scenarios for push, unshift and insertion where performance is O(n). These occur when the array underlying the list has to be reallocated because the operation would run past its preallocated size, so the O(1) figures above are really amortized costs. This reallocation overhead is a common factor in all list implementations that handle all the primitive operators in O(1) time.
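A quick way to convince yourself of the amortized behavior is to time a growing number of operations: if each push and shift is O(1) on average, the total time should scale linearly with the count. A minimal sketch (the counts are arbitrary; Time::HiRes is assumed to be available):

    use strict;
    use warnings;
    use Time::HiRes qw(time);

    # Doubling N should roughly double the total time if each
    # operation is O(1) amortized.
    for my $n (100_000, 200_000, 400_000) {
        my @a;
        my $t0 = time;
        push @a, $_ for 1 .. $n;
        shift @a   for 1 .. $n;
        printf "%7d push/shift pairs: %.3f s\n", $n, time - $t0;
    }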

One consequence of perl's list implementation is that queues implemented using perl lists end up "creeping forward" through the preallocated array space, leading to reallocations even though the queue itself may never contain many elements. In comparison, a stack implemented with a perl list will only require reallocations as the list grows larger. However, perl's implementers anticipated the use of lists as queues, and these queue-type reallocations turn out to have a negligible impact on performance. In my benchmarks, queue access of a list (using repeated push/shift operations) is nearly as fast as stack access to a list (using repeated push/pop operations).

The preallocated array is balanced toward the 0 end of the array (meaning there is more free space at the far end of the list than there is before the list's 0 element). This is done purposely to make pushes more efficient than unshifts (since perl programmers use push much more than they use unshift). As a consequence, a given number of unshifts will result in more reallocations than the same number of pushes. My benchmarking of a front-loaded queue (implemented using unshift and pop) shows it to be nearly 20% slower than a back-loaded queue (implemented using push and shift), I assume for this very reason.
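Here is a minimal sketch of both comparisons (stack access, back-loaded queue, front-loaded queue) using the core Benchmark module; the exact ratios will vary with perl version and platform:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $OPS = 10_000;

    cmpthese(-2, {
        stack => sub {              # push to the back, pop from the back
            my @l;
            push @l, $_ for 1 .. $OPS;
            pop @l while @l;
        },
        back_queue => sub {         # push to the back, shift from the front
            my @l;
            push @l, $_ for 1 .. $OPS;
            shift @l while @l;
        },
        front_queue => sub {        # unshift to the front, pop from the back
            my @l;
            unshift @l, $_ for 1 .. $OPS;
            pop @l while @l;
        },
    });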


Replies are listed 'Best First'.
RE: Shift, Pop, Unshift and Push with Impunity!
by Shendal (Hermit) on Jun 13, 2000 at 19:31 UTC
    Fantastic post. It's information like this that keeps me coming back to Perl Monks.

    I'm fascinated by the internal aspects of Perl. I'd be interested in seeing a (somewhat) regular post from the experts in this area (merlyn?) to give some insight into different perl internals. Perhaps some insight into not only what it is, but why. We could even set up a question submission (a la "Ask Slashdot") and send the higher-voted questions to the saints.

    What do the rest of you think?
      Same here! I share more of a curiosity than an interest per se, since I can't code a word in C, but I'd love to see these kinds of posts coming back. Not only is the above informative, but it's also well written and (I can only suppose) correct. In some ways, it reminds me of the discussions on Pi, perfect circles, gravity, Zen, et al. that I have with friends over a beer on Sunday afternoons. The obvious catch is that this is something that will be of use to me on a day-to-day basis.

      Kick-ass!

      #!/home/bbq/bin/perl
      # Trust no1!
RE: Shift, Pop, Unshift and Push with Impunity!
by Ovid (Cardinal) on Jun 13, 2000 at 21:10 UTC
    I also loved this article and voted ++ for it. What really makes it outstanding (for me) is that it is clear, concise, and not so hideously technical that I can't understand it.

    I would like to know a lot more about the internals of Perl, but some of the articles that I have read have assumed such a high level of knowledge on the part of the reader that I just put the article down and pretend to pick lint off my sleeve or something (and some have erroneously assumed a high level of knowledge on the part of the author).

    lhoward: I love your posts, keep 'em coming.

RE: Shift, Pop, Unshift and Push with Impunity!
by Anonymous Monk on Jun 15, 2000 at 22:47 UTC
    About hash table lookups: in theory, if you have an infinite amount of memory, you can approach O(1) time. As I understand it, all real-world hash implementations with finite memory will, strictly speaking, take O(n) time. There is an unlucky ordering of data that will make Perl's implementation of hashes take O(n) for searching. In practice, though, you'll hardly ever do much worse than O(1).
      Says nobody in particular:
      There is an example of unsuccessful order of data that will make Perl's implementation of hashes to take O(n) for searching.
      When Hashes Go Wrong

        The following is a slight modification of your program. It (ab)uses the fact that Perl only checks whether to split buckets when you use a new bucket. It does not make any assumptions about Perl's hashing algorithm. It is therefore able to generate the special keys much more quickly than yours does.
        my $HOW_MANY = shift || 100_000;
        my ($s, $e1, $e2);
        $s = time;
        for ($i = 0; $i < $HOW_MANY; $i++) {
            $q{$i} = 1;
        }
        $e1 = time - $s;
        print qq{
        Normally, it takes me about $e1 second(s) to put $HOW_MANY items into
        a hash.

        Now I'm going to construct $HOW_MANY special keys and see how long
        it takes to put those into a hash instead.
        };
        undef %q;
        print "Generating $HOW_MANY keys\n";
        my @keys;
        my $i = 0;
        while (@keys < $HOW_MANY) {
            $i++;
            my %hash = (0 => 0, $i => 0);
            if (%hash =~ /1/) {    # Goes in the same bucket as 0
                push @keys, $i;
            }
        }
        print "Putting the $HOW_MANY special keys into the hash.\n";
        $i = 0;
        $lasttime = $s = time;
        foreach $key (@keys) {
            $h{$key} = 1;
            $i++;
            if (time() - $lasttime > 5) {
                my $h = %h + 0;
                print "I have put $i keys into the hash, using $h bucket(s)\n";
                $lasttime = time;
            }
        }
        $e2 = time - $s;
        print qq{
        The $HOW_MANY special keys took $e2 seconds to put into the hash,
        instead of $e1 seconds.
        };

        Update: (Much later.) Perl changed when it splits buckets, so this program no longer runs slowly. Furthermore, Perl's hashing algorithm is now randomized, so Dominus' program no longer runs slowly either. Shucks.

Re: Shift, Pop, Unshift and Push with Impunity!
by demerphq (Chancellor) on Sep 04, 2002 at 20:43 UTC
    Hi. I thought this was a great node, and have recommended its reading and repeated its contents in a number of situations.

    One of them was in correspondence with Dominus, where he suggested quite strongly that some of what it said is incorrect (specifically the last paragraph). As my C skills aren't sufficient to determine the facts on my own, I'm a little curious as to where things stand right now.

    Is the information in this node correct? If not, is this due to an implementation change?

    I would really like to know the current status of the validity of the information in this node.

    I'm somewhat doubtful as to how long this node would have lasted if it was totally incorrect without someone saying so, but on the other hand Dominus has a reputation for knowing his stuff, so I'm in a bit of a quandary as to what to think.

    Please advise!

    Yves / DeMerphq
    ---
    Software Engineering is Programming when you can't. -- E. W. Dijkstra (RIP)

      Speeding up unshift may explain what is going on.

      Through Perl 5.6.1, unshift left the array aligned on the left, so an array built up with unshifts is forced to realign on every single unshift. That makes the cost of each unshift linear, for quadratic total time. What tilly's patch does is realign the array indented by the length of the current array, so you only have to realign again after unshifting as many new elements as the array currently holds. In the simple case of adding one element at a time, this boils down to realigning at powers of 2, for a constant amortized cost per element.

      The code could be more efficient, but there don't seem to be order of magnitude algorithmic improvements left.
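      A quick sanity check of that amortized behavior from Perl itself (a minimal sketch; Time::HiRes is assumed to be available): if each unshift is amortized O(1), doubling the element count should roughly double the total build time, whereas the old quadratic behavior would quadruple it.

          use strict;
          use warnings;
          use Time::HiRes qw(time);

          # Total build time should grow roughly linearly with N.
          for my $n (50_000, 100_000, 200_000) {
              my @a;
              my $t0 = time;
              unshift @a, $_ for 1 .. $n;
              printf "%7d unshifts: %.3f s\n", $n, time - $t0;
          }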

Re: Shift, Pop, Unshift and Push with Impunity!
by Elliott (Pilgrim) on Oct 17, 2001 at 23:55 UTC
    O(2^n) - only the slowest algorithms take this long to run. Traveling salesman problem, etc.

    I could be wrong, but isn't the Travelling Salesman problem O(n!) - i.e. much worse than O(2^n) for n>3?

    Travelling Salesman is an NP-complete problem, i.e. no known algorithm solves it in time polynomial in the number of data points (and none exists unless P = NP).

      David Eppstein claims that it is possible to solve the TSP in O(n*2^n) time, which would simultaneously be worse than O(2^n) and yet much better than O(n!). In any event, these are clearly the sorts of algorithms we want to avoid. I'm going to go off on a qualifying tangent and come back to explain why algorithms worse than about O(n^2) are bad news, even for those of us who will happily implement an O(n) algorithm without worrying about whether it's possible to do it in O(1).

      I'm not generally terribly concerned about algorithm efficiency most of the time, for several reasons. Perhaps most important, I'm more concerned with maintainability; I'd usually rather have a function that runs in half a second and is clear, versus one that runs in 3 picoseconds and frightens people who look at the source. Since Perl provides built-in functions for sort, push, pop, shift, and unshift, I would never think about coding my own replacements for them. That would be inane, a huge waste of time. It would be inane and a huge waste of time *even* if I could slightly improve on the efficiency (which, of course, I can't), unless I were writing the replacement in C to be included into Perl (which, if you know anything about my relationship with C, is not going to happen).

      The CPU meter on my Gnome panel right now shows a big blank area, with little dots along the bottom edge denoting small amounts of CPU activity. I keep the CPU meter there not because I normally need it, but because if a process happens to run away I want to know. The astute will note that this implies I might not even *notice* a runaway process using all my CPU time without seeing it on the meter. This is because I'm sitting here using a 1.something GHz Celeron system to do mostly the same sorts of things (web browsing, word processing, text editing, ...) that were possible to do on a 486. The apps I use today have a lot more features, so they're bigger, so they consume a lot more RAM -- but not a lot more CPU time. (Somewhat more, yes; not enough more to keep up with the preposterously high clock speeds on the market today.)

      Consequently, in terms of user-visible delays and perceived performance, with the normal sizes of datasets that I use on a regular basis, an O(n) algorithm that can run entirely out of RAM is quite possibly faster than an O(1) algorithm that has to read something from disk (assuming the something isn't cached). Certainly, for any normal-sized dataset (by which I mean, some amount of data a normal person is likely to have on a desktop system), O(n) is instant, and even O(n^2) is unlikely to cause a user-perceptible delay in most cases.

      Yes, some applications have to be designed to deal with larger amounts of data; if RDBMS lookups were O(n^2), that would be a problem in serverspace. I've also run into quite perceptible delays when handling my email, since I get quite a lot of email, and so I suppose that an algorithmic improvement there would be fairly worthwhile for me. These are just examples. My point was, though, that for many situations an O(n^2) algorithm may not be noticed by the user; it's probably not optimal, but it may pass as "good enough" in some situations.

      A point comes, though, when you have to draw a line and say, "this algorithm is too slow". I personally draw that line after O(n^2). Anything slower than that, I call too slow.

      Take a dataset size of 200, for example, a nice medium-sized dataset. (Maybe it's the number of files in a directory, or something.) O(n^2) comes to 40000 somethings; if we say that's tenths of milliseconds (in reality, it would be less than that on modern hardware, but go with me here), the operation completes in 4 seconds, which for something that doesn't happen very often (especially, something that only happens when the user deliberately orders it, such as starting an app) is reasonable. Now, let's say we did the same thing in O(2^n) time. It would take aeons. Lots of aeons. I'd say millennia, but that would be underestimating it. You can see the problem. Even if we get faster hardware so that the 4-second operation completes in 4 nanoseconds, the O(2^n) algorithm is still taking multiple lifetimes. O(n!) is even worse.

      So that's why even people who don't generally care about efficiency still have to think about algorithm efficiency just enough to avoid the really *bad* algorithms. With O(n^2), the size of the data set could blow up to 1000 (quite a lot of somethings, for a desktop user), and we'd still be running in under two minutes (on the same scale that takes 4 seconds for n=200). Two minutes may be a minute and fifty seconds too long, but the operation will complete. Start messing around with O(2^n) or O(n!) algorithms and you could be waiting until you die.
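      For the curious, the arithmetic is easy to reproduce (a throwaway sketch; the 0.1 ms per "something" is the same assumption as above):

          use strict;
          use warnings;

          my $n      = 200;
          my $per_op = 1e-4;    # seconds: 0.1 ms per "something"
          printf "O(n^2): %.0f seconds\n", $n**2 * $per_op;    # 4 seconds
          printf "O(2^n): %.1e years\n",
              2**$n * $per_op / (365.25 * 24 * 3600);          # ~5e48 years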

      Note that I'm not saying O(n^2) is "fast enough" for all purposes. (When n can blow up to hundreds of thousands, it gets to be a very significant nuisance (though it will still complete within your lifetime, and quite possibly within your computer's lifetime even, and if you sic a good-sized cluster on it you can brute force it in days).) What I am saying is that O(n^2) is "fast enough" for situations where brute force will do, but some algorithms are too slow even for that.


      $;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}} split//,".rekcah lreP rehtona tsuJ";$\=$ ;->();print$/
RE: Shift, Pop, Unshift and Push with Impunity! (School)
by Bourgeois_Rage (Beadle) on Jun 13, 2000 at 23:32 UTC
    I learned about all those sorting algorithms in school.
    It's good to see that people continue to think about them after school. Truthfully the only time I've used these sorts is in school.

    On a side note I'd just like to say that I like using Perl's stack methods much more than C++'s, which is the language I learned them in. This is relevant because it's one of those things you learn in school and then never see too often in the real world.
    -Rage
RE: Shift, Pop, Unshift and Push with Impunity!
by brick (Sexton) on Jun 14, 2000 at 05:42 UTC
    I'd just had a conversation over email with a friend on this very topic--using built-in functions vs. reinventing the wheel--and it's comforting to know that the ones in perl aren't too greedy! I agree, too...this sort of thing keeps me coming back here every day. -brick.
RE: Shift, Pop, Unshift and Push with Impunity!
by Eugene (Scribe) on Jun 13, 2000 at 21:04 UTC
    I am curious now: what is the order of Perl's sort function, and what algorithm does it use?
      Perl's sort function sits on top of C's native qsort function, which is an implementation of quicksort. O(n*log(n))!
        No longer. Tom Christiansen rewrote it for perl 5.005. See Perl Delta.

        EDITED

        Hmmm... my memory may be wrong. I still seem to recall that Tom Christiansen wrote it, but according to the comments in the source code it was actually written by Tom Horsley. You can find the implementation Perl uses by looking in the pp_ctl.c source file.

        Cheers,
        Ben

        Does that mean that pre-sorted lists are sorted slowly, or is it being randomized before the actual sort?
You're the man now, dog
by OfficeLinebacker (Chaplain) on Jan 11, 2007 at 02:47 UTC

    Wow! Great node. ++! I must admit I was quite verklempt when I got 404'd trying to follow the original link to the Guttman paper. Thanks for the update, aufflick.

    BTW, lhoward, you should write for Wikipedia. I have looked at material about sorting algorithms and I have not seen a more concise explanation anywhere (Wikipedia or elsewhere).

    I like computer programming because it's like Legos for the mind.
      Sorry for this frivolous post but I felt compelled to up-vote OfficeLinebacker for the Yiddish contribution to PerlMonks.
Re: Shift, Pop, Unshift and Push with Impunity!
by aufflick (Deacon) on Nov 07, 2005 at 04:18 UTC
Re: Shift, Pop, Unshift and Push with Impunity!
by Piercer (Beadle) on Mar 11, 2003 at 16:51 UTC
    This should probably be called Dr. Lhoward, or: How I Learned to Stop Worrying and Love the Stack!
