It depends on how qsort is implemented in the C standard
library that perl was compiled against. My guess is no,
but it really comes down to your particular C library's implementation.
It is easy to add a "quicksort worst-case avoider"
by not using the first element as the pivot and instead
doing something like:
adding a "sorted list detector"
picking the pivot randomly (instead of as the first element)
shuffling the list before sorting
using another pivot-picking technique
Since not everyone knows the internals of quicksort: quicksort
has a worst-case performance of O(n^2) if the
worst pivot is picked at each iteration (if you don't know what a
pivot is, don't worry... if you want to know, I can explain it). This
worst-case performance can happen if the list is already in
sorted order and the pivot is picked by choosing the first
element of the list as the pivot. However, there are
techniques for easily avoiding this pitfall.
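To make that blow-up concrete, here is a minimal C sketch (the function and counter names are mine, not from any real qsort) of a first-element-pivot quicksort instrumented to count comparisons. On an already-sorted array every partition is maximally lopsided, so an n-element input costs n(n-1)/2 comparisons:

```c
#include <assert.h>

/* Hypothetical sketch: quicksort that always uses the first element as
   the pivot, with a counter to expose the O(n^2) behavior. */
static long cmp_count = 0;

static void qs_first_pivot(int *a, int lo, int hi) {
    if (lo >= hi)
        return;
    int pivot = a[lo];
    int i = lo;                          /* boundary of the "< pivot" region */
    for (int j = lo + 1; j <= hi; j++) {
        cmp_count++;
        if (a[j] < pivot) {
            int t = a[++i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[lo]; a[lo] = a[i]; a[i] = t;   /* move pivot into final place */
    qs_first_pivot(a, lo, i - 1);
    qs_first_pivot(a, i + 1, hi);
}
```

On a sorted 100-element array this performs exactly 100*99/2 = 4950 comparisons, where a well-chosen pivot would stay around n log n.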
In my data structures and algorithm analysis class,
they told us to select one of the elements at random
to use as the pivot. This adds a small amount of
overhead (the time needed to pick a random
number each iteration) to the average-case scenario,
but it basically eliminates the worst-case scenario,
effectively transforming it into an average-case
scenario. "Random number" here can be anything that
can pass as random; if your system clock has good
enough precision, you can just grab that. The
key thing is that you won't be picking the same
element every iteration -- sometimes an early
element, sometimes a late one, sometimes a middle
one. So it makes no real difference how the list
is ordered initially.
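As a sketch of that advice (the names are mine, and this is an illustration in C, not Perl's actual sort), randomizing the pivot costs only one rand() call per partition:

```c
#include <stdlib.h>

/* Hypothetical sketch: the same partition scheme as a plain quicksort,
   but with a randomly chosen pivot so an already-sorted input no longer
   triggers the O(n^2) worst case. */
static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static void qs_random_pivot(int *a, int lo, int hi) {
    if (lo >= hi)
        return;
    /* Pick a pivot index uniformly in [lo, hi] and move it to the front;
       the rest is an ordinary first-element partition. */
    swap_int(&a[lo], &a[lo + rand() % (hi - lo + 1)]);
    int pivot = a[lo];
    int i = lo;
    for (int j = lo + 1; j <= hi; j++)
        if (a[j] < pivot)
            swap_int(&a[++i], &a[j]);
    swap_int(&a[lo], &a[i]);
    qs_random_pivot(a, lo, i - 1);
    qs_random_pivot(a, i + 1, hi);
}
```

With this change a sorted input behaves like any other input, because the pivot lands at a different position on each call.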
This is of course all moot now; these days we just
use Perl's built-in sort.