I wonder what the profile of tables passed to the sort() function looks like (in terms of the number of items being sorted, etc.)? If there is a clear bias toward smaller tables, then a dual algorithm that forks based on the size of the table might be a good idea (smaller tables get a faster sort using a copy-based algorithm; larger tables get an in-place sort that is slower but doesn't take the memory hit).
Of course, this means more code and more code paths (and testing), which is not good, but imho it is better than everyone rolling their own sort to bypass perceived issues with the built-in sort() function.
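To make that concrete, something like the following Lua sketch is what I have in mind (the names, the SMALL_LIMIT cutoff and the plain merge sort in the copy-based branch are all made up for illustration, not a concrete proposal):

  -- Dual-algorithm sketch: threshold and names are illustrative only.
  local SMALL_LIMIT = 64   -- made-up cutoff; a real value would come from profiling

  -- Copy-based merge sort: uses O(n) extra space, but is stable.
  local function mergesort(t, comp)
    comp = comp or function (a, b) return a < b end
    local n = #t
    if n < 2 then return t end
    local mid = math.floor(n / 2)
    local left, right = {}, {}
    for i = 1, mid do left[i] = t[i] end
    for i = mid + 1, n do right[i - mid] = t[i] end
    mergesort(left, comp)
    mergesort(right, comp)
    local i, j = 1, 1
    for k = 1, n do
      if j > #right or (i <= #left and not comp(right[j], left[i])) then
        t[k] = left[i]; i = i + 1     -- take from left on ties: keeps stability
      else
        t[k] = right[j]; j = j + 1
      end
    end
    return t
  end

  -- Front end: small tables get the copy-based (stable) sort,
  -- large tables get the existing in-place table.sort.
  local function dualsort(t, comp)
    if #t <= SMALL_LIMIT then
      return mergesort(t, comp)
    else
      table.sort(t, comp)
      return t
    end
  end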
The only real issue I am aware of is that it is not stable. Using O(n)
space and/or 1000 lines of code (vs. the current ~100 lines) does not
seem like a reasonable cost for solving that problem (despite all
advances in sort technology :-).
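For what it's worth, anyone who really needs stability can already build it on top of the current sort by remembering original positions and breaking ties on them; a minimal sketch (the name stable_sort and the wrapper-table representation are just for illustration):

  -- Stable sort built on top of table.sort: pair each element with its
  -- original index and use that index to break ties in the comparison.
  local function stable_sort(t, comp)
    comp = comp or function (a, b) return a < b end
    local n = #t
    local wrapped = {}
    for i = 1, n do wrapped[i] = { value = t[i], index = i } end
    table.sort(wrapped, function (a, b)
      if comp(a.value, b.value) then return true end
      if comp(b.value, a.value) then return false end
      return a.index < b.index        -- equal elements keep their original order
    end)
    for i = 1, n do t[i] = wrapped[i].value end
    return t
  end

That still pays the O(n) extra space, of course, but in the user's code rather than in the library.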
About DoS attacks based on "bad data": if that is a real problem, it can
easily be solved by introducing some randomness when choosing a pivot.
(I must confess that I do not know the "known worst case" Dirk talked
about.)
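To show what I mean, here is the idea sketched in Lua rather than in the library's actual C code (this is just an illustrative quicksort, not the real implementation):

  -- Quicksort with a randomly chosen pivot: a crafted "bad" input can no
  -- longer trigger the quadratic worst case deterministically.
  local function rqsort(t, comp, lo, hi)
    comp = comp or function (a, b) return a < b end
    lo, hi = lo or 1, hi or #t
    if lo >= hi then return t end
    local p = math.random(lo, hi)     -- the only change: pick the pivot at random
    t[lo], t[p] = t[p], t[lo]
    local pivot, i = t[lo], lo
    for j = lo + 1, hi do             -- partition around the pivot
      if comp(t[j], pivot) then
        i = i + 1
        t[i], t[j] = t[j], t[i]
      end
    end
    t[lo], t[i] = t[i], t[lo]
    rqsort(t, comp, lo, i - 1)
    rqsort(t, comp, i + 1, hi)
    return t
  end

Since the pivot choices are not known in advance, an attacker cannot pre-compute an input that forces them all to be bad.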