On Wed, Nov 4, 2015 at 11:17 PM, Soni L. <fakedme@gmail.com> wrote:
>
>
> On 04/11/15 06:04 PM, Roberto Ierusalimschy wrote:
>>>
>>> I wonder what the profile of tables passed to the sort() function looks
>>> like (in terms of the number of items being sorted, etc.)? If there is a
>>> clear bias toward smaller tables, then a dual algorithm that forks based
>>> on the size of the table might be a good idea (smaller tables get a faster
>>> sort using a copy-based algorithm; larger tables get an in-place sort that
>>> is slower but doesn't take the memory hit).
>>>
>>> Of course, this means more code and more code paths (and testing), which
>>> is not good, but is better imho than everyone rolling their own sort to
>>> bypass perceived issues with the built-in sort() function.
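A rough sketch of what that dispatch could look like in plain Lua (the
threshold and the function names are made up for illustration; the
small-table branch uses a copy-based, stable merge sort, the large-table
branch just defers to table.sort):

    local SMALL = 64   -- threshold picked arbitrarily for illustration

    -- stable merge sort using scratch copies: O(n) extra space,
    -- which is acceptable when n is small
    local function merge_sort_small(t, comp)
      local n = #t
      if n < 2 then return end
      local mid = math.floor(n / 2)
      local left, right = {}, {}
      for i = 1, mid do left[i] = t[i] end
      for i = mid + 1, n do right[i - mid] = t[i] end
      merge_sort_small(left, comp)
      merge_sort_small(right, comp)
      local i, j = 1, 1
      for k = 1, n do
        if j > #right or (i <= #left and not comp(right[j], left[i])) then
          t[k] = left[i]; i = i + 1   -- on ties take the left element: stability
        else
          t[k] = right[j]; j = j + 1
        end
      end
    end

    local function sort(t, comp)
      comp = comp or function (a, b) return a < b end
      if #t <= SMALL then
        merge_sort_small(t, comp)   -- small: stable, copy-based
      else
        table.sort(t, comp)         -- large: in-place, not stable
      end
    end
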
>>
>> The only real issue I am aware of is that it is not stable. Using O(n)
>> space and/or 1000 lines of code (vs the current 100 lines) does not seem
>> a reasonable cost for solving that problem (despite all advances in
>> sort technology :-).
>>
>> About DoS attacks based on "bad data": if that is a real problem, it can
>> be easily solved by introducing some randomness when choosing a pivot.
>> (I must confess that I do not know the "known worst case" Dirk talked
>> about.)
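
The randomized-pivot idea is easy to sketch in Lua (this is only an
illustration, not what lsort.c actually does): picking the pivot with
math.random means crafted input cannot reliably trigger the quadratic case.

    math.randomseed(os.time())

    -- Lomuto-style partition with a randomly chosen pivot
    local function partition(t, lo, hi, comp)
      local p = math.random(lo, hi)     -- attacker cannot predict this choice
      t[p], t[hi] = t[hi], t[p]         -- move the pivot to the end
      local pivot = t[hi]
      local i = lo - 1
      for j = lo, hi - 1 do
        if comp(t[j], pivot) then
          i = i + 1
          t[i], t[j] = t[j], t[i]
        end
      end
      t[i + 1], t[hi] = t[hi], t[i + 1]
      return i + 1
    end

    local function quicksort(t, lo, hi, comp)
      if lo < hi then
        local p = partition(t, lo, hi, comp)
        quicksort(t, lo, p - 1, comp)
        quicksort(t, p + 1, hi, comp)
      end
    end

    -- usage: quicksort(t, 1, #t, function (a, b) return a < b end)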
>
> Or heapsort. Not stable, but O(n log n) time and O(1) extra space.
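
For reference, heapsort fits in a few lines of Lua and needs no auxiliary
array (a rough sketch, not a drop-in replacement for table.sort):

    local function heapsort(t, comp)
      comp = comp or function (a, b) return a < b end
      local n = #t
      local function siftdown(root, last)
        while true do
          local child = 2 * root
          if child > last then return end
          if child < last and comp(t[child], t[child + 1]) then
            child = child + 1             -- take the larger of the two children
          end
          if comp(t[root], t[child]) then
            t[root], t[child] = t[child], t[root]
            root = child
          else
            return
          end
        end
      end
      for i = math.floor(n / 2), 1, -1 do siftdown(i, n) end  -- build a max-heap
      for last = n, 2, -1 do
        t[1], t[last] = t[last], t[1]     -- move the current maximum to the end
        siftdown(1, last - 1)
      end
    end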

Or introsort: quicksort with a recursion-depth limit that falls back to
heapsort, so it stays in place and keeps an O(n log n) worst case.

>>
>> -- Roberto
>>
>
> --
> Disclaimer: these emails may be made public at any given time, with or
> without reason. If you don't agree with this, DO NOT REPLY.
>
>



-- 


Best regards,
Boris Nagaev