- Subject: Fwd: A stable sort
- From: Eike Decker <zet23t@...>
- Date: Thu, 9 May 2013 17:22:18 +0200
>> hopefully without bugs
> It sorts, but is maximally unstable: it inverts the order of
> equal elements.
Oh. At least I didn't claim the second version was stable :) - now it
is. I had to flip the arguments to the comparison function to maintain
stability.
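To make the equal-elements case concrete: in the merge step an element
may only be taken from the right run if it is strictly "less than" the
head of the left run; on ties the left element goes first. A minimal
sketch of that rule (my own illustration here, not the exact code from
the attached file), where cmp(a, b) is true if a must come before b:

-- Stably merge the adjacent runs t[lo..mid] and t[mid+1..hi] into dst.
local function merge(t, dst, lo, mid, hi, cmp)
  local i, j = lo, mid + 1
  for k = lo, hi do
    -- Take from the right run only when it is strictly "less than" the
    -- left head; equal elements come from the left, which keeps it stable.
    if j <= hi and (i > mid or cmp(t[j], t[i])) then
      dst[k] = t[j]; j = j + 1
    else
      dst[k] = t[i]; i = i + 1
    end
  end
end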
>> It avoids recursions and function calls and the calculations are
>> pretty simple since the step size is predetermined.
> Predetermined step size tends to produce critical sort lengths
> where just one extra element makes a big jump in the sort time.
> Before claiming to be faster than anything you should check for that.
You are right about that, but my (further) testing didn't indicate
this. If there is such a critical size, I would like to know where it
shows up. At least the averages over a range of n elements indicate
that it performs consistently better. What did show up is that it
didn't handle already sorted sequences well; I added a relatively
cheap check for that case and it's improved now. I'll append my test
code (even though it's not pretty).
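For illustration, the overall shape I have in mind is a non-recursive,
bottom-up merge sort with a doubling run width, plus one O(n) pre-pass
that bails out when the input is already sorted. This is only a
simplified sketch under those assumptions, not the code in the attached
sorttest.lua:

-- Bottom-up merge sort sketch with a cheap "already sorted" check.
-- cmp(a, b) should return true if a must come before b; defaults to "<".
local function msort_sketch(t, cmp)
  cmp = cmp or function(a, b) return a < b end
  local n = #t

  -- Cheap O(n) pre-pass: nothing out of order means we are done.
  local sorted = true
  for i = 2, n do
    if cmp(t[i], t[i - 1]) then sorted = false; break end
  end
  if sorted then return t end

  local src, dst = t, {}
  local width = 1
  while width < n do
    -- Merge pairs of adjacent runs of the current, predetermined width.
    for lo = 1, n, 2 * width do
      local mid = math.min(lo + width - 1, n)
      local hi = math.min(lo + 2 * width - 1, n)
      local i, j = lo, mid + 1
      for k = lo, hi do
        -- Right element wins only when strictly smaller: ties stay stable.
        if j <= hi and (i > mid or cmp(src[j], src[i])) then
          dst[k] = src[j]; j = j + 1
        else
          dst[k] = src[i]; i = i + 1
        end
      end
    end
    src, dst = dst, src
    width = width * 2
  end

  -- If the final pass ended in the scratch table, copy the result back.
  if src ~= t then
    for k = 1, n do t[k] = src[k] end
  end
  return t
end

The scratch table is, of course, the O(N) extra storage that comes up
further down.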
>> I think a native C implementation would be pretty straight forward
>> and fast.
> As a starting point, you're welcome to adapt my C implementation
> of Fisher's version in any way you like.
I still lack a C development environment on my new machine :(
>> Oh, and about stable sorting in general, I am in line with Fisher's
>> opinion.
> Yes. Why use an unstable algorithm when there are stable algorithms
> around that are just as fast? Old answer: because those stable
> algorithms require O(N) extra storage. Irrelevant when 2GB of memory
> is not an upper-end machine any more.
Understandable - I wonder whether there could be a trick to keep it in
place all the same, but there probably isn't one that doesn't hurt the
runtime.
As for how I tested (and remind me: if it's not tested, it's broken...
- I should have known that in the first place, I always fall for that
trap), here's what I did:
- sorting test arrays with lengths spanning a range
- four differently randomized sets: random (using math.random), sorted,
inverse sorted, partially sorted
- summing up runtimes
As with all benchmarks/claims: I don't expect it to be correct.
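Roughly, the harness has the shape below. The helper names (gen_random
and friends) and the length range are made up for this sketch; the real
code is in the attached sorttest.lua, and msort/stable_sort would be
added to the sorts table:

-- Rough shape of the benchmark: four input kinds, a range of lengths,
-- runtimes summed per sort function. Helper names are illustrative.
local function gen_random(n)
  local t = {}
  for i = 1, n do t[i] = math.random(n) end
  return t
end

local function gen_sorted(n)
  local t = gen_random(n); table.sort(t); return t
end

local function gen_inv_sorted(n)
  local t, r = gen_sorted(n), {}
  for i = 1, n do r[i] = t[n - i + 1] end
  return r
end

local function gen_part_sorted(n)
  local t = gen_sorted(n)
  for _ = 1, math.floor(n / 10) do  -- perturb a tenth of the positions
    local a, b = math.random(n), math.random(n)
    t[a], t[b] = t[b], t[a]
  end
  return t
end

local function copy(t)
  local c = {}
  for i = 1, #t do c[i] = t[i] end
  return c
end

local generators = {
  rand = gen_random, sorted = gen_sorted,
  inv_sorted = gen_inv_sorted, part_sorted = gen_part_sorted,
}
local sorts = { ["table.sort"] = table.sort }  -- plus msort and stable_sort

for name, gen in pairs(generators) do
  local totals = {}
  for n = 100, 2000, 100 do           -- some range of array lengths
    local input = gen(n)
    for sname, sort in pairs(sorts) do
      local work = copy(input)        -- every sort sees identical data
      local start = os.clock()
      sort(work)
      totals[sname] = (totals[sname] or 0) + (os.clock() - start)
    end
  end
  print(name)
  for sname, secs in pairs(totals) do
    print(("  %s = %.3f"):format(sname, secs * 1000))
  end
end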
I am now appending just the single Lua file that contains Fisher's
version, my version, and the test code. I also checked the sorted
sequences I produce against Fisher's version, which is what I should
have done in the first place.
Also included now is the output of the test run:
rand
table.sort= 11.624 msort= 21.291 stable_sort= 23.787
inv_sorted
table.sort= 14.472000000001 msort= 15.549 stable_sort= 19.673
part_sorted
table.sort= 13.055 msort= 16.577 stable_sort= 17.537
sorted
table.sort= 9.7219999999999 msort= 8.2539999999995 stable_sort= 11.715
On the randomized input, my sort is on average only slightly faster
than Fisher's stable_sort function. On the inverse sorted array it is a
good deal faster, and thanks to the relatively cheap check whether the
array is already sorted, sorting an already sorted sequence is even
faster than table.sort - and mind you, this is not LuaJIT.
Cheers,
Eike
Attachment: sorttest.lua