- Subject: Re[2]: [ANN] LuaJIT-2.0.0-beta1
- From: Bulat Ziganshin <bulat.ziganshin@...>
- Date: Wed, 4 Nov 2009 11:06:38 +0300
Hello Jacques,
Wednesday, November 4, 2009, 9:38:01 AM, you wrote:
>> The big-O notation assumes constant overhead per access or per
>> instruction. But that's far from the truth with a modern CPU. An
>> O(n) linear search can outperform an O(log n) tree search for a
>> much larger 'n' than you might imagine. Try it!
> I assume that this scenario depends on n fitting into
> an L1/L2 data cache?
no. a linear memory read is much faster because 1) it's predictable, so
the cpu can prefetch the data, and 2) one cache line (typically 64 bytes)
usually holds several data items, so each miss fetches more than one element
--
Best regards,
Bulat mailto:Bulat.Ziganshin@gmail.com