

Hello Jacques,

Wednesday, November 4, 2009, 9:38:01 AM, you wrote:

>> The big-O notation assumes constant overhead per access or per
>> instruction. But that's far from the truth with a modern CPU. An
>> O(n) linear search can outperform an O(log n) tree search for a
>> much larger 'n' than you might imagine. Try it!

> I assume that this scenario depends on n fitting into
> an L1/L2 data cache?

No. A linear memory scan is much faster because 1) it's predictable, so the
CPU can prefetch the data, and 2) one cache line (typically 64 bytes) usually
holds several data items, so a single memory fetch serves several comparisons.
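
For anyone who wants to try the experiment, here is a rough C sketch of the
kind of measurement being suggested (an illustration only, assuming a plain
sorted array of ints; the array sizes, lookup count, and the exact crossover
point all depend on the CPU, compiler, and optimization level):

/* Compare a plain linear scan against a binary search over the same
 * sorted int array. For small n the linear scan is often faster despite
 * being O(n): the sequential reads are prefetch-friendly, and each
 * 64-byte cache line holds 16 ints. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int linear_search(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] >= key)                  /* array is sorted: can stop early */
            return a[i] == key ? i : -1;
    return -1;
}

static int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < key)      lo = mid + 1;
        else if (a[mid] > key) hi = mid - 1;
        else                   return mid;
    }
    return -1;
}

int main(void)
{
    enum { LOOKUPS = 1000000 };
    int sizes[] = { 8, 32, 128, 512, 2048 };

    for (size_t s = 0; s < sizeof sizes / sizeof sizes[0]; s++) {
        int n = sizes[s];
        int *a = malloc(n * sizeof *a);
        if (!a) return 1;
        for (int i = 0; i < n; i++)
            a[i] = 2 * i;                 /* sorted, all keys present */

        volatile long long sink = 0;      /* keep the optimizer honest */

        clock_t t0 = clock();
        for (int k = 0; k < LOOKUPS; k++)
            sink += linear_search(a, n, (k % n) * 2);
        clock_t t1 = clock();
        for (int k = 0; k < LOOKUPS; k++)
            sink += binary_search(a, n, (k % n) * 2);
        clock_t t2 = clock();

        printf("n=%5d  linear %.3fs  binary %.3fs\n", n,
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(a);
        (void)sink;
    }
    return 0;
}

On typical desktop hardware the linear scan tends to win for small n and
stays competitive for longer than the O-notation alone would suggest, which
is the point of the "Try it!" quoted above.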


-- 
Best regards,
 Bulat                            mailto:Bulat.Ziganshin@gmail.com