Jacques Chester <jacques@chester.id.au> writes:

> On 03/11/2009, at 6:57 PM, Mike Pall wrote:
>> The big-O notation assumes constant overhead per access or per
>> instruction. But that's far from the truth with a modern CPU. An
>> O(n) linear search can outperform an O(log n) tree search for a
>> much larger 'n' than you might imagine. Try it!
>
> I assume that this scenario depends on n fitting into
> an L1/L2 data cache? Now that you put it this way I can
> see how a tree search would be slow -- constantly sending
> the CPU galloping off into main memory for the next
> element.
>
> There's a book in there somewhere. "Data Structures
> and Algorithms for Modern Architectures".

Knuth's "The Art of Computer Programming" contains numerous algorithms
for working with sequential-access-optimized memory (aka tape drives).
Modern memory prefetching is pulling us back in that direction: sequential
access is cheap again, while chasing pointers through memory is comparatively
expensive.

Not much new under the sun.
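
For anyone who wants to take Mike's "Try it!" literally, here is a minimal
sketch in Lua. The array size, key pattern, and one-million-lookup repeat
count are arbitrary choices of mine, and a binary search over a sorted array
stands in for the O(log n) case; a pointer-based tree would typically fare
worse still, since every node it visits can be a separate cache miss.

-- linear_find: scan the sorted array front to back.
-- Contiguous, branch-predictable, prefetch-friendly.
local function linear_find(a, n, key)
  for i = 1, n do
    if a[i] >= key then return i end
  end
  return nil
end

-- binary_find: classic O(log n) search over the same sorted array.
local function binary_find(a, n, key)
  local lo, hi = 1, n
  while lo <= hi do
    local mid = math.floor((lo + hi) / 2)
    if a[mid] < key then
      lo = mid + 1
    elseif a[mid] > key then
      hi = mid - 1
    else
      return mid
    end
  end
  return nil
end

local n = 64                        -- arbitrary; vary this to look for the crossover
local a = {}
for i = 1, n do a[i] = i * 2 end    -- sorted keys 2, 4, ..., 2n

local reps = 1000000
local t0 = os.clock()
for r = 1, reps do linear_find(a, n, ((r % n) + 1) * 2) end
print("linear:", os.clock() - t0)

t0 = os.clock()
for r = 1, reps do binary_find(a, n, ((r % n) + 1) * 2) end
print("binary:", os.clock() - t0)

Run it under plain Lua and under LuaJIT and vary n; the point at which the
binary search finally wins is often larger than the asymptotic argument
alone would suggest.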

-- 
David Kastrup