- Subject: Re: Tables vs closures for representing objects (and JITability)
- From: Jerome Vuarand <jerome.vuarand@...>
- Date: Thu, 12 Nov 2009 17:04:25 +0100
2009/11/12 Matthew Wild <mwild1@gmail.com>:
> Hi all,
>
> I've come into a little debate recently. We're working on an API,
> which will be used to create *lots* of small objects. Performance is
> critical above everything else (don't shout at me for this :) ).
>
> The debate is whether to represent objects the standard way - as
> tables of methods (the methods being closures for efficiency), or as
> just a single closure each, that takes the method name as its first
> parameter, and has if/elseif. We're looking at about a dozen fixed
> methods max per object.
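To make the two layouts concrete, here is an untested sketch (the
point object and its method names are made up for illustration):

    -- 1) The usual layout: a table of methods, each method a closure
    -- over the object's state.
    local function new_point(x, y)
      local self = {}
      function self.get_x() return x end
      function self.move(dx, dy) x, y = x + dx, y + dy end
      return self
    end

    -- 2) One closure per object, dispatching on the method name.
    -- Ordering the clauses from most to least frequently called
    -- reduces the average number of comparisons.
    local function new_point2(x, y)
      return function(method, ...)
        if method == "get_x" then
          return x
        elseif method == "move" then
          local dx, dy = ...
          x, y = x + dx, y + dy
        else
          error("unknown method: " .. tostring(method))
        end
      end
    end

    local p = new_point(0, 0)
    p.move(1, 2)
    print(p.get_x()) --> 1

    local q = new_point2(0, 0)
    q("move", 1, 2)
    print(q("get_x")) --> 1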
With a list of 12 if/elseif cases, you have between 1 and 12
comparisons per method call. With a table index (i.e. traditional
methods), you have a single hash table lookup per method call. You can
run a simple benchmark (see the sketch after this list):
- If a lookup is faster than a single comparison, go for the tables.
- If a lookup is more expensive than 12 comparisons, go for the closures.
- If a lookup costs somewhere between 1 and 12 comparisons, the best
choice depends on the usage patterns of your methods. With uniformly
random usage you will average about 6 comparisons per call (to compare
against the single lookup). If method usage is uneven, you can
optimise the closures by ordering the if/elseif clauses from most to
least frequently called, and get fewer than 6 comparisons per call.
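A rough, untested timing sketch along those lines, using the
new_point/new_point2 constructors from the sketch above; os.clock
resolution and JIT warm-up will blur small differences, so use large
loop counts:

    local N = 1e7

    local p = new_point(0, 0)
    local t0 = os.clock()
    for i = 1, N do p.move(1, 1) end
    print("table lookup:    ", os.clock() - t0)

    local q = new_point2(0, 0)
    t0 = os.clock()
    for i = 1, N do q("move", 1, 1) end
    print("closure dispatch:", os.clock() - t0)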
> The latter seems like it would win out, and produce less garbage, etc.
> The downsides are obvious - it isn't possible (well, easily) to add
> properties to the object dynamically - so I don't want to do it
> needlessly. Is this the only thing I'm trading for speed?
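Concretely (again with the made-up sketch above): the table layout
lets you attach new fields at any time, the closure layout does not:

    local p = new_point(1, 2)
    p.color = "red"   -- fine, the object is a plain table

    local q = new_point2(1, 2)
    q.color = "red"   -- error: attempt to index a function value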
>
> One of the main things I'm also interested in is which approach would
> be most JIT-friendly. The latter representation of objects is
> uncommon, so I'm concerned LuaJIT may already be optimised for
> tables-as-objects, and I'll be wasting my time.
>
> In anticipation of replies... the argument that any gain wouldn't be
> noticeable and therefore isn't worth it doesn't really hold up...
> *everything* becomes noticeable once you multiply by a large number :)
A 0.0001% gain, even multiplied by a huge number, is still
unnoticeable compared to the total processing time (unless you can
notice a 1-second improvement over two weeks of computing: 0.0001% of
1,209,600 seconds is about 1.2 seconds).