lua-users home
lua-l archive


On 12/01/2011, at 1:38 AM, Mike Pall wrote:

> Geoff Leyland wrote:
>> It took a while to work out how to stop LuaJIT optimising out
>> the loops altogether - it might still be optimising more than I
>> want.  I'm sure this is not a good test, but for what it's
>> worth:
> Well, LuaJIT _is_ optimizing away all of the proxy overhead, of
> course. :-)

But in my first attempt I think it went further: it worked out that the loops did nothing, and so ... did nothing.
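A minimal sketch of the usual workaround (my own illustration, not Geoff's actual benchmark): keep the loop's result live by accumulating it and then using it, so the JIT can't prove the loop is dead code.

```lua
-- Hypothetical benchmark skeleton: the stores and loads stay
-- observable, so LuaJIT cannot eliminate the loop entirely.
local p = {}       -- the table (or proxy) under test
local sum = 0
for i = 1, 1e7 do
  p.x = i              -- the store being measured
  sum = sum + p.x      -- read it back and accumulate, keeping it live
end
print(sum)             -- consuming the result defeats dead-code elimination
```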

> But I was talking about Lua, not LuaJIT.

It's always interesting to test both :-) (and there is a difference with LuaJIT)

> And about hash keys, e.g.
> strings. A pattern like this will show the worst case:
>  p.x = 1; p.x = 1; p.y = 1; p.z = 1; p.z = 1; p.z = 1; p.y = 1;
> This is 40%-50% slower with table-based proxies vs. userdata
> proxies. The actual cost is much higher -- it's shadowed by the
> interpreter overhead.

Thanks, I tested just this (in Lua) and userdata proxies managed 1.35 times the cycles/second of table proxies.  I also tested with variable indexes (generated with string.char), where the ratio is more like 1.2, but I don't know how much of that to attribute to the changing keys versus the extra time spent in string.char.  For LuaJIT the speedups of userdata proxies over table proxies are actually higher (~1.4 and ~1.3), so this might be worth looking into.
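The variable-index variant might look something like this (again my own sketch, not the actual test): string.char produces a different string key each iteration, so the hash slot changes, and string.char itself adds overhead that is hard to separate out.

```lua
-- Sketch: vary the key per iteration with string.char, cycling "A".."Z".
local p = {}
local sum = 0
for i = 1, 1e6 do
  local k = string.char(65 + i % 26)  -- a changing string key each pass
  p[k] = i
  sum = sum + p[k]
end
print(sum)
```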

As Roberto said,

> I think we should see the costs in real programs.

So next time I have some time for Rima, I'll try to get it to use newproxy and proxy tables interchangeably and see whether the difference actually shows up amid all the noise.