lua-users home
lua-l archive


On Fri, 26 Jul 2013 09:29:42 -0700 (PDT)
Leo Romanoff <> wrote:

> ----- Original Message -----
> > From: Steve Litt <>
> > To:
> > CC: Leo Romanoff <>
> > Sent: Thursday, 25 July 2013, 17:20
> > Subject: Re: Question about accessing Lua tables and arrays faster
> > 
> > On Thu, 25 Jul 2013 08:00:00 -0700 (PDT)
> > Leo Romanoff <> wrote:
> >>  - Many high-performance applications work intensively with
> >>  arrays. Currently, Lua uses the same datatype, namely tables,
> >>  for both arrays and maps. Accessing arrays from Lua is
> >>  currently not very fast. (IIRC, the Lua interpreter itself does
> >>  not even use the lua_rawseti/lua_rawgeti functions; only C
> >>  libraries for Lua can.) It would be nice to have a faster way
> >>  of operating on arrays in Lua.
> >> 
> >>  BTW, I did some experiments where I create an array of 1000000
> >>  elements and then perform 1000 iterations over it, touching
> >>  each element and assigning a new value to it or incrementing
> >>  its current value. I implemented the same algorithm in pure
> >>  Lua, as a C library for Lua which uses
> >>  lua_rawseti/lua_rawgeti, and as a C program. The outcome: the
> >>  pure Lua solution is the slowest, the C library for Lua is
> >>  roughly 1.5 times faster, and the C version is about 14.5
> >>  times faster than the pure Lua solution.
> > 
> > Hi Leo,
> > 
> > Comparing any interpreter to C is certain to yield disappointing
> > results. A better test would have been to compare pure Lua to Perl
> > arrays, Python lists, and whatever Ruby uses for arrays (I long
> > since forgot). If *those* are faster than Lua in the algorithm you
> > mention, then we have something to talk about.
> Just out of curiosity I created versions of the same algorithm for
> Perl and Python, as you suggested.
> The outcome: they are slightly faster than Lua, by maybe 5%-10%,
> on this benchmark.
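
For reference, the benchmark Leo describes can be sketched in pure Lua
roughly as follows. This is a reconstruction from his description, not
his actual code; his figures used 1000000 elements and 1000 passes, and
the counts are scaled down here so the sketch finishes in a moment.

```lua
-- Sketch of the benchmark as described above (a reconstruction, not
-- Leo's actual code). His figures used N = 1000000 and ITER = 1000;
-- scaled down here so the sketch runs quickly.
local N, ITER = 100000, 100

local t = {}
for i = 1, N do t[i] = 0 end          -- build the array: keys 1..N only

local start = os.clock()
for _ = 1, ITER do
  for i = 1, N do
    t[i] = t[i] + 1                   -- touch and increment each element
  end
end
print(string.format("elapsed: %.3f s", os.clock() - start))
```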

Perl doesn't surprise me a bit; Perl has always had a very fast
interpreter. Python, though, is a surprise.

Just to make sure I understand, were your only keys 1 through the
number of elements? No 0, no text keys, just 1 through whatever?

Also, did you take steps to minimize your metatables? An
apples-to-apples comparison would use minimal or no metatables. I
don't know: is it possible to set a table's metatable to nil, and if
you do so, does it affect the speed of your 1000x1000000 incrementer
algorithm?
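
To answer that question concretely: yes, Lua lets you clear a
metatable with setmetatable(t, nil), unless the metatable has a
__metatable field, which protects it. Note also that a freshly created
table has no metatable at all, so plain indexing already takes the
no-metatable path. A minimal illustrative example (not from the
thread):

```lua
-- A metatable can be cleared with setmetatable(t, nil), unless the
-- metatable sets __metatable (which protects it, making setmetatable
-- raise an error instead).
local t = setmetatable({}, { __index = function() return 0 end })
assert(t[1] == 0)                -- missing key answered by __index
setmetatable(t, nil)             -- clear the metatable
assert(t[1] == nil)              -- no fallback any more
assert(getmetatable(t) == nil)
```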

Two more things, and the first one is this: I have no doubt that the
others are faster for walking over and incrementing each element. But
I'm wondering if, for more ambitious per-element work, you could put
the element processing in a metatable, and whether that would speed
things up. Obviously there's no way to write that algorithm in Perl or
Python, so you can't really A/B test against them. But you could A/B
test the metatable processing method against a brute-force,
process-in-a-subroutine method.
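
That A/B test might be sketched in Lua like this: do the per-element
work inside a __newindex metamethod versus in an ordinary function.
All names here (process, tracked, store) are hypothetical stand-ins,
not anything from the thread:

```lua
-- Hypothetical A/B sketch: per-element work in a metamethod vs. in a
-- plain function. "process" stands in for whatever the element work is.
local function process(v) return v * 2 + 1 end

-- A: plain subroutine applied in a loop
local plain = {}
for i = 1, 5 do plain[i] = process(i) end

-- B: a metatable intercepts every write and applies the same work.
-- Writes land in a backing store so __newindex fires on each
-- assignment (it only fires for keys absent from the table itself).
local store = {}
local tracked = setmetatable({}, {
  __newindex = function(_, k, v) store[k] = process(v) end,
  __index    = store,
})
for i = 1, 5 do tracked[i] = i end

for i = 1, 5 do assert(plain[i] == tracked[i]) end
```

In stock Lua one would expect the metamethod version to pay a
function-call cost on every write, so it is not obviously faster;
measuring both, as suggested above, is the only way to know.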

The second thing is this: thank goodness the difference is only 5 to
10%. There are very few cases where one cares whether something takes
0.11 seconds instead of 0.10, 1.1 seconds instead of 1.0, or an hour
and six minutes rather than an hour. Add to this the fact that there
are few computer tasks these days where the software is the
bottleneck, at least for more than 5 seconds at a time, and the choice
of interpreter probably comes down to which is easiest to use, not
which has the best runtime speed. After all, if runtime speed were the
top priority, Lua, Python and Perl wouldn't even exist.

If you try the "process the element using the metatable" approach,
please let us know how it works.



Steve Litt                *
Troubleshooting Training  *  Human Performance