• Subject: Re: tables
• From: "Paul Chiusano" <paul.chiusano@...>
• Date: Mon, 6 Mar 2006 18:40:26 -0500

```
> Convenient maybe, but also more expensive.  The current # implementation
> is O(log(N)) where N is the number of entries in the array part.

I thought the # operator was faster, since even though it is
technically a logarithmic operation, it avoids having to read and
write to a variable/table. My expectation is that for building
enormous tables, it would end up being slower, but I decided to do a
few little tests of my own just to get a sense for the behavior.
Here are some results that might be illuminating:

The regular Lua 5.1 way, I'll call it (#):
$ time lua -e 'for r=1,25 do local t = {}; for i=1,1e5 do t[#t+1] = i end end'
user    0m1.230s

The store-in-a-table way, call it (t.n):
$ time lua -e 'for r=1,25 do local t = {n=0}; for i=1,1e5 do t.n = t.n+1; t[t.n] = i end end'
user    0m1.319s

The store-in-a-local way, call it (local n):
$ time lua -e 'for r=1,25 do local t = {}; local n = 0; for i=1,1e5 do n = n + 1; t[n] = i end end'
user    0m0.640s

So for building tables of size 100,000, using the (#) method is faster
than explicitly storing the size in a field in the table, but slower
than explicitly tracking the size in a local variable (at least on
this machine, a 1.5 GHz G4). For building a table of size 10,000, I
get the following times:

100 trials
(#)         0m0.393s
(t.n)       0m0.499s
(local n)   0m0.229s

(local n) is still the fastest, but I wouldn't be surprised if, for
still smaller tables, even tracking the size in a local variable
stopped paying off: that constant-time read and write could end up
costing more than what is technically a logarithmic-time operation.
Here are the same results for building a table of size 100: (local n)
is still faster, but by a smaller margin.

10000 trials
(#)         0m0.425s
(t.n)       0m0.649s
(local n)   0m0.323s

I wonder if/when (#) and (local n) converge? It might be nice to plot
these three methods as a function of table size.
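-- [A sketch of such a comparison, in case anyone wants to plot it: this
-- times the three append idioms over a range of table sizes and prints
-- rows suitable for gnuplot or similar. The function/table names here are
-- just mine, not anything standard, and os.clock resolution will make the
-- smallest sizes noisy.]
local methods = {
  ["(#)"]       = function (size) local t = {}; for i = 1, size do t[#t+1] = i end return t end,
  ["(t.n)"]     = function (size) local t = {n=0}; for i = 1, size do t.n = t.n + 1; t[t.n] = i end return t end,
  ["(local n)"] = function (size) local t = {}; local n = 0; for i = 1, size do n = n + 1; t[n] = i end return t end,
}
for _, size in ipairs{1e2, 1e3, 1e4, 1e5, 1e6} do
  for name, build in pairs(methods) do
    local start = os.clock()
    build(size)
    -- one line per (size, method) pair: easy to redirect to a file and plot
    print(size, name, os.clock() - start)
  end
end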

For a million elements, storing the size in a table field is just a
bit faster than using (#).

5 trials
(#)         0m2.949s
(t.n)       0m2.663s
(local n)   0m1.313s

If you are actually accumulating a huge table and you are performing
close to zero computation for each append (as is the case for this
example), then it looks like it could be worth tracking the size in a
local variable, assuming this part of the code is in fact a bottleneck
in the program. But if the table being created is small, or there is
some computation being done for each append (like allocating some
memory, computing a square root, or whatever), then I would imagine
that using # would at worst decrease execution speed by only a few
percentage points.
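-- [A quick way to check that last claim, separate from the timings above:
-- give each append some real work to do (math.sqrt here) and compare (#)
-- against (local n). The function names are mine; the expectation is that
-- the gap shrinks once the per-append work dominates.]
local function with_length_op (size)
  local t = {}
  for i = 1, size do t[#t+1] = math.sqrt(i) end
  return t
end
local function with_local_n (size)
  local t = {}
  local n = 0
  for i = 1, size do n = n + 1; t[n] = math.sqrt(i) end
  return t
end
local start = os.clock(); with_length_op(1e6); local a = os.clock() - start
start = os.clock(); with_local_n(1e6); local b = os.clock() - start
print(("# = %.3fs, local n = %.3fs"):format(a, b))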

-Paul

```