- Subject: table.concat - why so complex?
- From: "Martijn van Buul (Ellips B.V.)" <martijn.van.buul@...>
- Date: Tue, 6 Apr 2010 14:28:41 +0200
Hi,
While trying to find out why the runtime of my Lua functions isn't as constant as I'd like it to be, I stumbled over table.concat, and I've since been wondering why it is implemented the way it is.
My naive solution would be simple: iterate over the array, add up the string lengths of its members, add the required storage for any supplied separator, allocate the target string at exactly the right size, and memcpy() the array members (and any separators) over to their final location. Required memory overhead: none. Amount of memory moved: exactly the size of the target string. Computational overhead: one extra iteration over all members to determine the final length - but this should be a relatively fast operation, since each string's length is already known.
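In C, the two-pass approach I have in mind would look roughly like this (a sketch only; concat_strings and its parameters are made up for illustration, not part of the Lua API):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical two-pass concatenation: the first pass sums the lengths,
 * the second pass memcpy()s each piece into a single allocation. */
static char *concat_strings(const char **parts, const size_t *lens,
                            size_t n, const char *sep, size_t seplen,
                            size_t *outlen)
{
    size_t total = 0, i;
    char *buf, *p;

    /* Pass 1: final size is the sum of all member lengths plus
     * (n - 1) separators. */
    for (i = 0; i < n; i++)
        total += lens[i];
    if (n > 1)
        total += (n - 1) * seplen;

    buf = malloc(total + 1);
    if (buf == NULL)
        return NULL;

    /* Pass 2: copy each member (and separator) straight to its
     * final location - each byte is moved exactly once. */
    p = buf;
    for (i = 0; i < n; i++) {
        if (i > 0) {
            memcpy(p, sep, seplen);
            p += seplen;
        }
        memcpy(p, parts[i], lens[i]);
        p += lens[i];
    }
    *p = '\0';
    *outlen = total;
    return buf;
}
```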
Instead, table.concat uses a rather complex algorithm, partially described in PIL section 11.6, in which it maintains a stack of strings that are concatenated in a Tower-of-Hanoi-esque way. What's the point of doing that? As far as I can see, the only result is *more* memory being allocated through realloc (possibly leading to a hidden memmove), and data being copied more than once, except for trivial cases where everything fits into a single luaL_Buffer.
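For reference, the merging strategy from PIL 11.6 as I understand it: each pushed string is repeatedly merged with the string below it on a stack while the lower string is not longer, which keeps the stack small but copies data several times. A rough C paraphrase of that scheme (my own sketch, not the actual Lua source; slot, push_string and collapse are names I made up):

```c
#include <stdlib.h>
#include <string.h>

#define MAXSTACK 32

/* One slot of the string stack: a heap-allocated buffer plus length. */
struct slot { char *s; size_t len; };

static struct slot stack[MAXSTACK];
static int top = 0;  /* number of occupied slots */

/* Concatenate two slots into a freshly allocated one, freeing both.
 * Note that both inputs get copied again here - this is the repeated
 * copying the post complains about. */
static struct slot merge(struct slot a, struct slot b)
{
    struct slot r;
    r.len = a.len + b.len;
    r.s = malloc(r.len + 1);
    memcpy(r.s, a.s, a.len);
    memcpy(r.s + a.len, b.s, b.len);
    r.s[r.len] = '\0';
    free(a.s);
    free(b.s);
    return r;
}

/* Push a copy of s, then keep merging downward while the string below
 * is not longer than the one above it (the PIL 11.6 invariant). */
static void push_string(const char *s, size_t len)
{
    struct slot t;
    t.len = len;
    t.s = malloc(len + 1);
    memcpy(t.s, s, len);
    t.s[len] = '\0';
    stack[top++] = t;
    while (top > 1 && stack[top - 2].len <= stack[top - 1].len) {
        stack[top - 2] = merge(stack[top - 2], stack[top - 1]);
        top--;
    }
}

/* Collapse whatever remains on the stack into a single string. */
static struct slot collapse(void)
{
    while (top > 1) {
        stack[top - 2] = merge(stack[top - 2], stack[top - 1]);
        top--;
    }
    return stack[--top];
}
```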
I'm pretty sure I'm overlooking something - but what?
Kind regards,
Martijn (who still wonders why he's seeing a variation of +/- 10% in runtime on constant data)