- Subject: Re: table.insert and sizes list
- From: Mike Pall <mikelu-0412@...>
- Date: Fri, 10 Dec 2004 07:40:24 +0100
Hi,
Drew Powers wrote:
> Furthermore, it looks like the internal sizes table is holding a
> non-weak reference to tables for which setn has been called.
No, it really is a weak table. What you are seeing is the size of the sizes
table itself. Due to the way it is used, it is never shrunk: it keeps
sizeof(struct Node) bytes for every entry that was ever in use at the same
time, i.e. for its peak entry count. That is between 20 and 40 bytes per
entry, depending on your choice of lua_Number, alignment requirements and
the pointer size of your CPU (this is for the hash part of tables, which is
what matters here -- the array part takes between 8 and 16 bytes per entry).
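If you want a rough feel for those per-entry costs on your own build, you can
compare gcinfo() readings before and after filling the array part and the hash
part. This is just an illustrative sketch (the key choices and counts are
arbitrary, and gcinfo() reports kilobytes, so the results are approximate):

collectgarbage(); local before = gcinfo()
local a = {}
for i = 1,65536 do a[i] = true end    -- dense 1..n keys land in the array part
collectgarbage(); local after_array = gcinfo()
local h = {}
for i = 1,65536 do h[-i] = true end   -- negative keys always go to the hash part
collectgarbage(); local after_hash = gcinfo()
print((after_array - before) * 1024 / 65536)      -- ~bytes per array slot
print((after_hash - after_array) * 1024 / 65536)  -- ~bytes per hash node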
But there are more effects at play: Lua 5.1-work2 does not shrink memory
aggressively enough. The tables in your example are not collected often
enough between loop iterations (since you overwrite 't'), which increases
the number of tables that are live at the same time -- and that, in turn,
makes the sizes table big.
With Lua 5.1-work3 you'll get much lower numbers because there is only a
handful of tables in use at any point in time.
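I'm guessing your loop looks roughly like the following -- the original
example isn't quoted here, so the exact shape and counts are my assumption:

print(gcinfo())
for i = 1,65530 do
  local t = {}          -- 't' from the previous iteration becomes garbage
  table.insert(t, 1)    -- creates an entry in the internal sizes table
end
collectgarbage()
print(gcinfo())

With work2 the second number should stay noticeably higher than the first
(many of those tables were live at once, so the sizes table grew big); with
work3 it should drop back close to the first.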
OTOH here's what happens when you keep the tables across loop iterations:
print(gcinfo())
local x = {}
for i = 1,65530 do
  local t = {}
  table.insert(t, 1)   -- creates an entry in the sizes table
  x[i] = t             -- keep a reference so t stays alive
end
print(gcinfo())
x = nil
collectgarbage()
print(gcinfo())
Now we get the same effect with Lua 5.1-work3, too. Subtract the first number
from the last (gcinfo() reports kilobytes, so multiply by 1024 to get bytes)
and divide by the number of iterations rounded up to the next power of two
(65536 here). I get ~28 bytes per entry; I tried this on an i386 with doubles
for lua_Number.
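Spelled out as code (bytes_per_entry is just a hypothetical helper to make
the arithmetic explicit, not something from the library):

local function bytes_per_entry(first, last, iterations)
  -- gcinfo() reports kilobytes; the hash part of the sizes table
  -- grows in powers of two, so round the iteration count up
  local slots = 1
  while slots < iterations do slots = slots * 2 end
  return (last - first) * 1024 / slots
end
-- e.g. bytes_per_entry(first_print, last_print, 65530) --> ~28 here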
So what does this mean for you:
- Try Lua 5.1-work3 and see if your problem goes away.
- Do not keep too many tables that use the sizes table alive at the same time.
- Or avoid the sizes table (i.e. table.*) altogether and manage the counts
  yourself (see the sketch below).
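The last point in code: keep the count yourself, e.g. in a field of the table
(the field name 'n' below is just a convention I picked for the sketch):

local t = { n = 0 }
for i = 1,10 do
  t.n = t.n + 1
  t[t.n] = i        -- append without going through table.insert/setn
end
print(t.n)          --> 10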
What does this mean for Lua development:
Maybe there needs to be a way to shrink tables proactively from the GC?
Yes, I'm aware that tables do shrink when their usage changes (from the
hash part to the array part or vice versa). But there is no way to get rid
of an oversized table as long as you still hold a reference to it. Swapping
the table for a new one is not always an option.
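For reference, the swap amounts to something like this generic sketch:

local function compact(t)
  -- copy the surviving entries into a fresh table; the oversized
  -- array/hash parts of the old table become garbage
  local new = {}
  for k, v in pairs(t) do new[k] = v end
  return new
end

local big = {}      -- imagine this once held (and shrank from) many entries
big = compact(big)  -- only helps if 'big' was the sole reference; anything
                    -- else holding the old table still sees the oversized one

That last caveat is exactly why it is not always an option.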
Bye,
Mike