Thanks for the suggestions.
If malloc (realloc) is doing as you suggest, David, and I have no reason to suspect it isn't, then the continual growth in memory use still confuses me: why, when it is under the control of the gc, does the Lua program continue to request memory that it doesn't need?
I have tried Roberto's approach and kept the value of gcinfo around the loop to compare the memory usage of the host system with that of the embedded one. The result was interesting and still leaves me with a puzzle. First, the code I am running on both the desktop machine and the embedded target:
<code>
-- weak-keyed table: the {} keys are only weakly referenced,
-- so the collector is free to reclaim them
bigtab = setmetatable({}, {__mode="k"})
mem = gcinfo()   -- memory in use, in KB (Lua 5.1)
for i = 1, math.huge do
  bigtab[{}] = i
  local now = gcinfo()
  if now > mem then   -- report each new high-water mark
    mem = now
    print("New mem : " .. mem .. "(" .. i .. ")")
  end
end
</code>
On the embedded system the memory in use seems to settle at around 250k somewhere around 5000 iterations, yet the system still runs out of memory after a few million iterations. I then tinkered with the garbage collector pause and stepmul parameters on the embedded system and found that a pause of 150% and a stepmul of around 400% keeps the memory usage below 100K effectively indefinitely.
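(For reference, the equivalent tuning from Lua code, assuming stock Lua 5.1 where both arguments are percentages:)
<code>
-- assuming stock Lua 5.1: arguments are percentages
collectgarbage("setpause", 150)     -- start a new cycle at 150% of memory in use after last collection
collectgarbage("setstepmul", 400)   -- collector works 4x faster relative to allocation
</code>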
This then posed the question: why? The embedded system is slower than the desktop one by orders of magnitude and has no virtual memory, but if it is running the same code, why should the behaviour differ? I am assuming that the gc runs in a deterministic manner; for instance, for the same code it will be triggered at the same memory thresholds, do the same amount of work per k, etc. My understanding is that it will take longer to run X iterations, but the execution path and memory usage would be the same.
Obviously this isn't the case, but why? Any references would be appreciated.
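(One way to test that assumption, a sketch only, assuming stock Lua 5.1 and the same bigtab loop as above: stop the automatic collector and force full cycles at fixed iteration counts, so both machines do identical collection work at identical points:)
<code>
collectgarbage("stop")            -- disable the automatic incremental collector
for i = 1, math.huge do
  bigtab[{}] = i
  if i % 1000 == 0 then
    collectgarbage("collect")     -- full cycle at a deterministic point
    collectgarbage("stop")        -- 5.1 re-enables the collector after a full cycle
  end
end
</code>
If the memory profile still differs between the two machines with this version, the cause lies outside the collector's scheduling (e.g. in the allocator).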
--
mailto:dean.sellers@rinstrum.com