The scenario is this - I am creating an AST generator for Lua; the AST
is captured in a userdata. The AST is generated every time some code
is parsed - so you can imagine that this can be quite frequent. Say in
a test program, many code chunks are parsed resulting in several
userdata objects of different sizes being created. I noticed that the
GC does not collect the objects fast enough. In this scenario, how do
you suggest I should use the GC step parameter? Keep setting it to
different values based on userdata sizes? Or keep track of total
userdata memory and set this accordingly? Obviously the rate of
allocations varies over time, and sizes vary as well.
I don't think making the GC more aggressive by annotating the userdata with its actual memory size would be helpful in this scenario.
My solution is to put all the ASTs in one C container. Each userdata only stores an ID that is associated with its AST, and references the code (string or function) through a weak table.
Then we can remove some older ASTs (by LRU) every time new code is parsed. If we need an AST that has already been removed, we just rebuild it from the code again.
The C container of ASTs is only a cache, so we can also clear it when we need more memory.