lua-l archive


Looking at the implementation of table.concat, what seems to happen is that Lua builds the result in a buffer that presumably grows by powers of two, hence the roughly twofold memory usage. I can see a minor argument for this: since coercion requires table.concat to handle numbers, to "act wisely" it would have to stringify each number, count the bytes, and then stringify it again when writing to the buffer. With a dynamic buffer it only has to stringify each number once.
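A rough pure-Lua model of that growth policy (an assumption about the internals, not a quote of them; the real buffer lives in C) shows where the factor of two comes from: with power-of-two doubling, the capacity can approach twice the number of bytes the content actually needs.

```lua
-- Hypothetical model of a power-of-two growing buffer; only the
-- capacity accounting is modeled, not the actual byte copying.
local function model_concat(parts)
  local cap, len = 32, 0        -- assumed initial capacity
  for _, v in ipairs(parts) do
    local piece = tostring(v)   -- coercion: stringify once, into the buffer
    len = len + #piece
    while cap < len do cap = cap * 2 end  -- doubling growth step
  end
  return len, cap
end

local len, cap = model_concat{ ("a"):rep(3 * 2^20) }  -- ~3 MB payload
print(len, cap, cap / len)  -- capacity held vs. bytes actually needed
```

For a content length just above a power of two, the held capacity is nearly double the content size, which matches the observed overhead.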

On 18.07.22 13:26, Egor Skriptunoff wrote:

I'm trying to concatenate huge strings using table.concat()

local n = 7
local a = { ("a"):rep(2^30) }
for j = 2, n do
   a[j] = a[1]
end
local b = table.concat(a)
print("GBytes used:", #b/2^30, collectgarbage("count")/2^20)

I expect table.concat to act wisely:
it should calculate the result string length, allocate its memory and fill it with the result content.
So, for concatenating n copies of a 1 GB string, only (n+1) GBytes of memory should be needed:
1 GByte for the source string and n GBytes for the result string.

Contrary to my expectation, on a 16 GB RAM Linux machine: n=7 works OK, but n=8 starts eating swap space.
It looks like table.concat temporarily uses twice as much memory as is actually needed for a clever concatenation.

Unlike table.concat, the usual Lua concatenation operator does not have this problem.  It may be used as a workaround, but it is limited by the number of VM registers.
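The workaround looks like the following (scaled down to megabytes; the choice of eight operands is just an illustration, the actual per-expression limit is a VM detail). As far as I can tell, a single `..` expression with several operands is evaluated as one concatenation that sums the operand lengths up front and builds the result once, so the peak is roughly source plus result, with no intermediate growing buffer:

```lua
-- The usual concatenation operator, written out explicitly.
-- All operands of one `..` chain are concatenated in a single step,
-- so no power-of-two buffer is involved.
local part = ("a"):rep(2^20)   -- 1 MB (scaled down from the 1 GB repro)
local b = part..part..part..part..part..part..part..part
assert(#b == 8 * 2^20)
```

The obvious downside, as noted above, is that the number of operands must be written out literally, which is capped by the VM register limit per expression.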