lua-users home
lua-l archive

These are the kinds of tricks that eLua and NodeMCU employ to reduce RAM usage (where the "one spot" is execute-in-place ROM rather than RAM), but they are not possible in stock Lua.

There are a few kinds of static data it would be nice to be able to tell stock Lua it doesn't need to take a RAM copy of. C string literals are another example, although caveat emptor would have to apply: even static data can stop being accessible if the code segment is unloaded.

I have a patch for lua_pushliteral() somewhere which did that, but I think it relied on NodeMCU-specific tricks. The NodeMCU changes that avoid copying compiled code to RAM are unfortunately quite difficult to extract, as they are tied up in the LFS and ROTables implementation.

If you really wanted to start hacking, you'd first need to define an alternative API to lua_load(), because lua_load() doesn't assume the data being loaded is contiguous in memory. Then you'd look at loadCode() in lundump.c and, in the cases where you can be certain the existing code data will outlast the lua_State, simply set f->code to be a pointer into the existing data, I think. But given how the lua_Reader mechanism works, you'd probably end up reimplementing pretty much the entire load, because everything about the loader is designed to be agnostic about where the data comes from (which in other circumstances is a really nice property to have).

To me, Lua feels pretty much "done" as a language, and the current version has everything I need. One of the few places I could see value in a hypothetical 5.5 would be some read-only memory optimisations along these lines, so that projects like NodeMCU wouldn't have to patch the codebase quite so aggressively. (It's still far better than, say, MicroPython, which has to provide a ground-up rewritten Python implementation to run on embedded hardware.)

Cheers,

Tom

> On 19 Oct 2023, at 02:36, Keith Cancel <admin@keith.pro> wrote:
> 
> So trying to do something like this would interfere with the GC? But
> yeah, I just want to store the bytecode of the code in one spot and
> not have other copies all over the place in memory when I have
> multiple VMs.
> 
> On Wed, Oct 18, 2023 at 1:43 PM Sean Conner <sean@conman.org> wrote:
>> 
>> It was thus said that the Great Iurii Belobeev once stated:
>>> On Wed, 18 Oct 2023 at 21:36, Keith Cancel <admin@keith.pro> wrote:
>>> 
>>>> I was looking at the source code trying to see if there was a way
>>>> to achieve something similar to lua_load(), but at compile time.
>>>> Basically, I want a bunch of shared code to occupy memory only
>>>> once instead of creating copies for each new Lua VM instance. I
>>>> basically just want that code in ROM, executable by each new
>>>> instance without having to copy it over first.
>>> 
>>> You can precompile Lua text source by calling lua_dump and safely
>>> put it into shared memory. It produces a binary chunk which can then
>>> be converted to a Lua function by calling lua_load on it. You cannot
>>> reuse the result of lua_load, since the global table _G is pushed as
>>> an upvalue to the resulting function, i.e. it is not reusable.
>> 
>>  I don't think that will accomplish what Keith asked for.  I think Keith
>> is asking, can I, once Lua code is compiled, share the resulting VM code
>> among different Lua states in the same process?  Yes, you can use lua_dump()
>> and store the code into shared memory (either at run time, or at compile
>> time [1]), but when calling lua_load() memory is allocated to store the code
>> into the currently running VM.
>> 
>>  An approach would be to get pointers to the underlying structures in the
>> initial VM, and then somehow make them available to another VM, but you
>> would have to ensure the structures are immune to garbage collection, else
>> bad things could happen.  That might be good for a hack, but not for
>> production code.
>> 
>>  -spc
>> 
>> [1]     As I outlined here: https://boston.conman.org/2023/03/22.1