> In my MMO game server, I create about 10K Lua VMs in one process. Their average size is about 10 MB (so altogether they need more than 100 GB; I run them on a server with 128 GB of RAM). They work together like Erlang's processes.

> We have a large tree-based document read from a large XML or JSON file: about 10 MB and more than 100K records. It is immutable and shared among all of the Lua VMs.

> We can't load this document into each Lua VM because of its size (and the parsing time is also a problem), so I must store the document in a C object and share the pointer.

> My problem is how to traverse this C object as Lua tables efficiently.


Ah good, now we know the real problem. It looks like you are better off creating as much of the indexing/lookup information in C as possible; since this index will be shared and immutable, like the original dataset, it can be created once and shared by all the Lua VMs. Your problem is then how to project this information into Lua.

Since the dataset is shared, presumably you wish to maintain it in memory until the last Lua VM releases its last reference. This can easily be done with full userdata and reference counting, as I noted in my earlier post. If, OTOH, this shared document survives until the entire process terminates (and hence outlives all the Lua VMs), then you don't need full userdata and GC at all; the Lua VMs can assume that the dataset is valid, since its lifetime exceeds that of any given VM.
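For the refcounted case, a minimal sketch of such a handle (the names Doc, doc_acquire, and doc_release are hypothetical; acquire/release must be atomic if your VMs run on separate OS threads):

    #include <lua.h>
    #include <lauxlib.h>

    typedef struct Doc Doc;       /* the shared, immutable document */
    Doc  *doc_acquire(Doc *d);    /* bump refcount, return d        */
    void  doc_release(Doc *d);    /* drop refcount; free at zero    */

    /* __gc: the collecting VM drops its reference to the document. */
    static int dochandle_gc(lua_State *L) {
        Doc **ud = (Doc **)luaL_checkudata(L, 1, "DocHandle");
        if (*ud) { doc_release(*ud); *ud = NULL; }
        return 0;
    }

    /* Give one VM a handle to the shared document. */
    void push_doc_handle(lua_State *L, Doc *d) {
        Doc **ud = (Doc **)lua_newuserdata(L, sizeof(Doc *));
        *ud = doc_acquire(d);
        if (luaL_newmetatable(L, "DocHandle")) {  /* first use in this VM */
            lua_pushcfunction(L, dochandle_gc);
            lua_setfield(L, -2, "__gc");
        }
        lua_setmetatable(L, -2);
    }

In the process-lifetime case you can skip all of this and simply hand each VM the raw pointer.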

What you are left with is providing a way to access individual entries within the document: that is, exposing to Lua the index/lookup information associated with the shared document. You can do this either with a custom API, or indirectly via a metatable (which is really just an indirect way of using a custom API).
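A minimal sketch of the metatable route (doc_node_child and doc_node_scalar are hypothetical stand-ins for whatever index structure you actually build in C; string keys only, for brevity):

    #include <lua.h>
    #include <lauxlib.h>

    typedef struct Node Node;
    /* returns NULL if no such child: */
    const Node *doc_node_child(const Node *n, const char *key);
    /* pushes the value and returns 1 if n is a leaf, else returns 0: */
    int doc_node_scalar(lua_State *L, const Node *n);

    /* Wrap an interior node; no __gc needed when the document
       outlives the VM (otherwise use the refcounted handle above). */
    static void push_node(lua_State *L, const Node *n) {
        const Node **ud = (const Node **)lua_newuserdata(L, sizeof(*ud));
        *ud = n;
        luaL_getmetatable(L, "DocNode");
        lua_setmetatable(L, -2);
    }

    /* __index: every field access on a node routes through here. */
    static int docnode_index(lua_State *L) {
        const Node **ud = (const Node **)luaL_checkudata(L, 1, "DocNode");
        const char *key = luaL_checkstring(L, 2);
        const Node *child = doc_node_child(*ud, key);
        if (child == NULL) { lua_pushnil(L); return 1; }
        if (doc_node_scalar(L, child)) return 1;  /* leaf value */
        push_node(L, child);                      /* sub-tree   */
        return 1;
    }

    /* Register once per VM, at startup; push the root with push_node. */
    void open_docnode(lua_State *L) {
        luaL_newmetatable(L, "DocNode");
        lua_pushcfunction(L, docnode_index);
        lua_setfield(L, -2, "__index");
        lua_pop(L, 1);
    }

From Lua, the document then reads like an ordinary nested table (doc.monsters.goblin, say), with each step doing one C-side lookup instead of materializing per-VM tables.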

As to the structure of the index/lookup data, that very much depends on your application and the pattern of lookups. Typically you would want to pre-compute information for the most common lookups (to optimize for speed), but allow simple searches for infrequent lookups (to optimize for size).
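To make that concrete, one plausible shape for the shared index (a hypothetical record layout; the id hash is built once at parse time and shared read-only, like everything else):

    #include <stdint.h>
    #include <string.h>

    typedef struct Record { uint32_t id; const char *name; /* ... */ } Record;

    typedef struct Doc {
        const Record  *records;   /* immutable array, 100K+ entries */
        size_t         nrecords;
        const Record **by_id;     /* hash on id, built at load time */
        size_t         nbuckets;  /* power of two                   */
    } Doc;

    /* Hot path, pre-computed: O(1) probe of the shared hash table. */
    const Record *doc_find_by_id(const Doc *d, uint32_t id) {
        size_t i = (size_t)(id * 2654435761u) & (d->nbuckets - 1);
        while (d->by_id[i] != NULL) {
            if (d->by_id[i]->id == id) return d->by_id[i];
            i = (i + 1) & (d->nbuckets - 1);
        }
        return NULL;
    }

    /* Cold path, no extra memory: linear scan for rare queries. */
    const Record *doc_find_by_name(const Doc *d, const char *name) {
        for (size_t i = 0; i < d->nrecords; i++)
            if (strcmp(d->records[i].name, name) == 0)
                return &d->records[i];
        return NULL;
    }

Here doc_find_by_id covers the hot path with pre-computed memory, while anything you rarely query falls through to the scan rather than costing another 100K-entry index.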

—Tim