There are also memory-mapped files. Access control and synchronization across processes are provided by the hosting file system, so each process or thread can get a consistent view of the data. But you have to use the file system's locking mechanisms for atomic operations.

Mmap'ed memory is very fast (much faster than conventional file I/O, since reads and writes are implicitly buffered for at least the size of your mapped file segment).
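
As a minimal sketch (assuming a POSIX system; the file name "shared.dat" and the 4 KB size are just example values), two processes that map the same file with MAP_SHARED see the same bytes:

/* Minimal sketch (POSIX assumed): two processes open the same file,
 * map it, and get a consistent view of the same bytes. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE 4096

int main(void)
{
    /* "shared.dat" is just an example path; any file both processes can open works. */
    int fd = open("shared.dat", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, MAP_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writes through the mapping are visible to every other process
     * that has mapped the same file with MAP_SHARED. */
    strcpy(p, "hello from one process");

    munmap(p, MAP_SIZE);
    close(fd);
    return 0;
}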
Caveats:
* if you have to work on very large data files, moving the memory-mapped window to another location of the file destroys your buffer and consumes a lot of I/O (or a lot of the file system's default shared cache) just to refill it, and it requires reserving large amounts of virtual memory in the process. If many threads do this in the same process, the process's memory usage may explode.
* exclusive file locking (with the file system's calls/API; see the fcntl sketch after this list) does not work across threads, unless the OS provides that isolation level with thread-level calls/API (and Lua states are not necessarily mapped to a native thread); inside the same Lua application you will need other locking mechanisms from the Lua machine itself (across its "light" threads). Data serialization is still the way to go to avoid deadlock situations where atomic operations take locks in random order.
* the last alternative is to use an external database (or a memcached store, for its speed). You just need a connector library to connect to the "remote" database or store.
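
For the file-locking route, here is a minimal sketch (POSIX assumed; locking the whole file and the file name are arbitrary choices) that wraps an update in an exclusive fcntl() advisory lock. Note that this serializes processes, not threads or Lua states inside the same process:

/* Minimal sketch (POSIX assumed): take an exclusive fcntl() lock on the
 * whole file around an update so concurrent processes do not interleave.
 * The lock is advisory and held per-process: it does not isolate threads
 * (or Lua states) inside the same process. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int lock_whole_file(int fd, short type)   /* F_WRLCK or F_UNLCK */
{
    struct flock fl = {0};
    fl.l_type = type;
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;                        /* 0 = lock the whole file */
    return fcntl(fd, F_SETLKW, &fl);     /* F_SETLKW blocks until acquired */
}

int main(void)
{
    int fd = open("shared.dat", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (lock_whole_file(fd, F_WRLCK) < 0) { perror("lock"); return 1; }
    /* ... read/modify/write the shared (mapped) data atomically here ... */
    if (lock_whole_file(fd, F_UNLCK) < 0) { perror("unlock"); return 1; }

    close(fd);
    return 0;
}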

And be aware of possible privacy or security breaches through caches: implement a cache eviction policy that uses segregated pools instead of simple LRU-based eviction. This is true for all sorts of caches, including DNS client caches and web caches in browsers or in routers. Note also that file system caches are NOT secure by default, as they rarely provide the eviction policy with segregated pools you'd want; not doing this exposes your online services to data leaks, without the attacker needing to know any secrets in advance.
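
To illustrate the segregated-pool idea (a purely hypothetical structure, not any existing library's API): each domain gets its own fixed-capacity pool with its own LRU clock, so activity in one domain can never evict another domain's entries, unlike a single global LRU.

/* Conceptual sketch only: a cache split into per-domain pools with a
 * fixed capacity each, so eviction pressure stays inside one domain. */
#include <stdio.h>

#define DOMAINS        4      /* e.g. one pool per tenant/site/user class */
#define SLOTS_PER_POOL 8

struct entry {
    char key[32];
    char value[64];
    unsigned long last_used;  /* per-pool LRU clock */
    int used;
};

struct pool {
    struct entry slots[SLOTS_PER_POOL];
    unsigned long clock;
};

static struct pool pools[DOMAINS];

/* Insert into the pool of the given domain only; LRU eviction is
 * confined to that pool. */
static void cache_put(int domain, const char *key, const char *value)
{
    struct pool *p = &pools[domain];
    struct entry *victim = &p->slots[0];
    for (int i = 0; i < SLOTS_PER_POOL; i++) {
        struct entry *e = &p->slots[i];
        if (!e->used) { victim = e; break; }                    /* free slot */
        if (e->last_used < victim->last_used) victim = e;       /* least recent */
    }
    snprintf(victim->key, sizeof victim->key, "%s", key);
    snprintf(victim->value, sizeof victim->value, "%s", value);
    victim->last_used = ++p->clock;
    victim->used = 1;
}

int main(void)
{
    cache_put(0, "alice:session", "token-A");   /* domain 0 cannot evict domain 1 */
    cache_put(1, "bob:session", "token-B");
    return 0;
}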

Unfortunately, all modern computing devices, OSes, drivers, application software, and most websites you visit use many levels of caches which are not secured at all (and are not under the control of the clients using them). Most of them use basic LRU eviction policies; there may be some segregation into multiple pools, but there is no way to segregate them into application-controlled domains, as the subdivision is most often arbitrary, only optimistic, and tuned for the best average global performance rather than for security. These breaches are massively harnessed by advertisers (to abuse our privacy) and by malicious hackers to steal secrets and then money, or to gain access to sites even when they are protected by the best firewalls, the best encryption/authentication/quota mechanisms, or other isolation mechanisms of the OS (threads, processes, process groups, containers, virtual machines...), possibly implemented in hardware (CPU/GPU/bus controllers, SSD/HDD), all of which have caching mechanisms with eviction policies that are too basic, as they are clearly optimized optimistically, only for speed and global average performance.

For now the best mitigation is multi-factor authentication, but it's not enough, as attacks also occur between authorized users of the same system who are insufficiently sandboxed.

Caches are the worst nightmare in all modern architectures: we depend heavily on them for modern performance, so it is very hard to isolate them all and to define and implement the correct eviction policies without sacrificing a lot of performance or adding a lot of "idle" redundancy to the system being secured. Even if you do that, you'll pay a huge price in energy, and power-saving strategies will ruin your efforts because they reintroduce variable latency for conditional on-demand wake-ups, which are also a form of cache (except that there's little or no segregation at all: the system is either sleeping or awake and offers no application-controlled separation of domains)! And even today, we continue to train people with the basic LRU mechanism and never teach them to be constantly aware of the risks of ALL caches.


On Sat, Feb 27, 2021 at 2:22 PM Viacheslav Usov <via.usov@gmail.com> wrote:
On Sat, Feb 27, 2021 at 5:29 AM caco <cacophonitrix@protonmail.com> wrote:
>
> From C I have master process in parallel with number of agent processes each binding a luaL_State. I want the agents Lua scripts to communicate through Lua supervised (i.e. gc'd) data. How can I do this?

Since you said "process", not "thread", it should be said that in a
modern OS such as (a recent version of) Linux, MacOS and Windows,
distinct processes have isolated memory spaces and cannot touch memory
in another process, with one exception.

Without the exception, your only option is to serialize and
deserialize data as byte chunks and use some IPC to transport the
chunks between processes.

The exception is shared memory. With it, you get all the complications
of multiple threads and then some more. Depending on how badly you
want your thing, this might be something to consider.

Cheers,
V.