Kaj:

On Wed, May 29, 2019 at 8:34 AM Kaj Eijlers <bizziboi@gmail.com> wrote:
> I am confused what the gain is of running them on other threads if you have to lock the main object (the Lua state). Wouldn't you end up with 99% of critical path inside mutexes and thus gain nothing from threading it since each task will be blocking on the next? (and if the answer is 'no, because the tasks do more than the lua access - wouldn't a message queue to the main thread suffice?).

I do not want the other threads, but the engine I'm extending has
them. It has messages, which can be queued ( fire & forget, ignore
the reply, the sending thread goes on ) or dispatched ( the sending
thread blocks waiting for the reply ). Messages are handled by
different thread pools, typically because queued messages benefit from
having their own pool, but sometimes a message needs to be sent ( and
waited on ) from one thread to one of these pools, and the sending
thread has to block until the reply arrives.
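To illustrate, the two modes look roughly like this ( engine_queue,
engine_dispatch and Message are placeholder names for this sketch,
not the real engine API ):

#include <pthread.h>

typedef struct Message {
    int              what;
    void            *payload;
    /* reply plumbing, used only by dispatched sends */
    pthread_mutex_t  m;
    pthread_cond_t   done;
    int              replied;
    void            *reply;
} Message;

/* queued: enqueue on the configured thread pool and return at once */
extern void engine_queue(Message *msg);

/* dispatched: enqueue, then block the sender until a pool thread replies */
static void *engine_dispatch(Message *msg)
{
    engine_queue(msg);
    pthread_mutex_lock(&msg->m);
    while (!msg->replied)
        pthread_cond_wait(&msg->done, &msg->m);
    pthread_mutex_unlock(&msg->m);
    return msg->reply;
}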

So, my problem is: thread A enters lua state S1, does some lua
things and calls a C function which calls the C core, which in turn
sends-and-waits a message M1. M1 is handled by a queue served by
( amongst others ) thread B, which, to handle it, due to the
configuration, needs to enter state S1 ( because some API stuff is
hooked to be served by lua code ).

If, instead of using lua, I use C++, the thing is easy: I just ensure
the data structures are coherent and unlocked before letting A call
the core.

In lua I can do several things:

I can rearchitect all the code so each message is handled by a
different state and use a thread-safe data structure for all global
data. This leads to complex lua code; I've made my estimates and it's
easier to just use C++ for the extensions.

I can use a dedicated thread with a dedicated message queue for S1, so
A does not enter S1 but sends it a message, and S1 does not call the
api directly but sends a message and waits for a posted reply. This is
complex; same comment as above.

What I want to do is try a simple thing: lock the state on A before
first entering and, in the C function which calls the api, grab the
parameters, lua_settop(0), unlock, call the api, relock, push the
results and return; then do some more lua things, exit, unlock. If B
enters, it finds the state unlocked and can lock it, and the core,
which is written in standard C, should not be able to tell it is being
called from a different thread ( it should look the same as if thread
A had called into it again, which is what would have happened if the
message M1 were not configured to be handled by a dedicated thread
pool ). I leave the state in a coherent view, the internal data
structures are correctly set up, and it works nicely when the called
API does not need to be served by another thread pool.
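Roughly, the C function I have in mind would look like this
( state_mutex and core_api_call are placeholder names for this sketch,
not the real engine API; the real lock would live next to S1 ):

#include <lua.h>
#include <lauxlib.h>
#include <pthread.h>

/* one lock per state; a single global one here just for the sketch */
static pthread_mutex_t state_mutex = PTHREAD_MUTEX_INITIALIZER;

/* hypothetical core entry point; it may send-and-wait M1 to another
   thread pool, which may in turn re-enter S1 */
extern int core_api_call(int opcode, int arg);

static int l_call_core(lua_State *L)
{
    /* copy the parameters into C values while the state lock is still
       held; nothing on the stack may be needed after the settop */
    int opcode = (int)luaL_checkinteger(L, 1);
    int arg    = (int)luaL_checkinteger(L, 2);

    lua_settop(L, 0);                    /* leave S1 in a coherent view */

    pthread_mutex_unlock(&state_mutex);  /* B may now enter and lock S1 */
    int rc = core_api_call(opcode, arg); /* may block on another pool   */
    pthread_mutex_lock(&state_mutex);    /* relock before touching L    */

    lua_pushinteger(L, rc);              /* push results */
    return 1;
}

Thread A would take state_mutex before its first call into S1 and
release it after that call returns.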

From my examination of the sources, lua is supposed to be stable if I
just define lua_lock/lua_unlock, but this is a mess, as I do not want
threads concurrently entering at any time, and I would need another
high-level lock anyway. But I've seen that lua calls C functions doing
unlock(), ($f)(..), lock(), so I think my approach is "safer" ( i.e.,
every code chunk which is locked by the lock/unlock approach is also
locked by mine ).
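
For comparison, the lua_lock route would mean rebuilding lua with
something like this defined ( a single global mutex here just for the
sketch; a real build would hang the mutex off the global state ):

/* in luaconf.h ( or via compiler flags ) when rebuilding lua */
#include <pthread.h>

extern pthread_mutex_t lua_core_mutex;   /* defined once in the host */

#define lua_lock(L)    pthread_mutex_lock(&lua_core_mutex)
#define lua_unlock(L)  pthread_mutex_unlock(&lua_core_mutex)

and that still would not remove the need for the higher level lock
described above.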

> If not, can critical data be stored in a blackboard-like structure or transactional memory?

Yes, it can, but the added complexity is not worth it.

Francisco Olarte.