- Subject: Re: Thread Safety in Lua, part 1
- From: Mike Pall <mikelu-0508@...>
- Date: Sun, 21 Aug 2005 15:55:02 +0200
Rici Lake wrote:
> In theory, you can define lua_lock and lua_unlock and then execute
> different Lua threads (from the same context) in different (OS)
> threads. That's what Diego's luathread module does. But there are still
> a number of issues.
Apart from the unsolved locking problems you mention, there
is one bigger issue with this threading model: the overhead
of lua_lock()/lua_unlock() may be substantial.
This is because it's used for almost every C API transition point.
Just starting up Lua will cause more than 1100 locks+unlocks.
And that's only for defining the Lua library routines ...
I once tried to resolve this by adopting the Python GIL model:
Keep the lock during C API transitions and only explicitly
release the lock for blocking I/O operations. Python releases
it at regular intervals, too (via a bytecode counter). But this makes
it a lot harder to use because you cannot rely on locked
semantics even for simple expressions (i.e. you need explicit
shared data locks/unlocks).
Ok, so this greatly reduced the number of lock transitions.
But every library that's doing some kind of blocking operation
needs to be aware of that. This is relatively simple to do with
the Lua core libraries, but may get messy with add-on libraries.
And there are a few blocking points left that you really can't
control easily (like page faults with mmap).
Anyway, I gave up on this model and used either the 'one Lua
universe per native thread' model or a pure coroutine model.
The latter can be spiced up with native threads, as long as
these are used on the C side only: e.g. run a blocking C library
call that cannot be converted to a non-blocking one in a native
thread, yield the current coroutine and make a scheduler resume
it when the native thread signals that the library call finished.
This solves many of the problems with the other approaches.
Most important for me was to avoid 'mutex hell', i.e. no more
shared data locks/unlocks.