2009/8/6 M Joonas Pihlaja <jpihlaja@cc.helsinki.fi>:
> On Thu, 6 Aug 2009, Jerome Vuarand wrote:
>> The shared data is
>> stored in a shared Lua state. The lua_lock mechanism ensures that my
>> two (or more) threads cannot corrupt the state while accessing it
>> concurrently. However some manipulations of the shared state imply
>> calling several functions of the Lua API, and since between each call
>> the state is unlocked another thread could have messed with the stack
>> state.
>
> Yow.. this sounds dangerous!  Usually I'd advocate against using the
> same lua_State directly from multiple threads even if the VM has a
> sane lua_lock/unlock implementation.  I think the intended mechanism
> for shared access is for threads to access the same global state via
> related lua_States created by lua_newthread().  This approach gets
> around the problem of The Stack State being messed up by separate
> threads by not having a single stack state at all.

I'll try that. But that doesn't affect the potential usefulness of the
proposed patch.
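
For reference, the approach in a minimal sketch (helper names are
mine, assuming the Lua 5.1 API; the registry reference keeps each
thread from being collected):

#include <lua.h>
#include <lauxlib.h>

typedef struct {
  lua_State *L;   /* this OS thread's private stack */
  int ref;        /* registry anchor so the GC keeps the thread alive */
} PerThread;

/* call under the shared state's lock, once per OS thread */
static PerThread perthread_open (lua_State *shared) {
  PerThread t;
  t.L = lua_newthread(shared);                  /* pushes the new thread */
  t.ref = luaL_ref(shared, LUA_REGISTRYINDEX);  /* pops and anchors it */
  return t;
}

static void perthread_close (lua_State *shared, PerThread *t) {
  luaL_unref(shared, LUA_REGISTRYINDEX, t->ref);  /* GC may reclaim it */
  t->L = NULL;
}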

>> For that reason I call lua_lock in my library, and made sure recursive
>> calls to lua_lock were ok.
>
> OK, so I guess this means concretely that your lua_lock is twiddling a
> recursive mutex, right?

Kind of; I'm using a Windows CRITICAL_SECTION. But that's a detail
that doesn't matter for the need described in my original post.
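
Roughly like this, for what it's worth (a sketch with illustrative
names; a CRITICAL_SECTION can be re-entered by the thread that owns
it, which is what makes the nested lua_lock calls safe):

#include <windows.h>
#include <lua.h>

static CRITICAL_SECTION state_cs;   /* one lock for the shared state */

void state_lock_init (void)      { InitializeCriticalSection(&state_cs); }
void state_lock (lua_State *L)   { (void)L; EnterCriticalSection(&state_cs); }
void state_unlock (lua_State *L) { (void)L; LeaveCriticalSection(&state_cs); }

/* wired into the core through luaconf.h, along the lines of:
     #define luai_lock(L)    state_lock(L)
     #define luai_unlock(L)  state_unlock(L)  */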

>> Additionally, I've added two functions called lua_lock and
>> lua_unlock, declared in lua.h and implemented in lstate.c, which
>> simply call luai_lock and luai_unlock respectively.
>
> One problem I see with exposing lua_lock()/unlock in the official API
> is that the current implementation is carefully balanced to work with
> a non-recursive mutex on the assumption that only the Lua VM can call
> lua_lock()/unlock().  In particular it assumes that it can pass
> ownership of the lock from one level of lua API calls to another level
> across longjmps regardless of how much user C code it's jumping over.
> Exposing lua_lock/unlock() via the Lua API might cause problems on
> longjmps across sandwiched user C code which is *also* holding the
> lock because the user code is never given a chance to release its
> lock.  In the worst case it could lead to leaked lock references and
> deadlock depending on how exactly it's implemented and what the lock
> ownership policy is.
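
To make that hazard concrete, the failure mode in miniature (a
sketch, assuming the lock is exposed as proposed and is
non-recursive):

static int my_cfunction (lua_State *L) {
  lua_lock(L);                   /* user hold, on top of the VM's own */
  lua_getfield(L, 1, "field");   /* may raise a Lua error and longjmp ... */
  lua_unlock(L);                 /* ... in which case this line never runs
                                    and the hold is leaked */
  return 1;
}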

Of course the user would have to balance the locks and unlocks
properly, but that wouldn't be the only invariant to maintain when
using the Lua C API. And since the actual content of lua_lock and
lua_unlock is defined by the user anyway, they can do whatever they
want with it.
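
For concreteness, the patch amounts to roughly this (my
reconstruction rather than the actual diff; the parentheses around
the names keep the core's lua_lock macro from expanding at the
definition site):

/* lua.h */
LUA_API void (lua_lock) (lua_State *L);
LUA_API void (lua_unlock) (lua_State *L);

/* lstate.c */
LUA_API void (lua_lock) (lua_State *L) {
  luai_lock(L);
}

LUA_API void (lua_unlock) (lua_State *L) {
  luai_unlock(L);
}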

> One approach to allowing recursive holds on the same mutex with
> lua_lock()/unlock() might involve the user establishing a stack of
> handlers to call before longjmping so that they can do scope-exit
> cleanup such as releasing their lock holds.  A simpler alternative
> might be for the VM to directly keep track of the number of recursive
> holds on the same lock and use the appropriate number of lua_unlocks()
> to keep the lock balanced when catching a longjmp.
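
The second alternative would look roughly like this (an illustrative
sketch, reusing the CRITICAL_SECTION from above):

typedef struct {
  CRITICAL_SECTION cs;   /* recursive for the owning thread */
  int holds;             /* acquisitions by the current owner */
} CountedLock;

static void counted_lock (CountedLock *l) {
  EnterCriticalSection(&l->cs);
  l->holds++;            /* safe: only the owner touches the count */
}

static void counted_unlock (CountedLock *l) {
  l->holds--;
  LeaveCriticalSection(&l->cs);
}

/* on catching a longjmp, the VM would rebalance to the hold count
   recorded when the protected call began */
static void counted_unwind (CountedLock *l, int entry_holds) {
  while (l->holds > entry_holds)
    counted_unlock(l);
}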

This looks like a big and complicated change. I don't need a coffee
machine, only to slightly improve the Lua C API. I can already
implement lua_lock as a function, leave the Lua source code mostly
unpatched, and still call lua_lock from my user code. My proposed
patch just reduces the overhead of locking in situations where it
applies, without adding or removing any other feature of the existing
Lua code base.
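
To make the gain concrete, this is the kind of compound operation the
exposed lock enables (a sketch against the Lua 5.1 API; the "counter"
global is just an example):

static void bump_shared_counter (lua_State *L) {
  lua_lock(L);   /* held across the whole read-modify-write sequence */
  lua_getfield(L, LUA_GLOBALSINDEX, "counter");
  lua_pushinteger(L, lua_tointeger(L, -1) + 1);
  lua_setfield(L, LUA_GLOBALSINDEX, "counter");
  lua_pop(L, 1);   /* drop the old value */
  lua_unlock(L);
}

Each inner API call still takes and releases the lock on its own,
which the recursive mutex permits; the point is that no other thread
can interleave between the calls.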