lua-users home
lua-l archive



I need to clarify, I suppose: I did not (and do not) intend to access the thread from the coroutine.
I am only trying to decide how best to get my refID in the cleanup situation.

Imagine the code:

int iRet = lua_resume(pLuaState, 0);
if (iRet != LUA_YIELD)
{
    // The thread is dead (finished or errored); must unref it to let
    // the GC do its job. But how do I get the refID?
}

What I'm trying to avoid is having some C-side map between lua_State pointers and ref IDs.
It makes little sense to me that I can't somehow store that along with the coroutine, so that I can avoid the lookup and management of that variable-size data structure.

This is why I tried storing the refID on the coroutine's stack.
However, I was thwarted by errors in the Lua script leaving the stack in a function's scope, such that I couldn't get my value.

After your response I tried a weak-keyed table with both the key and the value being the thread; that way I could set the thread to nil and GC would do the rest.
This was also thwarted by errors, since the only way I can see to push a thread onto the main state's stack (to set it to nil in the thread table) is to call lua_pushthread on the coroutine (which is dead, hence the problem) and then move it to the main state's stack.

I don't understand why the C API doesn't support:
      lua_pushthread(L, LThread)
but I'm not out to change Lua, so this is only a curiosity.

I really hope this has been clearer; I think my C/C++ is better than my English, and my Lua for that matter.

- Chris

On Tue, Dec 1, 2009 at 5:04 AM, Jim Pryor <lists+lua@jimpryor.net> wrote:
On Mon, Nov 30, 2009 at 11:07:36PM -0500, Jim Pryor wrote:
> On Mon, Nov 30, 2009 at 03:52:37PM -0800, Chris Gagnon wrote:
> >    Since ref and the corresponding rawgeti are the fast methods for table
> >    access, I use the following code to create myself a new coroutine.
> >
> >      lua_getfield(MainState, LUA_GLOBALSINDEX, "ThreadTable");
> >      lua_State *thread = lua_newthread(MainState);
> >      int refID = luaL_ref(MainState, -2);
> >      lua_pop(MainState,1);
> >
> >    So now, when I want to let GC clean up the coroutine, I need to:
> >
> >      lua_getfield(MainState, LUA_GLOBALSINDEX, "ThreadTable");
> >      luaL_unref(MainState, -1, refID);
> >      lua_pop(MainState, 1);  /* pop ThreadTable so the stack stays balanced */
> >
> >    The remaining piece is: where do I get refID when I want to unref? Or how
> >    do I store that refID with the coroutine?
> >    Here are some of the possibilities/issues I have run into.
> >
> >      * Environment table
> >        I have multiple coroutines sharing an environment, which doesn't
> >        allow me to uniquely store this value
> >        without another table, which is undesirable from a
> >        complexity/performance standpoint.
> >      * Stack
> >        Since the new stack is the only thing completely unshared when
> >        creating a coroutine, I simply push the value onto the stack.
> >        This works like a charm until an error occurs: errors leave the
> >        stack in the layout of the function that ran into the problem.
> >        In that case I do not know how to recover the refID to properly
> >        clean up.
> >
> >    Thoughts about my specific issues?
> >    Other suggestions/approaches that i haven't mentioned?
>
> Can you control the body of the coroutines? If so, pass the refID to
> them in their opening argument list, and use it to immediately create an appropriate
> __gc method which you attach to some userdatum guaranteed to live as
> long as the coroutine does. You don't need to
>
> Does that suit your needs? If so, then your problem reduces to how to
> ensure that the userdatum lives for just the length of time you want.
>
> You could make your coroutine bodies look like this:
>
> function body(refID, ...)
>     local userdat = newproxy(true)
>     getmetatable(userdat).__gc = function(self)
>             luaL_unref(ThreadTable, refID) -- i.e., a C function you expose to Lua that does the unref
>         end
>     local results = { pcall(... rest of coroutine body ...) }
>     userdat = nil
>     return unpack(results, 1, #results)
> end
>
> There are probably more elegant solutions. (For example, one dumb thing
> here is that you've got two levels of pcall: first, every coroutine runs
> inside one; second, you're explicitly invoking another one. It'd be more
> elegant to find a solution that involves only one level of pcall per
> thread.) But this may point you in a useful direction.


I find the example the way you've described it somewhat bewildering. Why
do you think your threads need access to their own refID? Your main
thread should know when the subthreads have gone dead or stopped with an
error, and can then remove them from the ThreadTable and tell its own
data structures to stop using that refID. The threads won't get __gc'd
any earlier than that because the ThreadTable keeps them alive. Even if
there *were* some good reason to have the threads do their own cleanup, it
wouldn't be enough for them to remove themselves from the
ThreadTable---they'd also have to tell the main thread data structures
that that refID was now invalid (and might get allocated to a new thread
before they next try to use it). So as I said, it's somewhat bewildering
why you'd want your threads to be using this particular piece of local
data.

But stepping back and just addressing your problem of how to find an
extra place to stash local data without creating a new table for each
thread, I can think of two natural ways. First is to pass the data to
the threads when they're first called, as I describe above, and thereafter store it in a
local variable or upvalue for the thread.

Second is to create a single additional table (per piece of local data), use the threads themselves as keys
and the different local values per thread as the table values. This
single table can be stored in the shared environment all the threads
have access to. They can get themselves by doing coroutine.running(). If
the local data needs to be used often and you want to avoid the hit of
the lookups, then look the data up once and save it in a local variable
or upvalue.

--
Profjim
profjim@jimpryor.net