Jerome Vuarand wrote:

As you both have said, it would on the surface appear as though I had two threads accessing one lua_State, or some other C/C++ corruption of Lua. However, I've put in a huge amount of checking code to prove that isn't the case (just doubting myself!). Everything appears correct and coherent.

2009/12/1 Javier Guerra <javier@guerrag.com>:
On Tue, Dec 1, 2009 at 10:48 AM, Matt 'Matic' (Lua) <lua@photon.me.uk> wrote:

My view is that the Lua VM is making assumptions about the L->base value. Generally, L->base doesn't change, so the VM keeps a local copy to avoid the indirection and speed up the LVM. In the opcodes where the VM knows that L->base could change, or definitely will, the call is wrapped in the "Protect" macro, which reassigns the local copy of base after completion.

However, it appears that there are six (IIRC) opcodes that call dojump outside the context of "Protect" and assume that L->base is not going to change. With one Lua "universe" per OS thread, each with its own lua_State, that assumption holds. Once you add OS-level threading on top of a single Lua "universe" - even if you use lua_newthread and strictly keep each lua_State in one OS thread - it is no longer valid.

Consequently, I have changed the "dojump" macro in lvm.c to:

  #define dojump(L,pc,i)  { (pc) += (i); luai_threadyield(L); base = L->base; }

I know that some "dojump" calls are also wrapped inside "Protect", so the "base = L->base" is duplicated in those cases; my optimising compiler removes the redundancy.

Guess what - the problem has gone away: Lua no longer fails its assertions, and my Lua code isn't running off the rails!

Javier - I reckon the extra lock/unlock you added inside your C routines is masking the issue by reducing its probability. You may well find it's the same problem I have, and that the dojump patch resolves it completely.

Any thoughts or comments?

Matt
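
P.S. For context, the relevant macros in stock Lua 5.1 read roughly as follows (quoted from memory, so check them against your own copy):

  /* lvm.c */
  #define Protect(x)      { L->savedpc = pc; {x;}; base = L->base; }
  #define dojump(L,pc,i)  { (pc) += (i); luai_threadyield(L); }

  /* llimits.h defaults; a threaded build redefines lua_lock/lua_unlock */
  #define lua_lock(L)          ((void) 0)
  #define lua_unlock(L)        ((void) 0)
  #define luai_threadyield(L)  { lua_unlock(L); lua_lock(L); }

With real locks installed, luai_threadyield is exactly the window where another OS thread can enter the universe. My best guess at the mechanism - unproven, only the fix is tested - is that a GC step running in another thread can traverse and shrink this thread's stack (checkstacksizes in lgc.c), reallocating it and leaving the VM's cached base pointing at freed memory.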
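
For anyone wanting to reproduce the setup: by a single Lua "universe" with OS-level threading I mean one global_State shared by several OS threads, each driving its own lua_State created with lua_newthread, and lua_lock/lua_unlock mapped onto a real mutex. A minimal sketch with pthreads (the universe_lock/universe_unlock names are placeholders of my own, not anything in the Lua sources):

  /* in luaconf.h, so llimits.h doesn't install the no-op defaults */
  struct lua_State;
  void universe_lock (struct lua_State *L);
  void universe_unlock (struct lua_State *L);
  #define lua_lock(L)    universe_lock(L)
  #define lua_unlock(L)  universe_unlock(L)

  /* in one C file linked into the core */
  #include <pthread.h>
  struct lua_State;
  static pthread_mutex_t universe_mutex = PTHREAD_MUTEX_INITIALIZER;
  void universe_lock (struct lua_State *L)   { (void)L; pthread_mutex_lock(&universe_mutex); }
  void universe_unlock (struct lua_State *L) { (void)L; pthread_mutex_unlock(&universe_mutex); }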
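
A test driver under those assumptions would look something like this - one OS thread per lua_State, all in the same universe, error handling omitted. Every API call serialises on the mutex via lua_lock/lua_unlock, and luai_threadyield inside the VM is what lets the threads interleave mid-function:

  #include <pthread.h>
  #include <lua.h>
  #include <lauxlib.h>
  #include <lualib.h>

  static void *worker (void *ud) {
    lua_State *co = (lua_State *)ud;
    /* each API call takes and releases the universe lock internally */
    luaL_dostring(co, "for i = 1, 1e6 do local t = { i } end");
    return NULL;
  }

  int main (void) {
    lua_State *L = luaL_newstate();
    pthread_t tid[4];
    int i;
    luaL_openlibs(L);
    for (i = 0; i < 4; i++) {
      lua_State *co = lua_newthread(L);
      luaL_ref(L, LUA_REGISTRYINDEX);   /* anchor co so the GC keeps it alive */
      pthread_create(&tid[i], NULL, worker, co);
    }
    for (i = 0; i < 4; i++) pthread_join(tid[i], NULL);
    lua_close(L);
    return 0;
  }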