That's a good approach, given that the GC naturally runs as a loop of cycles, which allows an object not to be swept immediately but to be collected in the next cycle (after a new collect phase).
I think that what Sony L. wants is to be able to run the GC recursively within the same thread, so that a __gc metamethod can call any other function that might allocate new objects (and thus potentially trigger another GC; however, that call is blocked by the fact that the thread already has an active sweeping phase, so any attempt to start another GC does nothing and the work has to wait for the next cycle).
I can imagine that there are cases where such blocking may cause memory to never be deallocated, because each cycle actually increases the number of allocated objects without being able to free any of them.
This is not really a lack of "reentrance" but a lack of "recursivity" (which is already blocked silently as soon as the GC is invoked), and that may cause problems (only if your __gc metamethods are doing very complex things that require allocating new additional objects which themselves need such a __gc metamethod for their own finalization).
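A minimal sketch of that silent blocking (assuming Lua 5.3/5.4 behavior, where the collector refuses to run again while it is already executing a finalizer; the exact result of the inner call is version-dependent, so I don't assert it here):

```lua
-- A finalizer that allocates new garbage and then tries to trigger a
-- nested collection. The inner collectgarbage() does not start a
-- recursive cycle: the collector is already active, so the freshly
-- allocated object simply waits for a later cycle.
local obj = setmetatable({}, {
  __gc = function(self)
    local t = { "allocated inside the finalizer" } -- new garbage
    collectgarbage("collect") -- silently blocked: no recursive GC here
  end
})
obj = nil
collectgarbage("collect") -- runs the finalizer above exactly once
```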
Reentrance, on the other hand, should not be a problem (unless there's some communication/synchronization mechanism between threads that forces one thread to wait for the completion of the GC of another thread, in which case you would get a deadlock where each thread's GC is blocked, does nothing in its own GC cycle, and everything is delayed to the next cycle).
On a single-tasking system where there's no multithreading or multiprocessing, only cooperative coroutines, the GC becomes blocking whenever recursion would be needed (but recursion is not possible from within the dedicated coroutine). The application is then naturally blocked, as no other coroutine is resumed before the mark/sweep phase completes and finalization has been properly applied; otherwise finalization would be delayed indefinitely into a never-ending loop of GC cycles (which could consume significant CPU processing time without the thread ever entering an idle cycle, except by forced pauses to do nothing).
There are not a lot of programs that use __gc metamethods for finalization. Basically, most uses are for terminating I/O and freeing resources when I/O completes (i.e. for emulating async I/O, e.g. for network sockets, which don't necessarily run in a separate thread but in a coroutine of the same thread). For such use there's normally no need to allocate new memory resources: it's enough to "free" them by removing a reference, which will then cause the objects to be collected and swept in the next cycle without needing additional I/O, so the finalization is much simpler and can be delayed safely without creating infinite loops.
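That typical pattern can be sketched as follows (`rawsocket` and its `close()` method are placeholders for whatever I/O library is in use; this is an illustration, not a specific API):

```lua
-- Wrap an I/O handle so that dropping the last reference is enough:
-- the __gc metamethod only releases the handle, allocating nothing,
-- so it is safe even if the sweep is delayed to a later cycle.
local function wrap(rawsocket)
  return setmetatable({ handle = rawsocket }, {
    __gc = function(self)
      if self.handle then
        self.handle:close()  -- release the resource; no new allocations
        self.handle = nil
      end
    end
  })
end
```

The caller never closes anything explicitly; it just lets the wrapper go out of scope.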
If your __gc finalization metamethods do more complex things requiring allocation of new resources, in my opinion the program has a design bug: the resources needed to free objects should have been allocated with the object, long before it gets dereferenced and then garbage-collected for finalization. Finalization should not allocate memory, except for very simple objects that have NO __gc finalization routine of their own (such as strings or simple preallocated buffers/indexed arrays).
Such a program can be modified to use a message loop based on timers to schedule worker coroutines for its actual work, plus another helper coroutine that performs explicit GC cycles on objects whose finalization has been delayed and is managed externally (not in the worker coroutines themselves): this emulates pseudo-threads (like those in old versions of Windows with non-preemptive kernels and cooperative programs using message loops and priority lists).
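A toy sketch of that scheduling idea, assuming a plain round-robin loop instead of real timers (the names `workers`, `gc_helper` and `run` are illustrative, not from any library):

```lua
-- Cooperative scheduler: worker coroutines do the actual work, while one
-- helper coroutine drives the collector in small increments, so that
-- delayed finalization is paid for outside the workers themselves.
local workers = {}

local gc_helper = coroutine.create(function()
  while true do
    collectgarbage("step", 10)  -- one small GC increment per time slice
    coroutine.yield()
  end
end)

local function run()
  repeat
    local busy = false
    for _, co in ipairs(workers) do
      if coroutine.status(co) == "suspended" then
        coroutine.resume(co)
        busy = true
      end
    end
    coroutine.resume(gc_helper)  -- give the collector its own slot
  until not busy
end
```

In a real program the `repeat` loop would be driven by timer messages rather than spinning, but the division of labor is the same.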
True multithreading in Lua is not easy: we can create threads, but there's little way to synchronize them cleanly (except by using external I/O). We don't have mutexes, and no critical sections providing the atomic operations needed to build such mutexes or to control access to shared variables and communication buffers. And there's no scheduler we can control (all threads created in Lua are equal); only coroutines within the same thread are easily controllable. Threads were added mostly to support server applications communicating with many independent clients (where no client can control what another client does, each client having its own resources that can be freed at once in a single operation, where all objects of the thread are instantly marked for finalization and finalization then runs in a tight loop).