There's a fundamental difference even if the effect looks the same: the loop is not supposed to leak memory, yet nothing gets garbage collected, and most of the CPU time is spent by the garbage collector trying, without success, to finalize more and more objects. That time is spent not in the Lua program itself but inside the Lua virtual machine, which spins completely out of control and will probably crash before it can even invoke the panic function.
This would not occur if there were a way to set safe limits on the allocations a "thread" (in fact just a coroutine) can make: only the single thread that exceeds its limit would be affected, seeing error() raised or nil returned for its allocations, while the other threads would keep running unaffected. No panic event would occur, and a parent thread could detect the situation, terminate only the offending thread, and take other preventive measures (since it would be able to identify where that thread came from).
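For reference, the closest thing the stock C API offers today is a custom allocator installed per lua_State (lua_newstate / lua_setallocf), not per coroutine. The sketch below shows that approximation: a capped allocator that refuses requests beyond a fixed budget, so a runaway allocation loop fails with a regular "not enough memory" error inside a protected call instead of dragging the whole process down. The names MemLimit, limited_alloc and the 8 MiB budget are hypothetical, and this assumes Lua 5.3+.

  /* Capped allocator sketch: one budget per lua_State (stock Lua has no
   * per-coroutine allocator hook, so this is only an approximation). */
  #include <stdio.h>
  #include <stdlib.h>
  #include <lua.h>
  #include <lauxlib.h>
  #include <lualib.h>

  typedef struct MemLimit {
      size_t used;   /* bytes currently allocated through this allocator */
      size_t max;    /* hard cap; requests beyond it return NULL */
  } MemLimit;

  static void *limited_alloc(void *ud, void *ptr, size_t osize, size_t nsize) {
      MemLimit *ml = (MemLimit *)ud;
      if (nsize == 0) {                        /* free request */
          if (ptr) ml->used -= osize;
          free(ptr);
          return NULL;
      }
      /* osize is the old block size only when ptr != NULL */
      size_t old = (ptr != NULL) ? osize : 0;
      if (ml->used - old + nsize > ml->max)
          return NULL;                         /* Lua retries after an emergency GC,
                                                  then raises "not enough memory" */
      void *np = realloc(ptr, nsize);
      if (np) ml->used = ml->used - old + nsize;
      return np;
  }

  int main(void) {
      MemLimit ml = { 0, 8 * 1024 * 1024 };    /* 8 MiB budget (example value) */
      lua_State *L = lua_newstate(limited_alloc, &ml);
      if (L == NULL) return 1;
      luaL_openlibs(L);
      /* The runaway loop now fails inside this protected call. */
      if (luaL_dostring(L, "local t = {} for i = 1, 1e9 do t[i] = ('x'):rep(100) end"))
          printf("caught: %s\n", lua_tostring(L, -1));
      lua_close(L);
      return 0;
  }

Because the budget is attached to the whole state, one greedy coroutine still starves its siblings; that is exactly the per-thread accounting the paragraph above says is missing.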
This is something needed for Lua programs that run threads to implement a web service, or simply (as in JavaScript) to use Lua as a hypervisor virtualizing a complete OS: there must be a way to track and restrict the resource usage of each thread. It is also needed to protect against Meltdown-like timing attacks targeting Lua's allocator or garbage collector, whose execution time can become extremely long and thus easy to exploit as a very effective "spying" side channel.
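What can be restricted per coroutine today is CPU, at least in terms of VM instructions, via the standard debug-hook API. The sketch below (assuming Lua 5.4's lua_resume signature; cpu_guard and STEP_BUDGET are made-up names) installs a count hook on a coroutine so a runaway loop is aborted after a fixed instruction budget, and only that coroutine dies: the error comes back to the caller of lua_resume, and the parent state keeps running.

  /* Bounding CPU per coroutine with lua_sethook / LUA_MASKCOUNT.
   * This limits instruction counts, not memory or wall-clock time. */
  #include <stdio.h>
  #include <lua.h>
  #include <lauxlib.h>
  #include <lualib.h>

  #define STEP_BUDGET 1000000   /* VM instructions allowed per resume (example) */

  static void cpu_guard(lua_State *co, lua_Debug *ar) {
      (void)ar;
      /* Raising an error from a count hook aborts the running coroutine;
       * the error is reported to the caller of lua_resume. */
      luaL_error(co, "thread exceeded its CPU budget");
  }

  int main(void) {
      lua_State *L = luaL_newstate();
      luaL_openlibs(L);

      lua_State *co = lua_newthread(L);
      lua_sethook(co, cpu_guard, LUA_MASKCOUNT, STEP_BUDGET);

      luaL_loadstring(co, "while true do end");     /* runaway loop */
      int nres = 0;
      int status = lua_resume(co, L, 0, &nres);     /* Lua 5.4 signature */
      if (status != LUA_OK && status != LUA_YIELD)
          printf("terminated offending thread: %s\n", lua_tostring(co, -1));

      lua_close(L);                                  /* parent state unaffected */
      return 0;
  }

There is no equivalent hook for allocation or GC time, which is why the per-thread accounting argued for above would have to come from the VM itself.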