
On Mon, Nov 23, 2015 at 11:16:58PM +0100, Philipp Janda wrote:
> On 23.11.2015 at 22:46, Coda Highland wrote:
> >And as for where resource management goes, C++ code involves writing
> >resource management ALL OVER THE PLACE. You have to pay attention to
> >scoping and you have to either manually free heap-allocated memory or
> >explicitly use a container that does it for you. That's a far cry from
> >keeping the code centralized.
> Manually freeing heap-allocated memory is *not* RAII. To get the benefits
> you actually have to use it. C++ doesn't force you to use RAII (and it
> probably shouldn't -- it's too low-level for that), so you might be right
> about resource management in C++ code, but I still think I'm right about
> resource management with RAII.

You may need automatic, exception-safe destructors to implement RAII as it
was literally described by Stroustrup. But an RAII-like pattern can be
applied to any language. Notice that in RAII you're principally concerned
with three things:

1) Making sure that the lifetime of any resource is bound to the lifetime of
an automatic, scoped variable, usually via a container (smart pointer, etc).

2) Per the namesake, you allocate necessary resources during initialization.
In C++ that means resource allocation (files, memory) should generally occur
in the constructor of a more abstract object.

3) Conversely, resource deallocation should generally occur [indirectly]
from destructors, not haphazardly from various methods (performance
optimizations notwithstanding).
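For instance, in Lua the three points together might look like the following
(Connection and its single file-handle field are just hypothetical
placeholders for whatever your application actually manages):

```lua
local Connection = {}
Connection.__index = Connection

function Connection.new(path)
  -- (1)/(2): the handle is acquired during initialization, and its
  -- lifetime is bound to the Connection object that owns it.
  local self = setmetatable({ fh = false }, Connection)
  self.fh = assert(io.open(path, "r"))
  return self
end

function Connection:close()
  -- (3): deallocation happens in the destructor, not scattered
  -- through other methods.
  if self.fh then
    self.fh:close()
    self.fh = false
  end
end
```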

The whole point of RAII is that if you want to remain exception safe, you
want to minimize the number of possible states your object(s) can have,
which in turn minimizes the number of critical regions and thus the number
of possible control flow paths you need to worry about.

If you keep this discipline, you'll find that almost all your resources are
neatly nested as a simple tree with its root being some controlling object
that your application is centered around--for example, a connection context.
What you shouldn't see often are a bunch of arbitrary resources like files
living directly on the stack. Admittedly, with automatic destructors you
don't need to be as strict in this regard, especially when using a container
library that already wraps low-level objects like file handles. But the
original purpose of RAII was largely a discipline for how to wrap low-level OS
resources into a C++ container in an exception-safe manner. Doing everything
in constructors meant fewer places where you needed to catch exceptions and
manually unwind the state of resources not directly bound to the managed C++
object.

Thus, if you follow an RAII-like discipline in C or Lua, the places
where you need to _explicitly_ call a destructor like :close should be far
fewer. Most destructors will be called recursively via other destructors,
which means the _actual_ burden should be substantially less than one might
otherwise think in the absence of lexically scoped destructors. (This isn't
a solution to the problem. I'm just pointing out that the cost of the
problem can be substantially diminished.)
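A sketch of what that recursion looks like in Lua (Parent and Child here are
hypothetical stand-ins for real resources): callers only ever close the root,
and the root's destructor closes everything it owns.

```lua
local Child = {}
Child.__index = Child

function Child.new()
  return setmetatable({ open = true }, Child)
end

function Child:close()
  self.open = false
end

local Parent = {}
Parent.__index = Parent

function Parent.new()
  local self = setmetatable({ kids = {} }, Parent)
  for i = 1, 3 do
    self.kids[i] = Child.new() -- subresources acquired during initialization
  end
  return self
end

function Parent:close()
  for _, kid in ipairs(self.kids) do
    kid:close() -- destructors called recursively via the parent's destructor
  end
end
```

One explicit Parent:close() at the root replaces an explicit :close on every
subresource.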

In C++, your constructors can theoretically instantiate subobjects on the
stack and move them to object members before the constructor returns. But
usually you don't: you assign them directly to the member fields. In Lua you
would do the same thing. Instead of

	local fh = io.open("/some/path")
	local self = {}
	self.fh = fh
	return setmetatable(self, mt)

you should do something like

	local self = setmetatable({ }, mt)
	self.fh = io.open("/some/path")
	return self

or maybe even

	local self = setmetatable({ fh = false }, mt) --> preallocate slot

That way if an exception occurs, the lifetime of fh is bound to self. You
don't have to worry about calling :close on all the subresources
independently, only self:close.
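To sketch what the error path buys you (the close method and the
with_resource helper below are hypothetical, assuming a constructor shaped
like the one above): however far initialization or use got before the error,
one self:close() cleans up.

```lua
local mt = {}
mt.__index = mt

function mt:close()
  if self.fh then
    self.fh:close()
    self.fh = false -- tolerate double-close
  end
end

local function new(path)
  -- slot preallocated, then assigned directly: fh is bound to self
  local self = setmetatable({ fh = false }, mt)
  self.fh = assert(io.open(path, "r"))
  return self
end

local function with_resource(path, fn)
  local self = new(path)
  local ok, err = pcall(fn, self)
  self:close() -- all subresources released here, success or failure
  if not ok then error(err, 0) end
end
```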

There are all kinds of ways to get fancy with this. And there are various
corner cases to worry about. But the real point is to try to keep resources
nested in a neat hierarchy so that you usually need only worry about
explicit destruction in a relatively small number of places outside
destructors/finalizers. There may still be gaps where a reference to a
resource is lost, but at least they'll be fewer. If you follow an RAII-like
pattern (early, non-lazy allocation of resources), then dealing with
resource exhaustion should be less problematic in general.

If you have a long lived application where resource exhaustion (memory, file
descriptors) is a less theoretical concern, presumably it's using some sort
of event loop. I often step the Lua GC regularly from a timer. That way
there's an upper bound on how long it takes unreferenced resources to be
reclaimed, regardless of load.
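For example (the timer API here is hypothetical; substitute whatever your
event loop provides):

```lua
local function gc_step()
  -- Perform a bounded amount of incremental GC work. The step size is a
  -- tuning knob: larger values reclaim garbage sooner at the cost of
  -- longer pauses.
  collectgarbage("step", 100)
end

-- e.g. run every second, regardless of load:
-- timer.every(1.0, gc_step)
```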