- Subject: Re: How far should one go preventing potential memory leaks?
- From: William Ahern <william@...>
- Date: Tue, 10 May 2016 14:32:22 -0700
On Mon, May 09, 2016 at 10:37:40AM -0300, Roberto Ierusalimschy wrote:
> > I would like to open a discussion on best practices for preventing
> > memory leaks in C programs using the Lua API.
<snip>
> > My personal take on this is, btw: I don't care as long as I don't
> > reference NULL pointers. If we are under memory pressure and functions
> > like lua_pushstring() are starting to fail, we will be in much deeper
> > trouble anyway soon... ymmv, opinions very welcome.
[To OP]
It depends. If your process is servicing thousands of WebSocket connections
streaming multimedia, most of which have reached a steady state, do you
really want the next request that comes in to kill the entire process
because it couldn't allocate a string? All of those tasks are often easy to
isolate conceptually and in the implementation.
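For the lua_pushstring() case specifically, one way to contain the failure
is to do the allocating work inside a protected call, so that running out of
memory fails only that one request. A rough sketch, assuming Lua 5.3
(handle_request() and its caller are hypothetical):

    #include <lua.h>

    /* Runs inside lua_pcall; an allocation failure here becomes
     * LUA_ERRMEM instead of going through the panic function. */
    static int push_response(lua_State *L) {
            const char *payload = lua_touserdata(L, 1);
            lua_pushstring(L, payload);   /* may raise a memory error */
            return 1;
    }

    /* Returns 0 with the response string on the stack, or -1 if this
     * particular request couldn't be serviced. The process and the
     * other connections keep running either way. */
    static int handle_request(lua_State *L, const char *payload) {
            lua_pushcfunction(L, push_response);       /* no allocation in 5.3 */
            lua_pushlightuserdata(L, (void *)payload); /* never allocates */
            if (lua_pcall(L, 1, 1, 0) != LUA_OK) {
                    lua_pop(L, 1);   /* discard the error object */
                    return -1;
            }
            return 0;
    }
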
OTOH, in some GUI applications there may be little difference between
failing a particular request and crashing the entire application from the
user's perspective. And isolating tasks can be difficult when dealing with
shared, complex data structures. Though, FWIW, something like Photoshop is
superficially an obvious candidate for crashing on OOM, yet its users would
be rightfully upset if the whole application crashed because a particular
transformation ran out of resources.
iOS and, I think, Android offer a third way. You handle every user request
transactionally because the kernel might kill you at any time for any reason
whatsoever, or no reason at all. That works especially well for single-user
GUI applications, but very poorly for highly concurrent network daemons.
My rule of thumb is that libraries should handle OOM gracefully and bubble
up the error. Policy on OOM is the application's prerogative, and libraries
should work well regardless of the policy.
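In C terms, that mostly means returning the failure instead of calling
abort() or exit() when an allocation fails. A minimal sketch (the names
here are made up for illustration):

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    struct buffer { char *data; size_t len; };

    /* Library code: report ENOMEM and leave the policy decision
     * (retry, shed load, or die) to the application. */
    int buffer_init(struct buffer *b, const char *src) {
            size_t len = strlen(src);
            b->data = malloc(len + 1);
            if (b->data == NULL)
                    return ENOMEM;
            memcpy(b->data, src, len + 1);
            b->len = len;
            return 0;
    }
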
> I think both options are valid. If you are writing a public library,
> I would go the first way; often this userdata has some useful meaning
> to be exposed in Lua. As an example, streams (FILE*) in the I/O library
> follow this pattern.
>
> For your own code (not to be used by others), the second line is fine,
too. Even if your program does not crash under huge memory pressure,
> Linux will crash it for you anyway...
You can disable overcommit, and it's not terribly uncommon to do that. I
disable overcommit on my Linux servers and don't provision swap on any of
them. I don't think it's possible to avoid the OOM killer entirely on
Linux, but once all privileged processes hit a steady state I think the
threat is mostly gone and you can rely on OOM to provide back pressure for
managing request load. (IMO there are parallels between overcommit and the
phenomenon of Bufferbloat. In both cases engineers attempt to maximize local
performance by aggressively relying on memory, but the overall result is
poor QoS globally because the larger system doesn't benefit from back
pressure mediating resource utilization in a fine-grained manner.)
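For reference, disabling overcommit is just a pair of sysctls (the ratio is
a tuning knob; 80 below is only an example):

    # /etc/sysctl.conf: mode 2 = strict commit accounting, no overcommit
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 80
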
Plus, regardless of overcommit, allocation can always fail because of
policy, such as a resource limit set by the administrator, supervisor
process, or application. Linux has fairly rigorous memory accounting for
userspace processes as part of its security subsystem. AFAICT from examining
the kernel code, even fork can fail because of a memory limit.
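For example, a supervisor can impose such a limit on itself and its children
with setrlimit(); the 512 MiB figure below is just an illustrative number:

    #include <sys/resource.h>

    /* Cap the address space so malloc()/mmap() fail with ENOMEM at a
     * known ceiling instead of the process getting OOM-killed later. */
    int cap_address_space(void) {
            struct rlimit rl;
            rl.rlim_cur = 512UL * 1024 * 1024;   /* soft limit */
            rl.rlim_max = 512UL * 1024 * 1024;   /* hard limit */
            return setrlimit(RLIMIT_AS, &rl);    /* 0 on success, -1 on error */
    }
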
I agree that if it's your own code and an environment you control, it's
certainly a legitimate strategy. But correct handling of allocation failure
is not something easily added after the fact.