lua-users home
lua-l archive


On Thu, Jan 15, 2015 at 09:54:02PM -0500, Rena wrote:
> Personally I've sometimes wished Lua had try/catch[1] or some kind of
> standard error/exception object, because I find the current method a
> bit awkward:
> -If you throw an error, then the caller needs to use pcall or xpcall
> to catch it, which adds overhead (and ugliness with pcalls littered
> everywhere) and especially is more complicated in C, and can make the
> execution flow hard to follow.

How do you square the wish for try/catch with your concern for the overhead
of pcall and the mental burden of tracking non-local jumps? Try/catch syntax
would just be syntactic sugar atop pcall. And you couldn't benefit from the
syntax from C code.

Arguably it's not hard to get nearly the same _feel_ of try/catch by using
something like:

try(function ()
	--> try code
end, function ()
	--> catch code
end, function ()
	--> finally code
end)
When you're typing f-u-n-c-t-i-o-n all day long, the additional verbosity of
that approach versus a braced syntax isn't that significant. If Lua has taught
me one thing, it's to be less concerned with verbose syntax. (I'm very
interested in Rust, but its terse keywords look abhorrent to me compared to
Lua. That's a preference I thought I'd never develop.)

And if your functions don't rely on upvalues (you can build try() so that it
passes parameters to the anonymous functions), you're not actually
instantiating a new function object on every call. All you're left with is
function invocations, which would probably have a similar cost to how
try/catch/finally would be implemented in the VM anyhow, at least in a
language and implementation like Lua.
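For concreteness, here's a minimal sketch of such a try() built on pcall. The
name and argument order are this example's own convention, not a standard API;
extra arguments are forwarded to the protected function so callers can avoid
closing over upvalues:

```lua
-- Minimal try/catch/finally helper built on pcall. Extra arguments are
-- forwarded to the protected function f so it need not capture upvalues.
local function try(f, catch, finally, ...)
	local ok, err = pcall(f, ...)
	if not ok and catch then
		catch(err)        -- run the "catch" handler with the error value
	end
	if finally then
		finally()         -- always run the "finally" handler
	end
	if not ok and not catch then
		error(err, 0)     -- rethrow if nobody handled it
	end
	return ok
end

-- Usage mirrors the pattern quoted above:
try(function ()
	-- try code
end, function (err)
	-- catch code
end, function ()
	-- finally code
end)
```

(A production version would likely use xpcall with a message handler to
capture a traceback before the stack unwinds.)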

The above is far from ideal, but the fact that almost nobody implements such
a pattern suggests that the benefit of try/catch/finally isn't as great as
people claim. If it were so awesome the above pattern would be much more
common in the Lua ecosystem, despite the ugliness. Rather, I think people
(and I always include myself in that group ;) are more concerned than they
realize with how the code looks than with how it works in practice.

> -If you return (nil, errmsg), then the caller has to examine the error
> message string to determine the issue, which is pretty ugly.

Except the typical answer to that is using the (nil, errmsg, errcode) tuple,
as you mentioned below (snipped). FWIW, in one of my libraries I tried to
dispense with errmsg in favor of just returning (nil/false, errcode), and it
definitely didn't work out well. It was both a premature optimization and an
over-simplification. Even where I make heavy use of the errcode, there are
invariably certain errors that can't be handled locally, in which case most
of the time I end up with code to generate an errmsg, whether to throw or
to log. Even a single line adds up when it needs to accompany every spot
that an error is checked. Those lines aren't executed very often, but they
still take up the same amount of source code space whether they're run 1% of
the time or 100% of the time.
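For concreteness, the (nil, errmsg, errcode) tuple convention might look like
the sketch below. The open_config name and the "ENOENT" code string are
illustrative assumptions, not a real API (stock io.open in Lua 5.2+ already
returns a third errno-style value on failure):

```lua
-- Sketch of the (nil, errmsg, errcode) return convention. open_config and
-- the "ENOENT" code string are invented for this example.
local function open_config(path)
	local fh, errmsg = io.open(path, "r")
	if not fh then
		return nil, errmsg, "ENOENT"   -- assumed code for this sketch
	end
	return fh
end

local fh, errmsg, errcode = open_config("/no/such/file")
if not fh then
	if errcode == "ENOENT" then
		-- handled locally, e.g. fall back to defaults
	else
		error(errmsg)   -- can't handle locally: throw (or log) the errmsg
	end
end
```

The caller dispatches on errcode rather than string-matching errmsg, which is
exactly the point of carrying the third value.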

There's definitely no easy answer to handling errors. IMO, it's all too
context-specific to make general claims. Regardless of one's preferred
abstraction, real world code that isn't batch-processing tidy datasets needs
to interface with and interoperate with other code, including the operating
system. The least common denominator is simple integer error codes, but in
any event it needs to be relatively convenient to communicate and translate
errors and error types across interface and component boundaries.

The only unequivocally useful approach I've found is to try to keep code
which can fail as localized as possible. And to design my code so that the
vast majority of the logic is organized and implemented as routines which
have no failure mode. This works equally well in C, C++, Lua, and every
other language, regardless of the error handling strategy. You don't need to
worry about errors when there can't be any ;)
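As a toy illustration of that split (the point format and function names are
invented for this sketch): parsing, which can fail, is confined to one
boundary routine, while the core logic assumes validated input and has no
failure mode at all.

```lua
-- parse_point can fail: it validates untrusted text at the boundary,
-- returning a point table or (nil, errmsg).
local function parse_point(s)
	local x, y = s:match("^(%-?%d+),(%-?%d+)$")
	if not x then
		return nil, "malformed point: " .. tostring(s)
	end
	return { x = tonumber(x), y = tonumber(y) }
end

-- manhattan has no failure mode: given valid points, it cannot error,
-- so no error checking clutters the logic.
local function manhattan(a, b)
	return math.abs(a.x - b.x) + math.abs(a.y - b.y)
end

local p = assert(parse_point("3,4"))
local q = assert(parse_point("0,0"))
local d = manhattan(p, q)   -- pure arithmetic, no checks needed
```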

(Note that in C I always try to handle out-of-memory and other errors
gracefully--i.e. without exiting the entire process, but only failing the
particular task or job. Localizing errors is not the same as ignoring
errors. But it requires not only thinking through program flow, but also
data structures. It's becoming something of a lost art to tailor one's data
structures to the specific task at hand--it's decried as reinventing the
wheel, but IMO that's a gross misunderstanding--yet it's absolutely
necessary when simplifying code, particularly for minimizing errors and