- Subject: Re: Proposal for a standard way of defining custom operators in Lua
- From: Lorenzo Donati <lorenzodonatibz@...>
- Date: Thu, 25 Jul 2013 13:51:00 +0200
On 24/07/2013 13.38, Coda Highland wrote:
> On Wed, Jul 24, 2013 at 2:53 AM, Leo Romanoff <romixlev@yahoo.com> wrote:
>> [snip]
>
> Now, I'm a big fan of overloaded operators -- I'm a shameless C++
> junkie. I'd like to have operator overloading in Lua.
>
> But I don't think the idea meshes well with your original rationale!
> If you're talking about compiling some other language to Lua with the
> rationale of taking advantage of Lua as a high-performance, low-level
> target, then operator overloading is actually counterproductive. It
> just adds an additional overhead to parse time, with no benefit --
> you're ALREADY running the code through a compiler to generate the Lua
> code in the first place, so why shouldn't you output the most
> efficient code possible in the first place? The code is probably not
> going to look like something a human would have written ANYWAY, so
> operator overloading won't help with readability either.
>
> So what's left?
>
> /s/ Adam
Well, I have mixed feelings regarding operator overloading.
First of all, I only consider it useful for enhancing readability, not
for "information packing". Moreover, I think it enhances readability
"for the general public" only if the problem domain in which it is
employed has a preexisting, well-established algebra-like notation with
well-known semantics. In those cases I find it *invaluable*: for
example, applied to a bignum library or a vector-algebra library (or
the possible upcoming general Lua bit library).
For these applications I usually find that Lua already does its job
with the current "overloading" mechanism (yes, from time to time I've
wished for some more flexibility, but no big showstopper there).
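To make that concrete, here is a minimal sketch of the standard
metamethod mechanism applied to a tiny 2D-vector type (the names are
mine, invented for the example, and don't come from any particular
library):

-- a tiny 2D vector "class" using Lua's standard metamethods
local Vec = {}
Vec.__index = Vec

function Vec.new(x, y)
  return setmetatable({x = x, y = y}, Vec)
end

-- "+" between two vectors, via __add
function Vec.__add(a, b)
  return Vec.new(a.x + b.x, a.y + b.y)
end

-- "*" as vector-times-scalar, via __mul
-- (this sketch only handles the v * k operand order)
function Vec.__mul(v, k)
  return Vec.new(v.x * k, v.y * k)
end

-- "==" between two vectors, via __eq
function Vec.__eq(a, b)
  return a.x == b.x and a.y == b.y
end

function Vec.__tostring(v)
  return "(" .. v.x .. ", " .. v.y .. ")"
end

local v = Vec.new(1, 2) + Vec.new(3, 4) * 2
print(v)                   --> (7, 10)
print(v == Vec.new(7, 10)) --> true

Here the notation works precisely because vector algebra already has a
well-known meaning for +, * and ==.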
For a generic user-defined DSL, where the above conditions are not met,
I don't think it is particularly useful. Yes, you *could* define a
particularly enticing infix syntax with custom operators in a problem
domain that has none, but it would be *your* syntax, and everyone using
the DSL or the library would have to learn it. In these cases I find
functional/method notation far better.
Assuming the reader (and the writer too :-) has a reasonable (for a
given definition of reasonable) knowledge of the English language and
of the specific problem domain, he can usually guess the *meaning* of
an operation just by looking at function names (of course he has to
read the docs to master the tool, but visual memory helps when reading
code).
Compare these silly examples in pseudo-language:
-- functional version
c = new Collection()
c.put(new Foo())
c.put(new Bar())
y = c.find("pluto")
-- operational version
c = {}
c <- !Foo -- assuming "!" is the operator "new"
c <- !Bar
y = c ? "pluto"
Which one would be more readable to a novice?
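For what it's worth, the functional version maps to plain Lua almost
verbatim. A runnable sketch (the Collection implementation and the
item shape are my inventions, just to mirror the pseudo-code):

-- a minimal Collection, hypothetical, just to make the example run
local Collection = {}
Collection.__index = Collection

function Collection.new()
  return setmetatable({items = {}}, Collection)
end

function Collection:put(item)
  self.items[#self.items + 1] = item
end

function Collection:find(name)
  for _, item in ipairs(self.items) do
    if item.name == name then return item end
  end
end

local c = Collection.new()
c:put({name = "pippo"})
c:put({name = "pluto"})
local y = c:find("pluto")
print(y and y.name) --> pluto

No new syntax to learn: the method names carry the meaning.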
Moreover, as the number of new operational *syntaxes* grows (especially
within the same project - maybe using different libraries with
different DSL syntaxes), the user has more chances of getting confused
(as anyone who knows many programming languages knows too well!).
As a programmer I'm OK with having to browse the docs to find the
*meaning* of a function/method, because this usually happens once in a
while (for the same name): the name itself reminds me of the meaning.
But I know I *will* forget what that pesky *$? symbol means unless I
type it hundreds of times, and this will force me to re-read the docs
over and over just to say "ah, yes, yes... that meant that" (remember,
I'm talking about problem domains without established operational
notations).
The meaning of symbols outside very specific problem domains is too
culture-dependent; anyone reading a new, uncommon symbol could
interpret it in a different way. E.g. take a Pascal programmer and ask
her what the symbol ^ means. I had a student just this year (I work as
a high school teacher) who was surprised to learn that in C++ that
symbol has a completely different meaning (she learned the hard way:
she used ^, and the program for the assignment "worked", i.e. it ran,
but then "funny" things happened :-). As another example: to a C
programmer, would the symbol <- suggest an assignment or a pointer
operation?
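Just to make the point concrete in Lua terms (a tiny illustration of
"same symbol, different meanings"):

-- in Lua, "^" is exponentiation:
print(2 ^ 3) --> 8
-- in C/C++, the same expression `2 ^ 3` is bitwise XOR and yields 1;
-- in Pascal, ^ dereferences a pointer. One glyph, three meanings,
-- and none of them signals the mismatch at a glance.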
Using names (even more so if they are English words) adds redundancy
and helps memory, and thus readability, in the general case (IMHO).
Yes, choosing good names is more an art than a science, but choosing
good symbols sometimes is akin to witchcraft!
Cheers
-- Lorenzo
--
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments