- Subject: Re: syntax heresy
- From: Uli Kusterer <witness.of.teachtext@...>
- Date: Fri, 12 Aug 2005 18:39:21 +0200
On Aug 12, 2005, at 9:17:48, Boyko Bantchev wrote:
> Having read your postings, I feel that I should answer
> some of your criticism, and, conjointly, provide a rationale
> for what I described in my initial posting.
I guess the "criticism" part refers to the other posters' messages
you also addressed in this message. I tried *very hard* to sound as
neutral as possible. :-)
> First of all, I did not mean my proposal to be, as you might
> have thought, the beginning of some more ambitious work of
> mine. It was really about a simple-minded preprocessor that,
> I think, allows me to create Lua programs in a more rational,
> succinct and readable way.
More rational and readable in your opinion? Fine with me.
> I wanted to receive some opinions, which I did. I also hoped
> to hear about more or less similar work being done, or having
> been done already. In this I (almost) failed, but I will
Well, opinions you got. The problem seems to be that most people
just didn't share your opinion :-)
>> Most of them got pretty far, then
>> realized that they wanted to do more than this solution
>> would allow. As you seem to be using the same technique
>> (at least that's what some of your statements sounded
>> like to me), I didn't really have much hope for this
> As I said, I didn't want to get anywhere else. What the
> proposal aimed at is what it already does. I may change
> some details, or even redo the whole thing in awk, in order
> to deal with the problems I mentioned, but that is all.
By "getting anywhere" I was mainly referring to the "problems you
mentioned". You wrote how certain parts of the code didn't yet
translate right with your search/replace-based approach. My friends
(at the PXI/CHASM/WildFire/Sphere project, in case you want to try
finding out more about their experiences) tried to do something
similar with HyperTalk. Admittedly, HT is much more free-form, but
nonetheless most modern programming languages require you to have *at
least* a full tokenizer for the language to correctly translate. In
addition, languages like HyperTalk have many tokens whose behavior
changes depending on context. The pathological case pertaining to HT is:
   put "hello" into buttons
   if the number of card buttons = 15 then
      -- (a) the number of the single card named "hello",
      --     "buttons" being a variable holding that name
   if the number of card buttons = 15 then
      -- (b) the count of all buttons on the card layer
      --     of the current card
Okay, this may be an example that would in reality be better served
by actually reserving the identifier "buttons", but I couldn't think
of a better one right now. Anyway, most languages have similar cases,
and if you don't build a full parser, it's easy to screw up or cause
oddities. SuperCard (www.supercard.us), for instance, supports "=
not" as an alternate spelling for "is not", which, IMHO, is simply
wrong, and stems exactly from solving token alternatives at the
tokenizer level instead of in the parser.
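To make the pitfall concrete, here is a minimal sketch (in Python, purely for illustration; the "<-" assignment shorthand is an invented example, not taken from the original proposal) of how a plain search/replace pass corrupts string literals, while even a crude tokenizer that merely recognizes string literals avoids that:

```python
import re

# Hypothetical preprocessor shorthand: "<-" should be rewritten to "=".
SOURCE = 'print("use <- for assignment")\nx <- 5\n'

# 1) Naive search/replace, as a sed/awk one-liner would do it.
#    The arrow inside the string literal gets rewritten too:
naive = SOURCE.replace("<-", "=")
assert naive == 'print("use = for assignment")\nx = 5\n'

# 2) A crude tokenizer pass: split out string-literal tokens and
#    only rewrite the arrow in code context.
def translate(src):
    out = []
    for tok in re.split(r'("(?:[^"\\]|\\.)*")', src):
        if tok.startswith('"'):
            out.append(tok)                      # leave strings untouched
        else:
            out.append(tok.replace("<-", "="))   # rewrite only in code
    return "".join(out)

assert translate(SOURCE) == 'print("use <- for assignment")\nx = 5\n'
```

A real preprocessor would of course also have to recognize comments, long strings, and the like, which is exactly why "at least a full tokenizer" is the usual minimum.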
> On the other hand, I am interested in learning about
> relevant work of other people, if there is such.
Sphere has a Yahoo Group. You could also check out the Compilers101
and QDepartment Yahoo groups. The latter two are compiler-design
mailing lists with a lot of overlap, so I don't know which to recommend.
Let me just finish by mentioning that, while these symbols may be
faster from eye to mind, that only works as long as the symbols are
intuitive to you (which is typically the case when you invent them
yourself or have used them regularly for years).
The problem is that a lot of source code is written by or worked on
by several people, and in that case, anyone else looking at the code
will first have to learn your shortcuts. The effort and time spent
learning (seemingly) arbitrary shortcuts is typically much greater
than the additional time it takes to read a full word, especially
since most adults read entire words and not single characters (they
recognize a word by a basic shape, and only actually read each
character when it doesn't fit in the sentence).
The time spent writing long identifiers is negligible in this case,
since typically code is written once and read tens of thousands of
times. Modern programming languages (and the better maths
textbooks) are honed to find the best compromise between being short
and concise and still readable without having memorized everything in
it. So, I just don't see myself using something this extreme on the
"concise" end of the scale, something that requires me to hire a
specialist to read it if I can't maintain it myself anymore.
-- M. Uli Kusterer