lua-l archive


On 4/15/07, Thomas Lauer <> wrote:
> Rici Lake <> and...
> David Given wrote:
> > It occurs to me that a reasonably elegant way of implementing non-trivial
> > syntax modifications to Lua would be to have a totally separate, off-line
> > compiler --- most likely written in Lua --- that reads in a Lua chunk,
> > transforms it somehow, and then outputs it again as vanilla Lua source that
> > can be passed to the real interpreter.

> I have toyed with that idea as well, using Lua as a sort of backend for
> an extended version of itself. The biggest hurdle I faced was to make
> sure that the "pre-compiler" (which was admittedly a rather simple
> affair) spits out 100% correct Lua code. Otherwise the diagnostics are [...]

I really think that the right interoperability levels are:
- unprocessed sources (which implies that the sources must contain the references to all extensions they might use)
- bytecode if you know your target architecture.

If you really need a universally portable format across machines (although I don't think that's a strong need, except maybe for an Erlang-like concurrency framework), then compile with large number types, and rely on a bytecode converter such as ChunkSpy to ensure portability w.r.t. number types and endianness.

And when it comes to interacting between the basic compiler and the extensions, or between different extensions, the right abstraction level clearly is AST: you want to handle statements, blocks, expressions as autonomous entities; you want to be able to shuffle them, restructure them, use them as table keys, insert them into one another, transform them in non-local ways...
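To make that concrete, here is a minimal sketch of ASTs as plain Lua tables. The `tag`-field node shapes are invented for this example (metalua uses a similar tagged-table convention):

```lua
-- Each AST node is a plain table with a `tag` field naming the
-- construct (a made-up convention for this sketch).
local print_x = { tag = "Call",
                  fn   = { tag = "Id", name = "print" },
                  args = { { tag = "Id", name = "x" } } }

-- Statements are first-class values: they can be stored, used as
-- table keys, and spliced into other nodes.
local seen = {}
seen[print_x] = true                 -- a node used as a table key

local loop = { tag = "While",
               cond = { tag = "True" },
               body = { print_x } }  -- the same node spliced into a block

assert(seen[loop.body[1]])           -- same entity, restructured
```

Because nodes are ordinary tables, "non-local" transforms are just recursive table walks, with no re-parsing anywhere.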

As frequently mentioned here in macro system discussions, compile-time extensions aren't worth it if they only bring some trivial syntactic sugar such as += operators: giving up an interoperable standard syntax only makes sense to support different ways of thinking, and such advanced features aren't practical to capture at the token-stream level.

The main issues with the dataflow extended_lua --> plain_lua --> bytecode are:

- aesthetically, it feels wrong: source formats ought to be for users to read and write, not for computers. And it adds many steps that don't make sense in the compilation process, except historically: that's not the Lua Way.

- error handling is a nightmare. It's already non-trivial in a proper meta-compiler which supports runtime extension (it is mostly unaddressed in the published versions of metalua, and not yet complete in the development version). But if you add a "generated source" layer in the middle, it will be very hard to generate proper messages upon compilation errors, and even harder for runtime errors. It will also be hard not to get confused between the official and the hacked grammars. Maintenance will be too hard to be done correctly in practice. Try using m4 as a C preprocessor, even on a tiny project of a few hundred lines, if you want to know how it would feel.
- you ought to have a standard representation of ASTs in extensions anyway: it keeps them reasonably readable and maintainable, saves huge amounts of work, and allows different extensions to cohabit. From there, it's much easier and cleaner to have the bytecode compiler read ASTs, rather than:
  * generating plain sources in a robust way,
  * adding enough checking to be 100% sure no uncompilable code can reach the "real" compiler,
  * finding a hack to sync debug information with the unprocessed sources,
  * figuring out that different extension hacks simply can't work together.
These steps essentially amount to hacking a second compiler in front of the first one: that's a *huge* design smell!
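The diagnostics problem is easy to demonstrate: once a preprocessor has emitted plain Lua, error positions refer to the generated text rather than to anything the user wrote. A small sketch (the file name and generated layout are invented here; `loadstring` is the Lua 5.1 API, `load` in later versions):

```lua
-- Suppose a preprocessor expanded an extension and emitted this plain
-- Lua text (file name and layout invented for this example):
local generated = [[
local __tmp1 = extension_runtime()  -- line inserted by the preprocessor
return __tmp1 ..
]]

-- The compiler reports the error against the *generated* source, at a
-- line and symbol the user never wrote:
local chunk, err = loadstring(generated, "@user_code.xlua")
assert(chunk == nil)
print(err)  -- points into the generated text, e.g. "user_code.xlua:3: ..."
```

Mapping that position back to the original extended source, for every extension and every combination of extensions, is exactly the part that never gets done properly.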

> Also I have not hit upon a clear, standardised mechanism (other than
> pipes) to interface between the pre-compiling phase and the Lua backend.

Exchanging table data in memory works very well. If for some reason you want them to run in separate processes, you can marshal them. There's no such thing as a free lunch here: if you want your backend to interface nicely with other programs, you have to design its interface in a program-friendly way.
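As a rough illustration of the marshalling option, a toy serializer for acyclic tables with string keys and string/number/table values might look like this (real code would use a proper library, but the principle is just recursive dumping):

```lua
-- Toy marshaller: dumps a table as a loadable Lua expression.
-- Limitations (deliberate, for brevity): no cycles, string keys only,
-- string/number/table values only.
local function marshal(t)
  local parts = { "{" }
  for k, v in pairs(t) do
    local key = ("[%q]"):format(k)
    local val = type(v) == "table"  and marshal(v)
             or type(v) == "string" and ("%q"):format(v)
             or tostring(v)
    parts[#parts + 1] = key .. "=" .. val .. ","
  end
  parts[#parts + 1] = "}"
  return table.concat(parts)
end

local dumped = marshal{ tag = "Number", value = 3 }
-- The receiving process rebuilds the table by compiling the string:
local copy = loadstring("return " .. dumped)()
assert(copy.tag == "Number" and copy.value == 3)
```

The point is that an AST serialized this way is still an AST on the other side of the pipe, whereas generated source text has already thrown the structure away.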
> There are a number of useful and
> interesting constructions which can be implemented by generating VM
> code, but cannot be implemented (efficiently) as source code transforms.

If "source code transforms" means a thin layer of search-and-replace,
you're probably right. But a fully-fledged AST compiler might do a very
good job of optimising things. And even if some features are not
compiled as efficiently as Lua could do them, other parts might be much
more efficient, exactly because an AST-based compiler can do things that
the fast Lua compiler can't.
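For instance, constant folding is trivial to express at the AST level but hopeless as a textual search-and-replace. A sketch, again over invented tagged-table nodes:

```lua
-- Bottom-up constant folding over a (made-up) tagged-table AST:
-- fold children first, then collapse "add" nodes whose operands
-- are both literal numbers.
local function fold(node)
  if type(node) ~= "table" then return node end
  for k, v in pairs(node) do node[k] = fold(v) end
  if node.tag == "Op" and node.op == "add"
     and node.lhs.tag == "Number" and node.rhs.tag == "Number" then
    return { tag = "Number", value = node.lhs.value + node.rhs.value }
  end
  return node
end

local e = fold{ tag = "Op", op = "add",
                lhs = { tag = "Number", value = 2 },
                rhs = { tag = "Op", op = "add",
                        lhs = { tag = "Number", value = 3 },
                        rhs = { tag = "Number", value = 4 } } }
assert(e.tag == "Number" and e.value == 9)
```

The same ten lines, phrased as a transformation on token streams or source text, would have to re-implement half a parser.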

I've been thinking a lot about implementing the AST->bytecode transformation through properly reified ASM code. The dataflow would then be:

concrete syntax --> high level AST --> (high-level optimizations) --> ASM AST --> (local optimizations) --> (dump).
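To give an idea of what "reified ASM" could look like: each VM instruction becomes a table that later passes can inspect and rewrite. The node shape below is invented for this sketch (opcode names follow the Lua 5.1 VM), and the copy-propagation pass is deliberately naive (it ignores register redefinition; illustration only):

```lua
-- An ASM AST: one table per VM instruction.
local asm = {
  { op = "LOADK",  a = 0, bx = 1 },  -- R(0) := K(1)
  { op = "MOVE",   a = 1, b = 0 },   -- R(1) := R(0)
  { op = "MOVE",   a = 2, b = 1 },   -- R(2) := R(1)
  { op = "RETURN", a = 2, b = 2 },
}

-- A trivial local optimisation: collapse chains of MOVEs by
-- propagating each register's original source (naive: assumes no
-- register is redefined in between).
local function collapse_moves(code)
  local src = {}                     -- register -> original source
  for _, i in ipairs(code) do
    if i.op == "MOVE" then
      i.b = src[i.b] or i.b          -- copy-propagate
      src[i.a] = i.b
    end
  end
end

collapse_moves(asm)
assert(asm[3].b == 0)                -- R(2) now copied straight from R(0)
```

Once instructions are data like this, "(local optimizations)" in the dataflow above is just another table-to-table pass, exactly like the high-level ones.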

The bytecode generator in metalua is hackish and scheduled for a rewrite at some point, and this is one of the options I'm considering. The alternative would be to write this BC compilation stage in C, based on lparser.c. I tend to favor the latter solution, mainly due to manpower limitations, but they're not mutually exclusive...

> Still, an officially sanctioned VM assembler would have some appeal.

Definitely. On the other hand, for Lua's official development team, blessing something is not to be taken lightly: it implies some commitment to long-term compatibility, including with design errors. The main reason why Python is much messier than Lua is that they're tied by a lot of stuff they've previously blessed.