lua-users home
lua-l archive



I've thought along these lines as well. It's quite nice when you realize that once you compile your sources, you're done with the filtering and can use standard Lua to load the resulting binary chunks.
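That idea can be sketched in plain Lua (a minimal illustration, not anyone's actual toolchain; in practice you'd run luac once and ship the dumped file). `string.dump` and the chunk loaders are standard, though the loader is named `loadstring` in 5.1 and folded into `load` later:

```lua
-- Sketch: "compile" a source once, then load the binary chunk with stock Lua.
local loadfn = loadstring or load    -- 5.1 uses loadstring; 5.2+ use load

local src = "return 2 + 3"           -- imagine this already went through a token filter
local chunk = assert(loadfn(src))    -- parse the filtered source once
local binary = string.dump(chunk)    -- precompiled chunk; write this to disk in practice

-- A plain, unpatched Lua can run the dump -- no token filter needed at load time.
local f = assert(loadfn(binary))
print(f())                           --> 5
```

The point of the exercise: the filter's cost is paid once at compile time, and the deployed interpreter only ever sees bytecode.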

So what I'm looking at now is how to compile Lua sources into a format different from the one used by the system doing the compiling (to save resources in the target environment, where token filtering might take too many resources to be feasible). At least that's the case with the kind of token-filter toolset I'm looking at, where the complete BNF syntax is compiled and used as a base for new filtering rules to be compiled in at runtime. It's great, but I haven't finished the matching between the compiled BNF and the token stream yet.. ;) stay tuned

//A

Asko Kauppi wrote:

In a way, yes. But token filtering isn't as powerful as you make it out to be. It does not create bytecode; it simply converts tokens, in places. Like so:

source code --> llex.c tokens --> token filter(s) --> tokens to lparser.c --> bytecode

Currently, lparser.c still has big things to say about the program, such as what for loops are, etc. On the other hand, nothing prevents one from writing a 'compiler' for any conceivable syntax and simply using the Lua bytecode as its runtime. For me, the Lua parser suffices. :)
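The real token-filter patch does this streaming in C, between llex.c and lparser.c; as a purely hypothetical Lua sketch (the `filter` function and the `!=` token here are made up for illustration), a filter is just a map from tokens to tokens, with no bytecode in sight:

```lua
-- Hypothetical sketch of the "token filter" stage of the pipeline above.
-- Tokens are plain strings here; the real patch streams them out of llex.c.
local function filter(tokens)
  local out = {}
  for _, t in ipairs(tokens) do
    -- rewrite a made-up '!=' token into standard '~='; all others pass through
    out[#out + 1] = (t == "!=") and "~=" or t
  end
  return out
end

local filtered = filter({ "if", "a", "!=", "b", "then" })
print(table.concat(filtered, " "))  --> if a ~= b then
```

Everything past this point is still lparser.c's business, which is why a filter can reshape surface syntax but cannot invent new control structures.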

-asko


Mike Panetta wrote on 9.11.2006 at 22:52:

Isn't there already a 'core' here? Could you not call the VM the 'core'? I mean, all your token filters and such could be seen as compilers that output Lua binary chunks, and those chunks could just be loaded as-is. It would even be possible to write any arbitrary syntax you wanted and compile it down to Lua bytecode, would it not?

Mike

On 11/9/06, *Asko Kauppi* <askok@dnainternet.net> wrote:


    I did.  And started thinking something else, thinking oh boy the
    Brazilians will reduce soooo much.. ;)

    - object syntax
    - anything marked "syntax sugar" in the reference ;)

    But actually, I don't think the telescope should be held that way.
    Reason is, token filtering will always add to the loading time,
    even if the modified features aren't used by the script at all. That
    is bad, and the Lua authors value loading speed highly, which is great.

    So in my opinion, the benefits of a smaller core would be
    overshadowed by that cost, and smallness is not an end goal per se.

    In fact, I would expect to-be-famous token features to get adopted
    into the language, some day, once the use and syntax of them has
    stabilized (and more than 50% of Lua users are indeed using such
    tokenizers). Maybe, that day will come?

    -asko


    Gavin Wraith wrote on 9.11.2006 at 16:50:

    > Apologies if this email is short on detail. Token
    > filters are being seen as a way of modifying Lua's
    > syntax without interfering with the Lua core. Has
    > anybody thought about looking at them through the
    > other end of the telescope? That is to say, what
    > elements of standard Lua can be replaced by a
    > token-filter built on top of an even smaller core?
    >
    > --
    > Gavin Wraith (gavin@wra1th.plus.com)
    > Home page: http://www.wra1th.plus.com/