You might be right. I was merely expecting that something an order of magnitude or so slower than the regular parser (I did run tests with the Lua implementation I had about a year ago) would never be seriously considered as an alternative to the built-in parser. Maybe I've been wrong, and should have given that version more publicity.

Anyways, it now seems best to just get the code out and let people play with it if they want. Our approaches are different, and maybe all of this is good for the next phase of Lua development.

-asko



On 5.11.2007, at 1:12, Fabien wrote:

On 11/4/07, Asko Kauppi <askok@dnainternet.net> wrote:
> Could the syntax mod descriptions be written in Lua, using Lua as a
> data description language, but processed in C/C++ (like what LPeg does),
> while still maintaining performance?  Those writing code in Lua must have
> already accepted the performance of Lua.

> I've had a working Lua setup for this for about a year, but the
> parsing performance was never good enough to take it "seriously".
> Then someone asked why the syntax extensions couldn't be made in C,
> and of course they can. I don't see much added value in making
> syntax mods in Lua; the C code does have access to the Lua state,
> however, which is important for certain kinds of mods (macros etc.).
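
(To make the "described in Lua, processed in C, like what LPeg does" idea concrete: with LPeg the grammar is written as ordinary Lua data while the matching machine runs in C. The sketch below is purely illustrative and not code from either poster; it assumes the lpeg module is installed, and the grammar and names are made up for the example.)

    -- Illustrative only: a tiny grammar for sums like "1 + 2 - 3".
    local lpeg = require "lpeg"
    local P, R, S, V, C = lpeg.P, lpeg.R, lpeg.S, lpeg.V, lpeg.C

    local space  = S(" \t\n")^0
    local digits = C(R("09")^1) / tonumber   -- capture digits, convert in Lua
    local number = space * digits * space
    local addop  = C(S("+-"))

    -- The grammar is plain Lua data (a table passed to lpeg.P),
    -- but the matching loop itself runs in C inside LPeg.
    local sum = P{
      "Sum",
      Sum   = V"Value" * (addop * V"Value")^0,
      Value = number,
    }

    print(sum:match("1 + 2 - 3"))   --> prints the captures: 1  +  2  -  3

The point of that style is that the description lives in Lua, so it can be generated and manipulated by Lua code, while the per-character work stays out of the interpreter.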

* What are the actual cases where you found Lua-based parsing to be too slow?

* Is this unacceptable lack of parsing speed observed in general circumstances (several metaprogramming tools, several kinds of extensions, used on several target programs), or has it merely been observed while using token filters for something they aren't designed for (reshaping/extending the syntax in depth)?

* In these problematic cases, why wasn't pre-compilation a sensible option?

* Most importantly, if you can't see a substantial productivity gain from working in Lua rather than C, why are you using Lua at all? Unless you're sticking to surface syntax mods that would be addressed very adequately by token filters, compilation is the poster child of problems that benefit from high-level language features. That's why most of the ICFP contest challenges are about compilation...
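
(For concreteness, the "surface syntax mods" meant here are purely token-level conveniences, e.g. accepting != as a spelling of ~=. The sketch below is a deliberately naive illustration, not the actual token-filter patch interface: a real filter operates on the token stream, whereas this textual rewrite would also mangle string literals. The function name is made up for the example.)

    -- Naive illustration of a "surface" syntax mod: accept != for ~=.
    -- NOT the token-filter API; a real filter rewrites tokens, so it
    -- would leave string literals such as "a != b" untouched.
    local function load_with_neq(source, chunkname)
      local rewritten = source:gsub("!=", "~=")
      return load(rewritten, chunkname)   -- loadstring() on Lua 5.1
    end

    local f = assert(load_with_neq("return 1 != 2"))
    print(f())   --> true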

> parsing performance was never good enough to take it "seriously".

I would bet that what isn't taken seriously isn't the parsing performance (who still cares about that on multicore platforms? Even when targeting embedded devices, your development platform has several times more horsepower than required). Fiddling with "skin-deep" syntax does more harm than good 99% of the time: speaking the same language as your fellow developers is far more important than having your "end" keywords replaced by braces or significant indentation, or being able to increment with "++". So not only will developers refuse additional clutter in their compilation chain to handle such details, they'll also flee a platform that neglects human interoperability at such a hard-to-get-wrong level.

Code maintenance is mainly about guessing what your predecessors meant while coding, so gratuitous idiosyncrasies are a time bomb you leave to your successors. New syntaxes are only legitimate when they support new ways of thinking. If you don't feel like they deserve a chapter of conceptual explanations, they're probably not worth it.