* What are the actual cases where you found Lua-based parsing to be too slow?
* Is this unacceptable parsing speed something you've observed across the board (several metaprogramming tools, several kinds of extensions, several target programs), or only while using token filters for something they aren't designed for (reshaping or extending the syntax in depth)?
* In these problematic cases, why wasn't pre-compilation a sensible option?
* Most importantly, if you can't see a substantial productivity gain from working in Lua rather than C, why use Lua at all? Unless you're sticking to surface-syntax mods that token filters would address very adequately, compilation is the poster child of problems that benefit from high-level language features. That's why most ICFP contest challenges are about compilation...
> parsing performance was never good enough to take it "seriously".
I would bet that what isn't taken seriously isn't the parsing performance. Who still cares about that on multicore platforms? Even when targeting embedded devices, your development platform has several times more horsepower than required.

Fiddling with "skin-deep" syntax does more harm than good 99% of the time: speaking the same language as your fellow developers matters far more than having your "end" keywords replaced by braces or significant indentation, or being able to increment with "++". So not only will developers refuse extra clutter in their compilation chain to handle such details, they'll also flee a platform that neglects human interoperability at such a hard-to-get-wrong level.
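To make the "skin-deep" point concrete, here is a minimal sketch of what such a surface tweak amounts to: expanding "x++" into "x = x + 1" by rewriting a stream of tokens. This is purely illustrative and does not use the actual token-filter patch API; the token representation ({type=..., value=...}) and the function name are assumptions for the example only.

-- Hypothetical illustration: expand NAME "++" into NAME "=" NAME "+" "1"
-- over a plain list of tokens. Not the real token-filter interface.
local function rewrite_increment(tokens)
  local out = {}
  local i = 1
  while i <= #tokens do
    local t, nxt = tokens[i], tokens[i + 1]
    if t.type == "name" and nxt and nxt.type == "op" and nxt.value == "++" then
      out[#out + 1] = { type = "name",   value = t.value }
      out[#out + 1] = { type = "op",     value = "=" }
      out[#out + 1] = { type = "name",   value = t.value }
      out[#out + 1] = { type = "op",     value = "+" }
      out[#out + 1] = { type = "number", value = "1" }
      i = i + 2
    else
      out[#out + 1] = t
      i = i + 1
    end
  end
  return out
end

-- The token stream for "counter++" becomes "counter = counter + 1".
local toks = {
  { type = "name", value = "counter" },
  { type = "op",   value = "++" },
}
for _, t in ipairs(rewrite_increment(toks)) do io.write(t.value, " ") end
print()

That's the whole trick: a few lines of token shuffling, no new concepts, and exactly the kind of thing that buys you nothing while making your source unreadable to the next person.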
Code maintenance is mainly about guessing what your predecessors meant while coding, so gratuitous idiosyncrasies are a time bomb you leave to your successors. New syntaxes are only legitimate when they support new ways of thinking; if you don't feel they deserve a chapter of conceptual explanation, they're probably not worth it.