- Subject: Re: parser : which library ?
- From: Miles Bader <miles@...>
- Date: Tue, 31 May 2011 13:15:10 +0900
Wesley Smith <wesley.hoke@gmail.com> writes:
> I would highly recommend using lpeglist [1] for writing parsers,
> because it lets you divide the lexer and parser into two steps, as
> it should be: the lexer generates an array of tokens, and the parser
> runs over the tokens to build the AST.
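(For concreteness, I read that as roughly the two-pass sketch below:
plain LPeg for the lexer plus a hand-rolled loop for the parser, since
I haven't looked at lpeglist's actual API; the token shapes and the toy
"numbers and operators" language are made up purely for illustration.)

  local lpeg = require "lpeg"
  local R, S, C, Cc, Ct = lpeg.R, lpeg.S, lpeg.C, lpeg.Cc, lpeg.Ct

  -- pass 1: the lexer turns the source string into an array of
  -- {kind, text} tokens, skipping whitespace between them
  local ws = S" \t\n"^0
  local function tok (kind, patt) return Ct(Cc(kind) * C(patt)) * ws end
  local lexer = Ct(ws * (tok("num", R"09"^1)
                       + tok("op", S"+-*/"))^0) * -1

  -- pass 2: the parser walks the token array and builds an AST
  -- (no operator precedence, just to show the shape of the split)
  local function parse (toks)
    assert(toks[1] and toks[1][1] == "num", "expected a number")
    local node, i = { "num", toks[1][2] }, 2
    while toks[i] and toks[i][1] == "op" do
      local rhs = toks[i + 1]
      assert(rhs and rhs[1] == "num", "expected a number after operator")
      node, i = { toks[i][2], node, { "num", rhs[2] } }, i + 2
    end
    return node
  end

  local ast = parse(lexer:match("1 + 2 * 3"))
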
Hmm, but why would you _want_ to do that?
The "LPEG way", of using a single grammar all the way down to the
character level, seems to work great, and in my experience is simpler
and less annoying than "traditional way" (separate lexer/parser)...
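By that I mean something like the toy sketch below (an arithmetic
grammar I'm making up here just for illustration, not code from any
real project): the whitespace and "token" patterns live inside the
same grammar that builds the result, so there is never a separate
token array to manage.

  local lpeg = require "lpeg"
  local P, R, S, C, Ct, V =
    lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Ct, lpeg.V

  local ws     = S" \t\n"^0         -- whitespace is handled inline,
  local number = C(R"09"^1) * ws    -- right next to the "tokens"
  local addop  = C(S"+-") * ws
  local mulop  = C(S"*/") * ws
  local open   = P"(" * ws
  local close  = P")" * ws

  local expr = P{ "Expr",
    Expr   = Ct(V"Term" * (addop * V"Term")^0),
    Term   = Ct(V"Factor" * (mulop * V"Factor")^0),
    Factor = number + open * V"Expr" * close,
  }

  -- one pattern, straight from source text to a nested-table AST
  local ast = (ws * expr * -1):match("1 + 2 * (3 - 4)")

The captures come back already nested the way the grammar is nested,
so the "AST building" falls out of the same rules that do the matching.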
[What _is_ the reason for the traditional split anyway? I gather that
part of it was speed, but I haven't found LPEG to be much of a
bottleneck, even with gigantic input files (hundreds of megabytes), so
maybe that justification isn't as relevant these days....]
-Miles
--
o The existentialist, not having a pillow, goes everywhere with the book by
Sullivan, _I am going to spit on your graves_.