I use the following pattern:
local function Assert(patt, ctx)
  return patt + function(s, i)
    -- patt failed to match at position i: report where and abort the parse.
    -- (error() never returns, so no result value is needed here.)
    local line, col = linecol(s, i)
    error(string.format("parsing error: %s at %d:%d", ctx, line, col))
  end
end

Rule = rule1 * rule2 * Assert(rule3, "Rule.rule3")
The idea is that if part of a rule fails to match, you throw an error at that point, passing along helpful info for diagnosing the problem. The ctx argument serves as a marker for where in the grammar tree the error was thrown, and you can map these markers to more verbose error messages. For example, rule3 above might be the semicolon at the end of an _expression_ statement; if the semicolon is missing, an error is thrown.
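The Assert pattern above relies on a linecol helper that isn't shown. A minimal sketch in plain Lua (the name and signature are assumptions, not part of LPEG) might count newlines up to the failure position:

```lua
-- Hypothetical helper: compute the 1-based line and column of byte
-- position i in string s by counting the newlines before it.
local function linecol(s, i)
  local line, lastnl = 1, 0
  -- "()" captures the position of each newline found
  for pos in s:sub(1, i - 1):gmatch("()\n") do
    line = line + 1
    lastnl = pos
  end
  return line, i - lastnl
end
```

For long inputs you would want to cache the newline positions rather than rescan the string on every error, but errors should be rare, so the simple version is usually fine.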
The other technique I use is to trace the rule stack as the parser moves through the grammar hierarchy. You can do this by adding functions to the grammar that track which rules have been matched. It's quite a bit of work to do this by hand; having it done automatically requires abstracting LPEG so that you're not working with it directly but through higher-level concepts that know how to distribute the rule-tracing functions among the grammar rules.
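Done by hand, the tracing might look like the sketch below: a hypothetical Trace wrapper (my name, not an LPEG primitive) that logs when a rule is entered, via a match-time function pattern, and when it succeeds, via lpeg.Cmt:

```lua
local lpeg = require("lpeg")

-- Hypothetical wrapper: log entry into a rule and report whether it
-- matched. A function pattern is tried at the current position (returning
-- i means "match nothing, continue"); Cmt runs after patt succeeds.
local function Trace(name, patt)
  local enter = lpeg.P(function(s, i)
    print(("enter %s at %d"):format(name, i))
    return i
  end)
  return enter * lpeg.Cmt(patt, function(s, i, ...)
    print(("matched %s up to %d"):format(name, i))
    return i, ...
  end)
end

-- Usage: wrap individual rules by hand, e.g.
-- number = Trace("number", lpeg.R("09")^1)
```

Wrapping every rule of a real grammar this way is exactly the tedium mentioned above, which is why automating it through a layer over LPEG pays off.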
You can find both of these techniques in my DSL Lua module. DSL does both token and rule tracing if you want it, and tracing can always be disabled.
wes