Lately, I've been diving deep into the world of parsers and tokenizers, attempting to rebuild Stata from scratch in JavaScript using Chevrotain. It's been an exciting yet incredibly frustrating journey, and I've hit a roadblock that has me questioning my life choices.
The Problem: Context-Sensitive Keywords.
In Stata, it’s perfectly acceptable for command names to double as variable names. Consider this simple example:
generate generate = 2
Here, the first generate is a command, while the second is simply a variable name. This kind of flexibility is intuitive for us humans, but it can turn into a real headache for a parser.
In my initial implementation, I ran into two major issues:
- My token definitions were matching command keywords like generate indiscriminately, regardless of context.
- The command tokens were checked before identifier tokens in my lexer's order of precedence.
This created an impossible situation where the parser couldn't distinguish between commands and identifiers with the same name.
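To make the failure mode concrete, here is a minimal sketch of the kind of setup that produces it (the token names and patterns are illustrative assumptions, not my actual grammar):

```js
const { createToken, Lexer } = require("chevrotain");

// Because Generate appears before Identifier in the token list, every
// occurrence of "generate" is lexed as a command token.
const Generate = createToken({ name: "Generate", pattern: /generate/ });
const Identifier = createToken({ name: "Identifier", pattern: /[a-zA-Z_]\w*/ });
const Equals = createToken({ name: "Equals", pattern: /=/ });
const NumberLiteral = createToken({ name: "NumberLiteral", pattern: /\d+/ });
const WhiteSpace = createToken({
  name: "WhiteSpace",
  pattern: /\s+/,
  group: Lexer.SKIPPED,
});

const allTokens = [WhiteSpace, Generate, Identifier, Equals, NumberLiteral];
const StataLexer = new Lexer(allTokens);

// "generate generate = 2" tokenizes as Generate, Generate, Equals, NumberLiteral,
// so the variable name on the left of the "=" never becomes an Identifier.
console.log(
  StataLexer.tokenize("generate generate = 2").tokens.map((t) => t.tokenType.name)
);
```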
I'm Not Alone
After some digging through parsing forums and documentation, I discovered that this isn’t an isolated problem. Many language implementers have wrestled with context-sensitive keywords. The common solutions often involve elaborate workarounds ranging from context-tracking mechanisms to entirely separate parsing passes.
Some Suggested Solutions:
Token Precedence
Unfortunately, with my current Chevrotain setup, reordering token precedence isn't feasible, because Chevrotain is a parser building library for JavaScript that uses LL(k) parsing techniques.
LL(k) parsing techniques
- Top-down parser: starts with the start symbol on the stack and repeatedly replaces nonterminals until the string is generated.
- Predictive parser: predicts the next rewrite rule.
- The first L in LL means the input string is read left to right.
- The second L means it produces a leftmost derivation.
- k is the number of lookahead symbols.
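As a toy illustration of a leftmost derivation, here is a simplified grammar (my own invention, not real Stata syntax) and the derivation of the example line; at each step the leftmost nonterminal is rewritten, and the parser chooses which rule to apply by peeking at the next k tokens:

```
Grammar:
  stmt → "generate" name "=" NUMBER
  name → "generate" | IDENTIFIER

Leftmost derivation of "generate generate = 2":
  stmt ⇒ "generate" name "=" NUMBER         (expand stmt)
       ⇒ "generate" "generate" "=" NUMBER   (expand name, the leftmost nonterminal)
```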
Chevrotain provides tools for building a recursive-descent parser with full control over grammar definitions. This doesn't mean it can't handle token precedence; it definitely can, but it requires manual implementation, which involves considerable effort.
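One manual route, sketched below under my own assumptions (it reuses the token definitions and allTokens array from the earlier lexer sketch, and is not a drop-in solution), is to write the parser rule for a variable name so that it explicitly accepts the command token as well:

```js
const { CstParser } = require("chevrotain");

// Assumes Generate, Identifier, Equals, NumberLiteral, allTokens and StataLexer
// from the earlier sketch.
class StataParser extends CstParser {
  constructor() {
    super(allTokens);
    const $ = this;

    // generate <varname> = <number>
    $.RULE("generateCmd", () => {
      $.CONSUME(Generate);
      $.SUBRULE($.varName);
      $.CONSUME(Equals);
      $.CONSUME(NumberLiteral);
    });

    // A variable name is either a plain identifier or a keyword reused as a name.
    $.RULE("varName", () => {
      $.OR([
        { ALT: () => $.CONSUME(Identifier) },
        { ALT: () => $.CONSUME(Generate) },
      ]);
    });

    this.performSelfAnalysis();
  }
}

// Usage sketch:
// const parser = new StataParser();
// parser.input = StataLexer.tokenize("generate generate = 2").tokens;
// parser.generateCmd();
```

The downside is that every rule that accepts a variable name has to list every command keyword explicitly, which is exactly the kind of manual effort mentioned above.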
Contextual Parsing
Chevrotain has built-in support for making tokens contextual, most notably token categories: we could define our command keywords so that they also count as identifier-like tokens, and treat them as commands only in certain parsing rules.
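A rough sketch of how that could look with token categories (the names here are my assumptions): the keyword and plain identifiers share an abstract category, and the parser consumes the category wherever a variable name is allowed.

```js
const { createToken, Lexer, CstParser } = require("chevrotain");

// Abstract category: anything that may appear where a variable name is expected.
const NameLike = createToken({ name: "NameLike", pattern: Lexer.NA });

const Identifier = createToken({
  name: "Identifier",
  pattern: /[a-zA-Z_]\w*/,
  categories: [NameLike],
});

// The keyword stays its own token type but also matches as a NameLike.
// longer_alt keeps it from swallowing the prefix of names like "generated".
const GenerateKw = createToken({
  name: "GenerateKw",
  pattern: /generate/,
  longer_alt: Identifier,
  categories: [NameLike],
});

const Equals = createToken({ name: "Equals", pattern: /=/ });
const NumberLiteral = createToken({ name: "NumberLiteral", pattern: /\d+/ });
const WhiteSpace = createToken({ name: "WhiteSpace", pattern: /\s+/, group: Lexer.SKIPPED });

const tokens = [WhiteSpace, GenerateKw, Identifier, Equals, NumberLiteral, NameLike];
const lexer = new Lexer(tokens);

class CategoryParser extends CstParser {
  constructor() {
    super(tokens);
    const $ = this;

    $.RULE("generateCmd", () => {
      $.CONSUME(GenerateKw);   // command position: must be the keyword
      $.CONSUME(NameLike);     // name position: keyword or identifier both match
      $.CONSUME(Equals);
      $.CONSUME(NumberLiteral);
    });

    this.performSelfAnalysis();
  }
}

const parser = new CategoryParser();
parser.input = lexer.tokenize("generate generate = 2").tokens;
parser.generateCmd();
console.log(parser.errors); // should be empty if the command parsed cleanly
```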
Post-Lexer Transformation
This is an interesting one: a post-lexing pass examines the token stream and reclassifies command tokens as identifiers based on their syntactic position.
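Here is a rough sketch of what such a pass might look like. It assumes the token types from the first lexer sketch, plus a hypothetical Newline token that separates statements, and it re-tags command keywords in place; whether in-place re-tagging interacts cleanly with Chevrotain's token matching is something I would still need to verify.

```js
// Re-tag command keywords that are not at the start of a statement.
// commandTypes: array of keyword TokenTypes (e.g. [Generate]);
// Identifier: the token type to fall back to; Newline: statement separator (assumed).
function reclassifyCommands(tokens, commandTypes, Identifier, Newline) {
  let atStatementStart = true;
  for (const tok of tokens) {
    if (commandTypes.includes(tok.tokenType) && !atStatementStart) {
      // tokenTypeIdx is what Chevrotain uses for fast matching, so update both fields.
      tok.tokenType = Identifier;
      tok.tokenTypeIdx = Identifier.tokenTypeIdx;
    }
    // Simplification: a new statement starts after every line break.
    atStatementStart = tok.tokenType === Newline;
  }
  return tokens;
}

// Usage sketch:
// const lexed = StataLexer.tokenize(sourceCode);
// parser.input = reclassifyCommands(lexed.tokens, [Generate], Identifier, Newline);
```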
Each of these methods has its merits and its tradeoffs. If anyone has tackled this issue from a different angle or has additional suggestions, I’d love to hear your thoughts.