My scanner works fine, but I couldn't find what's wrong with my parser:
semi: "{" vallist "}"
| "{" "}""
;
val: tSTR
| tInt
;
vallist: vallist , val
| val
;
You have a number of problems, some of which are probably just typos in your copy-paste (what you have above will be rejected by bison).
Your main problem is probably using " (double quotes) for your tokens, which for the most part doesn't do anything useful -- it creates a 'new' token that is not the same as the single character token your lexer probably returns.
Instead, you want to use ' (single quotes):
semi: '{' vallist '}'
| '{' '}'
;
val: tSTR
| tInt
| semi
;
vallist: vallist ',' val
| val
;
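For reference, '{' in a bison grammar stands for the character code of { itself, which is what a typical flex scanner returns for punctuation. A minimal flex fragment along those lines (an assumed sketch, not taken from the question) might be:
"{"|"}"|","     { return yytext[0]; /* punctuation: hand back the character itself */ }
[ \t\n]+        { /* skip whitespace between tokens */ }
Any other rules (for tSTR, tInt, and so on) would return their named token values as usual.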
In all the AST examples I have seen, there are productions of the following kind:
expr -> expr "+" expr;
expr -> expr "-" expr;
And in this case it's easy to create a new node like this:
expr: expr "+" expr {newNode("+",$1,$3);}
;
Now my grammar has the following implementation:
assignment:IDENTIFIER '=' expression ';'
;
expression:term expression_1
;
expression_1: '+' term expression_1 |
'-' term expression_1 |
;
term: factor term_1
;
term_1: '*' factor term_1 |
'/' factor term_1 |
;
factor: IDENTIFIER |
'(' expression ')' |
NUM | FNUM | STRING
;
Here, when making a new node, how do I take the first operand (which is in a previous production) and feed it into a newNode call together with the operator and the second operand (both of which sit in a different production)?
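One common way to handle this in bison (a sketch, not the poster's code, assuming newNode(op, left, right) builds an AST node) is to fold the expression/expression_1 pair back into bison's natural left-recursive form, so the left operand, the operator, and the right operand all appear in one production:
expression: expression '+' term   { $$ = newNode("+", $1, $3); }
          | expression '-' term   { $$ = newNode("-", $1, $3); }
          | term
          ;
term:       term '*' factor       { $$ = newNode("*", $1, $3); }
          | term '/' factor       { $$ = newNode("/", $1, $3); }
          | factor
          ;
Bison handles left recursion efficiently (unlike a hand-written recursive-descent parser), and the left operand is then simply $1 of the same rule.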
I am reading a compiler tutorial at www.buildyourownlisp.com. It uses a parser combinator library called mpc. What I have at the moment will parse Polish notation, but I'm trying to work out how to use standard notation with it; I just can't seem to work out how to do it.
The rules of the parser are as follows:
. Any character is required.
a The character a is required.
[abcdef] Any character in the set abcdef is required.
[a-f] Any character in the range a to f is required.
a? The character a is optional.
a* Zero or more of the character a are required.
a+ One or more of the character a are required.
^ The start of input is required.
$ The end of input is required.
"ab" The string ab is required.
'a' The character a is required.
'a' 'b' First 'a' is required, then 'b' is required.
'a' | 'b' Either 'a' is required, or 'b' is required.
'a'* Zero or more 'a' are required.
'a'+ One or more 'a' are required.
<abba> The rule called abba is required.
The grammar for Polish notation is written like this:
" \
number: /-?[0-9]+/'.'?/[0-9]+/ ; \
operator: '+' | '-' | '*' | '/' | '%' | \"add\" | \"sub\" | \"mul\" | \"div\" ; \
expr: <number> | '(' <operator> <expr>+ ')' ; \
dlispy: /^/ <operator> <expr>+ /$/ ;",
I've managed to make it accept decimal numbers by adding '.'?/[0-9]+/, but I can't work out how to restructure it to accept standard notation, e.g. 2*(3+2) instead of *2 (+ 3 2). I know that I'll have to rewrite the expr and dlispy rules, but I'm new to regex and BNF. Hope you can help, thanks.
Written as yacc rules, that would be:
expr : '(' expr ')'
| expr operator expr
| number
;
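As written, expr operator expr is ambiguous, so a real yacc/bison grammar would need precedence declarations with the operators spelled out directly in the rules, roughly like this (a sketch, assuming the scanner returns a NUMBER token for numeric literals):
%token NUMBER
%left '+' '-'      /* lowest precedence, left-associative */
%left '*' '/'      /* binds tighter than + and - */
%%
expr : expr '+' expr
     | expr '-' expr
     | expr '*' expr
     | expr '/' expr
     | '(' expr ')'
     | NUMBER
     ;
Note that mpc is a recursive-descent combinator library, so if you translate this back into its grammar language you would typically layer the rules by precedence (expr built from terms, terms from factors) rather than use left recursion.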
How do I enable a start condition at the beginning of a rule and disable it at the end? I have to ignore whitespace with some bison rules only.
How do I ignore whitespace inside nested brackets?
define_directive:
DEFINE '(' class_name ')'{ ... }
;
I'm trying to write a parser for this sample code with some more rules.
#/*
* #Template Family
* #Description sample script template for Mate Programming language
* (multi-line comment)
*/
#namespace(sample)
#require(String fatherName)
#require(String motherName)
#require(Array childrenNames)
#define(Family : Template) #// end of header anything can go in body section below (comment)
Family Description
==================
Father's Name: #(fatherName)
Mother's Name: #(motherName)
Number of child: #(childrenNamesCount,0) #// valuation operator is null safe (comment)
List of children's names
------------------------
#foreach(childName:childrenNames)
> #(childName)
#empty
> there is no child name to display.
#end
##(varName) #// this should not be interpreted because escaped with # (comment)
The lexer and parser are partially implemented. My problem is how to deal with whitespace inside statement keywords like #foreach and #require.
Whitespace should be ignored for these.
Desired sample output:
Family Description
==================
Father's Name: Mira
Mother's Name: James
Number of child: 0
List of children's names
------------------------
> there is no child name to display.
##(varName)
Bison file content:
command:
fileword
| valuation
| alternative
| loop
| command_directive
;
fileword:
tokenword { scriptlangy_echo(yytext,"fileword.tokenword"); }
| MAGICESC { scriptlangy_echo("#","fileword.MAGICESC"); }
;
tokenword:
IDENTIFIER | NUMBER | STRING_LITERAL | WHITESPACE
| INC_OP | DEC_OP | AND_OP | OR_OP | LE_OP | GE_OP | EQ_OP | NE_OP | L_OP | G_OP
| ';' | ',' | ':' | '=' | ']' | '.' | '&' | '[' | '!' | '~' | '-' | '+' | '*' | '/' | '%' | '^' | '|' | ')' | '}' | '?' | '{' | '('
;
valuation:
'#' '(' expression ')' {
fprintf(yyout, "<val>");
}
| '#' '(' expression ',' default_value ')' {
fprintf(yyout, "<val>");
}
;
loop:
for_loop
| foreach_loop
| while_loop
;
while_loop:
WHILE '(' expression ')' end_block
| WHILE '(' expression ')' commands end_block
;
for_loop:
FOR '(' expression_statement expression_statement expression ')' end_block
| FOR '(' expression_statement expression_statement expression ')' commands end_block
;
foreach_loop:
foreach_block end_block
| foreach_block empty_block end_block
;
foreach_block:
FOREACH '(' IDENTIFIER ')'
| FOREACH '(' IDENTIFIER ':' expression ')' commands
;
The key part of your question seems to be this:
I have to ignore whitespace with some bison rules only. How do I ignore
whitespace inside nested brackets?
As I remarked in comments, your implementation idea of somehow doing this by having your parser rules manipulate scanner start conditions is pretty much a non-starter. Forget about that.
Since evidently your scanner does not, in general, ignore whitespace, it must emit tokens that represent whitespace, or perhaps tokens that represent something else plus whitespace (ugly). If it emits whitespace tokens then the thing to do is simply to account for them in your grammar rules. This is completely possible. In fact, you can build a parser for any context-free language on top of a scanner that just returns every character as its own token. The scanner / parser dichotomy is a functional and conceptual convenience, not a necessity.
For example, then, suppose we want to be able to parse numeric array literals, formed as a nonempty, comma-delimited list of decimal numbers enclosed in curly braces, with optional whitespace around commas and inside the braces. Suppose further that we have these terminal symbols to work with:
OPEN // open brace
CLOSE // close brace
NUM // maximal sequence of one or more decimal digits
COMMA // a comma
WS // a maximal run of whitespace
We might then write these rules:
array: array_start array_elements CLOSE;
array_start: OPEN
| OPEN WS
;
array_elements: array_element
| array_elements array_separator array_element
;
array_element: NUM
| NUM WS
;
array_separator: COMMA
| COMMA WS
;
There are, of course, many other ways to set up the details, but, generally speaking, this is how you handle whitespace with parser rules: not by ignoring it, but by accepting it.
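For instance, one common refinement of the rules above (a sketch over the same terminal set, with array_elements unchanged) is to factor the optional trailing whitespace into a single nonterminal so each rule only mentions it once:
opt_ws: %empty
      | WS
      ;
array: OPEN opt_ws array_elements CLOSE ;
array_element: NUM opt_ws ;
array_separator: COMMA opt_ws ;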
I'm having trouble fixing a shift/reduce conflict in my grammar. I added -v to read the conflict report, and it points me to State 0, where INT and FLOAT are reduced to variable_definitions by rule 9. I cannot see the conflict and I'm having trouble finding a solution.
%{
#include <stdio.h>
#include <stdlib.h>
%}
%token INT FLOAT
%token ADDOP MULOP INCOP
%token WHILE IF ELSE RETURN
%token NUM ID
%token INCLUDE
%token STREAMIN ENDL STREAMOUT
%token CIN COUT
%token NOT
%token FLT_LITERAL INT_LITERAL STR_LITERAL
%right ASSIGNOP
%left AND OR
%left RELOP
%%
program: variable_definitions
| function_definitions
;
function_definitions: function_head block
| function_definitions function_head block
;
identifier_list: ID
| ID '[' INT_LITERAL ']'
| identifier_list ',' ID
| identifier_list ',' ID '[' INT_LITERAL ']'
;
variable_definitions:
| variable_definitions type identifier_list ';'
;
type: INT
| FLOAT
;
function_head: type ID arguments
;
arguments: '('parameter_list')'
;
parameter_list:
|parameters
;
parameters: type ID
| type ID '['']'
| parameters ',' type ID
| parameters ',' type ID '['']'
;
block: '{'variable_definitions statements'}'
;
statements:
| statements statement
;
statement: expression ';'
| compound_statement
| RETURN expression ';'
| IF '('bool_expression')' statement ELSE statement
| WHILE '('bool_expression')' statement
| input_statement ';'
| output_statement ';'
;
input_statement: CIN
| input_statement STREAMIN variable
;
output_statement: COUT
| output_statement STREAMOUT expression
| output_statement STREAMOUT STR_LITERAL
| output_statement STREAMOUT ENDL
;
compound_statement: '{'statements'}'
;
variable: ID
| ID '['expression']'
;
expression_list:
| expressions
;
expressions: expression
| expressions ',' expression
;
expression: variable ASSIGNOP expression
| variable INCOP expression
| simple_expression
;
simple_expression: term
| ADDOP term
| simple_expression ADDOP term
;
term: factor
| term MULOP factor
;
factor: ID
| ID '('expression_list')'
| literal
| '('expression')'
| ID '['expression']'
;
literal: INT_LITERAL
| FLT_LITERAL
;
bool_expression: bool_term
| bool_expression OR bool_term
;
bool_term: bool_factor
| bool_term AND bool_factor
;
bool_factor: NOT bool_factor
| '('bool_expression')'
| simple_expression RELOP simple_expression
;
%%
Your definition of a program is that it is either a list of variable definitions or a list of function definitions (program: variable_definitions | function_definitions;). That seems a bit odd to me. What if I want to define both a function and a variable? Do I have to write two programs and somehow link them together?
This is not the cause of your problem, but fixing it would probably fix the problem as well. The immediate cause is that function_definitions is one or more function definition while variable_definitions is zero or more variable definitions. In other words, the base case of the function_definitions recursion is a function definition, while the base case of variable_definitions is the empty sequence. So a list of variable definitions starts with an empty sequence.
But both function definitions and variable definitions start with a type. So if the first token of a program is int, it could be the start of a function definition with return type int or a variable definition of type int. In the former case, the parser should shift the int in order to produce the function_definitions base case; in the latter case, it should immediately reduce an empty variable_definitions base case.
If you really wanted a program to be either function definitions or variable definitions, but not both, you would need to make variable_definitions have the same form as function_definitions, by changing the base case from empty to type identifier_list ';'. Then you could add an empty production to program so that the parser could recognize empty inputs.
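A sketch of that alternative, keeping the rest of the grammar as posted:
variable_definitions: type identifier_list ';'
                    | variable_definitions type identifier_list ';'
                    ;
program: %empty
       | variable_definitions
       | function_definitions
       ;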
But as I said at the beginning, you probably want a program to be a sequence of definitions, each of which could either be a variable or a function:
program: %empty
| program type identifier_list ';'
| program function_head block
;
By the way, you are misreading the output file produced by -v. It shows the following actions for State 0:
INT shift, and go to state 1
FLOAT shift, and go to state 2
INT [reduce using rule 9 (variable_definitions)]
FLOAT [reduce using rule 9 (variable_definitions)]
Here, INT and FLOAT are possible lookaheads. So the interpretation of the line INT [reduce using rule 9 (variable_definitions)] is "if the lookahead is INT, immediately reduce using production 9". Production 9 produces the empty sequence, so the reduction reduces zero tokens at the top of the parser stack into a variable_definitions. Reductions do not use the lookahead token, so after the reduction, the lookahead token is still INT.
However, the parser doesn't actually do that, because it has a different action for INT, which is to shift it and go to state 1, as indicated by the first line. The brackets [...] indicate that the bracketed action is not taken because it is in conflict, and the conflict was resolved in favor of some other action. So the more accurate interpretation of that line is "if it weren't for the preceding action on INT, the lookahead INT would cause a reduction using rule 9."
To solve the dangling-else problem, I used the following solution:
stmt : stmt_matched
| stmt_unmatched
;
stmt_unmatched : IF '(' exp ')' stmt
| IF '(' exp ')' stmt_matched ELSE stmt_unmatched
;
stmt_matched : IF '(' exp ')' stmt_matched ELSE stmt_matched
| stmt_for
| ...
;
When defining the grammar rules for the for loop, I get a shift/reduce conflict due to the same problem:
stmt_for : FOR '(' exp ';' exp ';' exp ')' stmt
;
How can I solve this problem?
Not all for statements are matched. Consider, for example
if (c) for (;;) if (d) ; else ;
So it is necessary to divide for statements into for_matched and for_unmatched. (And similarly with other compound statements such as while.)
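Concretely, a sketch following the same matched/unmatched split as above:
stmt_matched   : IF '(' exp ')' stmt_matched ELSE stmt_matched
               | for_matched
               | ...
               ;
stmt_unmatched : IF '(' exp ')' stmt
               | IF '(' exp ')' stmt_matched ELSE stmt_unmatched
               | for_unmatched
               ;
for_matched    : FOR '(' exp ';' exp ';' exp ')' stmt_matched
               ;
for_unmatched  : FOR '(' exp ';' exp ';' exp ')' stmt_unmatched
               ;
A for statement is matched exactly when its body is matched, since a trailing ELSE could only attach inside the body.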