I am trying to write a minipython compiler, and as we know, Python uses indentation to define a block. In my case, I defined a block as exactly 4 spaces, but when I want to create a block with multiple lines, I am told there is a shift/reduce conflict. I think I know where the problem is: the parser doesn't know whether to treat the second line's indentation as spaces or as a tab, but I am not sure, so here is my code:
lexical :
" " {column = column + strlen(yytext); return mc_tab;}
" " {column = column + strlen(yytext);}
syntactic :
S : INSTRUCTION S
| {printf("\nWINNER WINNER CHICKEN DINNER\n");YYACCEPT;}
;
INSTRUCTION : DECL mc_jmp | LOOP | COND | mc_jmp
;
COND : IF ELSE
;
IF : mc_if mc_opnArc COMPARISION mc_clsArc mc_dblpnt mc_jmp BEGIN_BLOCK
;
ELSE : mc_else mc_dblpnt BEGIN_BLOCK |
BEGIN_BLOCK : mc_tab INSTRUCTION BEGIN_BLOCK |
;
Just so you know: when I deleted the recursion in BEGIN_BLOCK, the conflict went away.
EDIT :
There is another problem, and I suspect that if we solve it, the first one will be solved too.
When I write a TAB in any line of code, it is treated exactly as if the tab did not exist; that is, the code treats a tab exactly the same way it treats the 4 spaces.
As I wrote in the comments, shift / reduce conflicts are parser issues. They are being reported by Bison in your case, and they are a function of your grammar (only). If you ask it to do so via either or both of -v and -r all, Bison will produce an output file that shows you (among other things) exactly where such conflicts occur.
The grammar presented in the question is incomplete, but I made it into an input file that Bison would accept by adding section delimiters and adding a %token declaration for each symbol that is not otherwise defined. I also added the semicolon that appears to have been intended after the definition of the ELSE symbol, before the definition of BEGIN_BLOCK. Bison reported four shift / reduce conflicts for the result:
when the token on top of the stack is an IF and the next token is an mc_else, it is ambiguous whether to reduce zero tokens to an ELSE or to shift the mc_else in anticipation of performing a later reduction to an ELSE. This arises in part because the grammar accommodates nested conditionals, so it is a manifestation of the common issue of matching elses with the appropriate ifs.
when the tokens on top of the stack are mc_else mc_dblpnt and the next token is an mc_tab, it is ambiguous whether to reduce zero tokens to a BEGIN_BLOCK or to shift the mc_tab in anticipation of an INSTRUCTION to follow.
when a token sequence that can be reduced to a BEGIN_BLOCK is required and the next token is an mc_tab, it is ambiguous whether to reduce zero tokens to a BEGIN_BLOCK or to shift the mc_tab in anticipation of reducing to a BEGIN_BLOCK via the other production for that.
when the tokens on top of the stack are mc_if mc_opnArc COMPARISION mc_clsArc mc_dblpnt mc_jmp and the next token is an mc_tab, it is ambiguous whether to reduce zero tokens to a BEGIN_BLOCK or to shift the mc_tab in anticipation of an INSTRUCTION to follow.
A common theme emerges: empty rules are biting you hard. Such rules are by no means the only way that a shift / reduce conflict can emerge, but I presume that you will recognize that allowing the parser to create a nonterminal out of nothing is something to be handled with considerable care.
The name and usage of BEGIN_BLOCK in particular suggest a design problem, especially in conjunction with the fact that there is no corresponding END_BLOCK. Python's own parsing approach relies on the lexical analyzer to track indentation levels, and to emit synthetic indent and corresponding dedent tokens, as appropriate, when the indentation level changes. Sometimes a sequence of multiple dedents must be emitted to maintain correct indent / dedent pairing. And again, indents and dedents correspond to indentation level changes, not individual characters.
Making the lexer track indents and corresponding dedents allows for grammar rules along these lines:
if_stmt: IF conditional_expr COLON block optional_else ;
optional_else: /* empty */
| ELSE COLON block
;
block: INDENT stmts DEDENT ;
stmts: stmt
| stmts stmt /* note _left_ recursion */
;
stmt: ...
conditional_expr: ...
Note that the block structure is completely unambiguous -- a block begins with an indent and ends with a matching dedent. Although it may not be immediately obvious, that takes care not only of ambiguities such as arise from your empty BEGIN_BLOCK productions, but also ambiguities such as arise from your rules providing for optional else clauses. The latter are addressed because now the grammar allows at most one if with which any given else can pair.
You could do something similar.
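For illustration, here is a minimal sketch (not your code; handle_indent, emit, and the token values are made-up names) of the bookkeeping such a lexer performs: it keeps a stack of active indentation widths, pushes a level and emits an INDENT when a line is indented deeper, and pops levels, emitting one DEDENT per level closed, when the indentation decreases:

#include <stdio.h>

enum { TOK_INDENT, TOK_DEDENT };

static int indent_stack[64] = {0};   /* active indentation widths; [0] is the top level */
static int indent_top = 0;

/* Call at the start of each logical line with its measured indentation
   width; emit() delivers the synthetic tokens to the parser. */
static void handle_indent(int width, void (*emit)(int tok))
{
    if (width > indent_stack[indent_top]) {
        indent_stack[++indent_top] = width;   /* one new block opened */
        emit(TOK_INDENT);
    } else {
        while (indent_top > 0 && width < indent_stack[indent_top]) {
            --indent_top;                     /* may emit several DEDENTs */
            emit(TOK_DEDENT);
        }
        if (width != indent_stack[indent_top])
            fprintf(stderr, "inconsistent indentation\n");
    }
}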
I am new to flex and I want to design a scanner using it.
At this step, I want to write a regular expression to match an id, but there are some conditions:
underscores can appear in an id
you can use _ wherever you want, but if they appear consecutively, there can be at most 2 underscores, for example:
a_b_c »»»» true
a___b »»»» false
123abv »»»» false
digits can't be at the beginning of an id
an underscore can't be at the end of an id
The regular expression I have written for that is :
(\b(_{0,2}[A-Za-z][0-9A-Za-z]*(_{0,2}[0-9A-Za-z]+)*)\b)
but now I have 2 questions:
Is the regular expression correct? I have tested it on rubular.com and I think it is, but I'm not sure.
The other important problem is that when I write this in my flex file, unfortunately no id is identified, and I can't figure out why it is not recognized.
Can anyone please help me?
The problem here is your ID regular expression. You are using \b to match a word boundary, but Flex's regular expressions have no built-in support for matching word boundaries. Other than that, your regular expression is sound. I was able to get your code working using this modified version of yours: _{0,2}[A-Za-z][0-9A-Za-z]*(_{0,2}[0-9A-Za-z]+)*. (I just got rid of the \b's, and some of the parentheses that bothered me).
Unfortunately, this causes a slight problem. Say that you're lexing and run across something like 12_345. Flex will read 12, assume that it found an IC, and then read _. Finding no match, it will print that to stdout, then read 345 as another IC.
In order to avoid this issue (caused by Flex's lack of word boundaries), you could do one of two things:
Create a rule at the end that matches any character (other than whitespace) and make it give an error. This would stop Flex when it got to _ in the example above.
Create a rule at the end that matches any combination of letters, numbers, and underscores ([_0-9A-Za-z]+). If it is matched, give an error. This will cause Flex to return the entire token 12_345 as an error in the above example.
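For instance, the second option as a complete flex input might look like this minimal sketch (the ID token code and the error messages are illustrative, not from your project):

%{
enum { ID = 258 };   /* hypothetical token code; a real parser would define this */
%}
%option noyywrap
%%
_{0,2}[A-Za-z][0-9A-Za-z]*(_{0,2}[0-9A-Za-z]+)*   { return ID; }
[_0-9A-Za-z]+   { fprintf(stderr, "bad identifier: %s\n", yytext); }
[ \t\n]+        { /* skip whitespace */ }
.               { fprintf(stderr, "stray character: %s\n", yytext); }
%%
int main(void) { while (yylex()) ; return 0; }

With this, 12_345 matches the error rule as one token instead of being split into 12, _, and 345.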
One other problem: The ID regular expression still won't match anything with underscores at the end of it. This means your current regular expression isn't perfect, and you'll need to do some tweaking with it, but now you know not to use the \b symbol. Here is a reference on Flex's regular expression syntax so you can find other things to use/avoid.
I think your requirement is:
Identifiers can use only alphanumeric characters and _
Identifiers cannot start with a number
Identifiers cannot end with an _
Identifiers cannot include more than two consecutive _
(When I first read your question, I thought the last requirement was that identifiers cannot include more than two _, but looking at the proposed regex, I think the version above is more accurate.)
Based on the above, you should be able to use the following two Flex patterns:
([[:alpha:]]|__?[[:alnum:]])(_?_?[[:alnum:]])* { /* Handle an identifier */ }
[[:alpha:]_][[:alnum:]_]* { /* Error */ }
Breaking that down:
([[:alpha:]]|__?[[:alnum:]]) matches an alphabetic character or one or two _ followed by an alphanumeric character.
(_?_?[[:alnum:]])* matches a string of _s and alphanumeric characters, with a maximum of two _s before each alphanumeric character.
The second pattern will match anything which starts with an alphabetic character or _, followed by any number of alphanumerics or _s. This will match all valid identifiers, as well as the sequences which contain too many consecutive _s or which end with _. If both patterns match (that is, a valid identifier), the first one will win, so it will be correctly recognized. The second pattern will consume the entire erroneous identifier, allowing for easier error recovery.
The pattern in the OP doesn't work because flex treats \b as a backspace character (as in C). Flex does not implement word boundary assertions, but in a lexer you almost never need these; the pattern above can be used if necessary.
I'm trying to program a lexical analyzer to a standard C translation unit, so I've divided the possible tokens into 6 groups; for each group there's a regular expression, which will be converted to a DFA:
Keyword - (will have a symbol table containing "goto", "int"....)
Identifiers - [a-zA-Z][a-zA-Z0-9]*
Numeric Constants - [0-9]+\.?[0-9]*
String Constants - "[EVERY_ASCII_CHARACTER]*"
Special Symbols - (will have a symbol table containing ";", "(", "{"....)
Operators - (will have a symbol table containing "+", "-"....)
My Analyzer's input is a stream of bytes/ASCII characters. My algorithm is the following:
assuming there's a stream of characters, x1...xN
for (i = 1; i <= N; i++)
    if x1...xi is accepted by one or more of the 6 groups' DFAs
    {
        take the longest token
        add x1...xi to the token linked list
        delete x1...xi from the input
    }
However, this algorithm will treat every single letter it reads as an identifier, since after an input of 1 character it is already accepted by the DFA for identifier tokens ([a-zA-Z][a-zA-Z0-9]*).
Another possible problem is the input "intx;": my algorithm will tokenize this stream into "int", "x", ";", which of course is an error.
I'm trying to think about a new algorithm, but I keep failing. Any suggestions?
Code your scanner so that it treats identifiers and keywords the same until the reading is finished.
When you have the complete token, look it up in the keyword table, and designate it a keyword if you find it and an identifier if you don't. This deals with the intx problem immediately; the scanner reads intx, and that's not a keyword, so it must be an identifier.
I note that your identifiers don't allow underscores. That's not necessarily a problem, but many languages do allow underscores in identifiers.
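The classification step can be as simple as this sketch (the token codes and the keyword list are illustrative):

#include <string.h>

enum { TOK_KEYWORD, TOK_IDENTIFIER };

static const char *keywords[] = { "int", "goto", "if", "while", "return", 0 };

/* Called once the longest identifier-shaped token has been read. */
int classify_word(const char *word)
{
    for (int i = 0; keywords[i] != 0; i++)
        if (strcmp(word, keywords[i]) == 0)
            return TOK_KEYWORD;      /* exact match against the table */
    return TOK_IDENTIFIER;           /* "intx" lands here */
}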
Tokenizers generally FIRST split the input stream into tokens, based on rules which dictate what constitutes the END of a token, and only later decide what kind of token it is (or report an error otherwise). Typical token terminators are things like whitespace (when not part of a string literal), operators, special delimiters, etc.
It seems you are missing the greediness aspect of competing DFAs. Greedy matching is usually the most useful (leftmost-longest match) because it solves the problem of how to choose between competing DFAs. Once you've matched int, you have another node in the IDENTIFIER DFA that advances to intx. Your finite automaton doesn't exit until it reaches something it can't consume, and if it isn't in a valid accept state at the end of input, or at the point where another DFA is accepting, it is pruned and the other DFA is matched.
Flex, for example, defaults to greedy matching.
In other words, your proposed problem of intx isn't a problem...
If you have 2 rules that compete for int
rule 1 is the token "int"
rule 2 is IDENTIFIER
When we reach
i n t
we don't immediately ACCEPT int, because we see another rule (rule 2) where further input x advances the automaton to a NEXT state:
i n t x
If rule 2 is in an ACCEPT state at that point, then rule 1 is discarded by definition. But if rule 2 is still not in an ACCEPT state, we must keep rule 1 around while we examine more input, to see if we could eventually reach an ACCEPT state in rule 2 that is longer than rule 1. If we receive some other character that matches neither rule, we check whether the rule 2 automaton is in an ACCEPT state for intx; if so, it is the match, and if not, it is discarded and the longest previous match (rule 1) is accepted. In this case, however, rule 2 is in an ACCEPT state and matches intx.
In the case that 2 rules reach an ACCEPT or EXIT state simultaneously, then precedence is used (order of the rule in the grammar). Generally you put your keywords first so IDENTIFIER doesn't match first.
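This is exactly how a flex specification behaves, as in this minimal sketch (the token codes are hypothetical):

%{
enum { INT = 258, IDENTIFIER = 259 };   /* hypothetical token codes */
%}
%option noyywrap
%%
"int"                  { return INT; }          /* keyword rule listed first */
[a-zA-Z][a-zA-Z0-9]*   { return IDENTIFIER; }
[ \t\n]+               { /* skip whitespace */ }
%%
int main(void) { while (yylex()) ; return 0; }

On the input intx, the second rule matches four characters while the first matches only three, so IDENTIFIER wins by longest match; on the input int, both rules match three characters, and the earlier rule wins, yielding INT.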
I have a homework assignment for school. The goal is to create a really basic virtual machine as well as a simple assembler. I had no problem creating the virtual machine, but I can't think of a 'nice' way to create the assembler.
The grammar of this assembler is really basic: an optional label followed by a colon, then a mnemonic followed by 1, 2 or 3 operands. If there is more than one operand, they shall be separated by commas. Also, whitespace is ignored as long as it doesn't occur in the middle of a word.
I'm sure I can do this with strtok() and some black magic, but I'd prefer to do it in a 'clean' way. I've heard about Parse Trees/AST, but I don't know how to translate my assembly code into these kinds of structures.
I wrote an assembler like this when I was a teenager. You don't need a complicated parser at all.
All you need to do is five steps for each line:
Tokenize (i.e. split the line into tokens). This will give you an array of tokens, and then you don't need to worry about the whitespace, because you will have removed it during tokenization (see the sketch after this list).
Initialize some variables representing parts of the line to NULL.
A sequence of if statements to walk over the token array and check which parts of the line are present. If they are present, put the token (or a processed version of it) in the corresponding variable; otherwise leave that variable as NULL (i.e. do nothing).
Report any syntax errors (i.e. combinations of types of tokens that are not allowed).
Code generation - I guess you know how to do this part!
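A minimal sketch of the tokenizing step for the grammar in the question (the struct and its field names are illustrative; it also assumes the colon is attached to the label):

#include <string.h>

struct line_parts {
    char *label;        /* NULL if the line has no label */
    char *mnemonic;     /* NULL for a blank or label-only line */
    char *operands[3];  /* up to 3 operands, NULL when absent */
};

/* Splits one line in place using strtok(). */
void split_line(char *line, struct line_parts *out)
{
    memset(out, 0, sizeof *out);
    char *tok = strtok(line, " \t\n");
    if (tok == NULL)
        return;                          /* blank line */
    size_t len = strlen(tok);
    if (tok[len - 1] == ':') {           /* "loop:" -> label */
        tok[len - 1] = '\0';
        out->label = tok;
        tok = strtok(NULL, " \t\n");
        if (tok == NULL)
            return;                      /* label-only line */
    }
    out->mnemonic = tok;
    for (int i = 0; i < 3; i++)          /* operands separated by commas */
        out->operands[i] = strtok(NULL, ", \t\n");
}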
What you're looking for is actually lexical analysis, parsing, and finally the generation of the compiled code. There are a lot of frameworks out there which help with creating/generating a parser, like Gold Parser or ANTLR. Creating a language definition (and learning how to, depending on the framework you use) is most often quite a lot of work.
I think you're best off implementing the shunting-yard algorithm, which converts your source into a representation that computers understand and that is easy for your virtual machine to process.
I also want to say that diving into parsers, abstract syntax trees, all the tools available on the web and reading a lot of papers about this subject is a really good learning experience!
You can take a look at some already-made assemblers, like PASMO, an assembler for the Z80 CPU, and get ideas from it. Here it is:
http://pasmo.speccy.org/
I've written a couple of very simple assemblers, both of them using string manipulation with strtok() and the like. For a grammar as simple as assembly language, that's enough. The key pieces of my assemblers are:
A symbol table: just an array of structs, with the name of a symbol and its value.
typedef struct
{
    char nombre[256];   /* symbol name */
    u8 valor;           /* symbol value */
} TSymbol;

TSymbol tablasim[MAXTABLA];   /* the symbol table */
int maxsim = 0;               /* number of entries in use */
A symbol is just a name that has a value associated with it. This value can be the current position (the address where the next instruction will be assembled), or it can be an explicit value assigned by the EQU pseudoinstruction.
Symbol names in this implementation are limited to 255 characters each, and one source file is limited to MAXTABLA symbols.
I perform two passes to the source code:
The first one is to identify symbols and store them in the symbol table, detecting whether or not they are followed by an EQU instruction. If there is one, the value next to EQU is parsed and assigned to the symbol. Otherwise, the value of the current position is assigned. To update the current position, I have to detect whether there is a valid instruction (although I do not assemble it yet) and update it accordingly (this is easy for me because my CPU has a fixed instruction size).
Here you have a sample of my code that is in charge of updating the symbol table with a value, either from EQU or from the current position, and advancing the current position if needed.
case 1:
    if (es_equ (token))
    {
        token = strtok (NULL, "\n");
        tablasim[maxsim].valor = parse_numero (token, &err);
        if (err)
        {
            if (err==1)
                fprintf (stderr, "Syntax error on line %d\n", nlinea);
            else if (err==2)
                fprintf (stderr, "Symbol [%s] not found on line %d\n", token, nlinea);
            estado = 2;
        }
        else
        {
            maxsim++;
            token = NULL;
            estado = 0;
        }
    }
    else
    {
        tablasim[maxsim].valor = pcounter;   /* the symbol gets the current position */
        maxsim++;
        if (es_instruccion (token))
            pcounter++;                      /* fixed instruction size: advance by one */
        token = NULL;
        estado = 0;
    }
    break;
The second pass is where I actually assemble instructions, replacing a symbol with its value when I find one. It's rather simple, using strtok() to split a line into its components, and using strncasecmp() to compare what I find with the instruction mnemonics.
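The mnemonic lookup in that second pass can be as small as this sketch (the table contents are illustrative):

#include <string.h>
#include <strings.h>   /* strncasecmp() */

static const char *mnemonics[] = { "LOAD", "STORE", "ADD", "JMP", 0 };

/* Returns the table index of the mnemonic, or -1 if it is unknown. */
int find_mnemonic(const char *tok)
{
    for (int i = 0; mnemonics[i] != 0; i++)
        if (strncasecmp(tok, mnemonics[i], strlen(mnemonics[i]) + 1) == 0)
            return i;   /* the +1 includes the terminator, forcing an exact-length match */
    return -1;
}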
If the operands can be expressions, like "1 << (x + 5)", you will need to write a parser. If not, the parser is so simple that you do not need to think in those terms. For each line, get the first string (skipping whitespace). Does the string end with a colon? Then it is a label; otherwise it is the mnemonic. Etc.
For an assembler there's little need to build an explicit parse tree. Some assemblers do have fancy linkers capable of resolving complicated expressions at link time, but for a basic assembler an ad-hoc lexer and parser should do fine.
In essence you write a little lexer which consumes the input file character-by-character and classifies everything into simple tokens, e.g. numbers, labels, opcodes and special characters.
I'd suggest writing a BNF grammar even if you're not using a code generator. This specification may then be translated into a recursive-descent parser almost by rote. The parser simply walks through the whole code and emits assembled binary code along the way.
A symbol table registering every label and its value is also needed, traditionally implemented as a hash table. Initially, when encountering an unknown label (say, for a forward branch), you may not yet know its value, so it is simply filed away for future reference.
The trick is then to spit out dummy values for labels and expressions the first time around, while computing the label addresses as the program counter is incremented, and then take a second pass through the entire file to fill in the real values.
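In outline, under the assumption of hypothetical helpers (starts_with_label, label_name, symtab_define, instruction_size, assemble_line), the two-pass structure looks like this:

#include <stdio.h>

/* Hypothetical helpers, assumed to live elsewhere in the assembler. */
int starts_with_label(const char *line);
const char *label_name(const char *line);
void symtab_define(const char *name, int value);
int instruction_size(const char *line);
int assemble_line(const char *line, int pc, FILE *out);

void assemble(FILE *in, FILE *out)
{
    char line[256];
    int pc = 0;

    /* Pass 1: only track addresses and record label values. */
    while (fgets(line, sizeof line, in)) {
        if (starts_with_label(line))
            symtab_define(label_name(line), pc);
        pc += instruction_size(line);   /* 0 for blank or label-only lines */
    }

    rewind(in);
    pc = 0;

    /* Pass 2: every label is now known, so emit the real code. */
    while (fgets(line, sizeof line, in))
        pc += assemble_line(line, pc, out);
}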
For a simple assembler, e.g. no linker or macro facilities and a simple instruction set, you can get by with perhaps a thousand or so lines of code, much of it brainless, thought-free hand translation from syntax descriptions and opcode tables.
Oh, and I strongly recommend that you check out the dragon book from your local university library as soon as possible.
At least in my experience, normal lexer/parser generators (e.g., flex, bison/byacc) are all but useless for this task.
When I've done it, nearly the entire thing has been heavily table driven -- typically one table of mnemonics, and for each of those a set of indices into a table of instruction formats, specifying which formats are possible for that instruction. Depending on the situation, it can make sense to do that on a per-operand rather than a per-instruction basis (e.g., for mov instructions that have a fairly large set of possible formats for both the source and the destination).
In a typical case, you'll have to look at the format(s) of the operand(s) to determine the instruction format for a particular instruction. For a fairly typical example, a format of #x might indicate an immediate value, x a direct address, and @x an indirect address. Another common form for an indirect address is (x) or [x], but for your first assembler I'd try to stick to a format that specifies the instruction format/addressing mode based only on the first character of the operand, if possible.
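Keying the addressing mode off the first character then reduces to a switch; the @x convention and the mode names here are assumptions for illustration:

/* "#42" -> immediate, "@label" -> indirect, anything else -> direct */
enum addr_mode { MODE_IMMEDIATE, MODE_INDIRECT, MODE_DIRECT };

enum addr_mode operand_mode(const char *operand)
{
    switch (operand[0]) {
    case '#': return MODE_IMMEDIATE;
    case '@': return MODE_INDIRECT;
    default:  return MODE_DIRECT;   /* a plain symbol or number */
    }
}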
Parsing labels is simpler, and (mostly) separate. Basically, each label is just a name with an address.
As an aside, if possible I'd probably follow the typical format of a label ending with a colon (":") instead of a semicolon (";"). Much more often, a semicolon will mark the beginning of a comment.
I have some experience writing parsers with ANTLR and I am trying (for self-education :) ) to port one of them to PEG (Parsing Expression Grammar).
As I am trying to get a feel for the idea, one thing strikes me as cumbersome, to the degree that I feel I have missed something: how to deal with whitespace.
In ANTLR, the normal way to deal with whitespace and comments was to put the tokens in a hidden channel, but with PEG grammars there is no tokenization step. Considering languages such as C or Java, where comments are allowed almost everywhere, one would like to "hide" the comments right away, but since the comments may have semantic meaning (for example, when generating code documentation, class diagrams, etc.), one would not just like to discard them.
So, is there a way to deal with this?
Because there is no separate tokenization phase, there is no "time" to discard certain characters (or tokens).
Since you're familiar with ANTLR, think of it like this: let's say ANTLR handles only PEG. So you only have parser rules, no lexer rules. Now how would you discard, say, spaces? (you can't).
So, the answer to your question is: you can't; you'll have to litter your grammar with space-rules in the PEG:
ANTLR
add_expr
: Num Add Num
;
Add : '+';
Num : '0'..'9'+;
Space : ' '+ {skip();};
PEG
add_expr
: num _ '+' _ num
;
num : '0'..'9'+;
_ : ' '*;
It is possible to nest PEG parsers. The idea is that the first parser consumes characters and feeds tokens to the second parser. The second PEG parser consumes tokens and does the real work.
Of course this means that you give up one advantage of Parsing Expression Grammars compared to other parsing schemes: their simplicity.
I'm writing code for exercise 1-24 in K&R2, which asks you to write a basic syntax checker.
I made a parser with states normal, dquote, squote etc...
So I'm wondering if a code snippet like
/" text "
is allowed in the code? Should I report this as an error? (The problem is my parser goes into comment_entry state after / and ignores the ".)
Since a single / just means division, it should not be interpreted as a comment. There is no division operator defined for strings, so something like "abc"/"def" doesn't make much sense, but it should not be a syntax error. Figuring out whether this division is possible should not be done by the parser, but should be left for later stages of the compilation.
That is syntactically valid, but not semantically. It should parse as the division operator followed by a string literal. You can't divide stuff by a string literal, so it's not legal code, overall.
Comments start with a two-character token, /*, and end with */.
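In other words, the lexer needs one character of lookahead after a /, as in this minimal sketch:

#include <stdio.h>

/* After reading '/', peek at the next character: a '*' opens a comment,
   anything else means the '/' was the division operator. */
int handle_slash(FILE *in)
{
    int c = fgetc(in);
    if (c == '*') {
        int prev = 0;
        while ((c = fgetc(in)) != EOF) {   /* consume up to the closing   */
            if (prev == '*' && c == '/')   /* star-slash pair             */
                return 0;                  /* comment handled             */
            prev = c;
        }
        return -1;                         /* EOF inside a comment: error */
    }
    ungetc(c, in);                         /* not a comment: division     */
    return '/';
}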
As a standalone syntactical element this should be reported as an error.
Theoretically (as part of an expression) it would be possible to write
a= b /"text"; / a = b divided through address of string literal "text"
which is also wrong (you can't divide by a pointer).
But at the surface level it would seem okay, because it would syntactically decode as: variable operator variable operator constant-expression (address of a string).
The real error would probably have to be caught in a deeper stage of syntactic analysis (i.e. when checking whether the given types are suitable for the division operator).