ANTLR error when generating C file using ANTLRWorks

I am using ANTLRWorks 1.5 for C (ANTLR 3.5).
I created a lexer and parser file.
When trying to generate code, it returns the following error: <[18:52:50] error(100): Script.g:57:2: syntax error: antlr: MissingTokenException(inserted [#-1,0:0='<missing EOF>',<-1>,57:1] at options {)>.
Here is the code; please tell me what I'm missing.
/* ############################## L E X E R ############################ */
grammar Lexer;
options {
language = C;
output = AST; //generating an AST
ASTLabelType = pANTLT3_BASE_TREE; //specifying a tree walker
k=1; // Only 1 lookahead character required
}
// Define string values - either simple unquoted or complex quoted
STRING : ('a'..'z'|'A'..'Z'|'0'..'9'|'_' | '+')+
| ('"' (~'"')* '"');
// Ignore all whitespace
WS :(' '
| '\t'
| '\r' '\n' { newline(); }
| '\n' { newline(); }
)
{ $setType(Token.SKIP); } ;
// TODO:Single-line comment
LINE_COMMENT : '/*' (~('\n'|'\r'))* ('\n'|'\r'('\n')?)?
{ $setType(Token.SKIP); newline(); } ;
// Punctuation
LBRACE : '<';
RBRACE : '>';
SLASH : '/';
EQUALS : '=';
SEMI : ';';
TRIGGER : ('Trigger');
TRIGGERTYPE : ('Fall') SLASH ('Rise')|('Rise') SLASH ('Fall')|('Fall')|('Rise');
DEFAULT : ('Default TimeSet');
TIMESETVAL : ('TSET_')('0..9')*;
/* ############################## P A R S E R ############################ */
grammar Script;
options {
language=C;
output=AST; // Automatically build the AST while parsing
ASTLabelType=pANTLR3_BASE_TREE;
//k=2; // Need lookahead of two for props without keys (to check for the =)
}
/*tokens {
SCRIPT; // Imaginary token inserted at the root of the script
BLOCK; // Imaginary token inserted at the root of a block
COMMAND; // Imaginary token inserted at the root of a command
PROPERTY; // Imaginary token inserted at the root of a property
}*/
/** Rule to parse Trigger line
*/
trigger : TRIGGER EQUALS TRIGGERTYPE SEMI;
/** Rule to parse TimeSet line
*/
timeset : DEFAULT TIMESETVAL;

Your "combined" grammar Lexer only has lexer rules, while when you only define grammar, ANTLR expects at least 1 parser rule.
There are 3 different types of grammars
combined grammar: grammar Foo, generates:
class FooParser extends Parser and
class FooLexer extends Lexer
parser grammar: parser grammar Bar, generates:
class Bar extends Parser
lexer grammar: lexer grammar Baz, generates:
class Baz extends Lexer
So, in your case, change grammar Lexer; into lexer grammar ScriptLexer; (don't name your lexer grammar Lexer, since Lexer is the base lexer class in ANTLR!) and import this lexer in your parser grammar:
parser grammar ScriptParser;
import ScriptLexer;
options {
language=C;
output=AST;
ASTLabelType=pANTLR3_BASE_TREE;
}
// ...
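For completeness, the renamed lexer side would then start like this (a sketch: only the header and options change; the rules themselves stay as you wrote them):
lexer grammar ScriptLexer;
options {
language = C;
}
// STRING, WS, LINE_COMMENT, the punctuation rules, TRIGGER,
// TRIGGERTYPE, DEFAULT and TIMESETVAL as before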
Related:
ANTLR3 Wiki: Composite Grammars


ANTLR island grammars and a non-greedy rule that consumes too much

I'm having a problem with an island grammar and a non-greedy rule used to consume "everything except what I want".
Desired outcome:
My input file is a C header file, containing function declarations along with typedefs, structs, comments, and preprocessor definitions.
My desired output is parsing and subsequent transformation of function declarations ONLY. I would like to ignore everything else.
Setup and what I've tried:
The header file I'm attempting to lex and parse is very uniform and consistent.
Every function declaration is preceded by a linkage macro PK_linkage_m and all functions return the same type PK_ERROR_code_t, ex:
PK_linkage_m PK_ERROR_code_t PK_function(...);
These tokens don't appear anywhere other than at the start of a function declaration.
I have approached this as an island grammar, that is, function declarations in a sea of text.
I have tried to use the linkage token PK_linkage_m to indicate the end of the "TEXT" and the PK_ERROR_code_t token as the start of the function declaration.
Observed problem:
While lexing and parsing a single function declaration works, it fails when I have more than one function declaration in a file. The token stream shows that everything, plus all function declarations, plus the PK_ERROR_code_t of the last function declaration, is consumed as text, and then only the last function declaration in the file is correctly parsed.
My one line summary is: My non-greedy grammar rule to consume everything before the PK_ERROR_code_t is consuming too much.
What I perhaps incorrectly believe is the solution:
Fix my lexer non-greedy rule somehow so that it consumes everything until it finds the PK_linkage_m token. My non-greedy rule appears to consume too much.
What I haven't tried:
As this is my first ANTLR project, and my first language parsing project in a very long time, I'd be more than happy to rewrite it if I'm wrong and getting wronger. I was considering using line terminators to skip everything that doesn't start with a newline, but I'm not sure how to make that work and not sure how it's fundamentally different.
Here is my lexer file KernelLexer.g4:
lexer grammar KernelLexer;
// lexer should ignore everything except function declarations
// parser should never see tokens that are irrelevant
@lexer::members {
public static final int WHITESPACE = 1;
}
PK_ERROR: 'PK_ERROR_code_t' -> mode(FUNCTION);
PK_LINK: 'PK_linkage_m';
// Doesn't work. Once it starts consuming, it doesn't stop.
TEXT_SEA: .*? PK_LINK -> skip;
TEXT_WS: ( ' ' | '\r' | '\n' | '\t' ) -> skip;
mode FUNCTION;
//These constants must go above ID rule because we want these to match first.
CONST: 'const';
OPEN_BLOCK: '(';
CLOSE_BLOCK: ');' -> mode(DEFAULT_MODE);
COMMA: ',';
STAR: '*';
COMMENTED_NAME: '/*' ID '*/';
COMMENT_RECEIVED: '/* received */' -> skip;
COMMENT_RETURNED: '/* returned */' -> skip;
COMMENT: '/*' .*? '*/' -> skip;
ID : ID_LETTER (ID_LETTER | DIGIT)*;
fragment ID_LETTER: 'a'..'z' | 'A'..'Z' | '_';
fragment DIGIT: '0'..'9';
WS: ( ' ' | '\r' | '\n' | '\t' ) -> skip;//channel(1);
Here is my parser file KernelParser.g4:
parser grammar KernelParser;
options { tokenVocab=KernelLexer; }
file : func_decl+;
func_decl : PK_ERROR ID OPEN_BLOCK param_block CLOSE_BLOCK;
param_block: param_decl*;
param_decl: type_decl COMMENTED_NAME COMMA?;
type_decl: CONST? STAR* ID STAR* CONST?;
Here is a simple example input file:
/*some stuff*/
other stuff;
PK_linkage_m PK_ERROR_code_t PK_CLASS_ask_superclass
(
/* received */
PK_CLASS_t /*class*/, /* a class */
/* returned */
PK_CLASS_t *const /*superclass*/ /* immediate superclass of class */
);
/*some stuff*/
blar blar;
PK_linkage_m PK_ERROR_code_t PK_CLASS_is_subclass
(
/* received */
PK_CLASS_t /*may_be_subclass*/, /* a potential subclass */
PK_CLASS_t /*class*/, /* a class */
/* returned */
PK_LOGICAL_t *const /*is_subclass*/ /* whether it was a subclass */
);
more stuff;
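(For reference: token dumps like the one below come from ANTLR's TestRig. Assuming the usual antlr4 and grun shell aliases from the ANTLR 4 getting-started instructions, the steps would be:)
$ antlr4 KernelLexer.g4 KernelParser.g4
$ javac *.java
$ grun Kernel file -tokens input.txt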
Here is the token output:
line 28:0 token recognition error at: 'more stuff;\r\n'
[#0,312:326='PK_ERROR_code_t',<'PK_ERROR_code_t'>,18:13]
[#1,328:347='PK_CLASS_is_subclass',<ID>,18:29]
[#2,350:350='(',<'('>,19:0]
[#3,369:378='PK_CLASS_t',<ID>,21:0]
[#4,390:408='/*may_be_subclass*/',<COMMENTED_NAME>,21:21]
[#5,409:409=',',<','>,21:40]
[#6,439:448='PK_CLASS_t',<ID>,22:0]
[#7,460:468='/*class*/',<COMMENTED_NAME>,22:21]
[#8,469:469=',',<','>,22:30]
[#9,512:523='PK_LOGICAL_t',<ID>,24:0]
[#10,525:525='*',<'*'>,24:13]
[#11,526:530='const',<'const'>,24:14]
[#12,533:547='/*is_subclass*/',<COMMENTED_NAME>,24:21]
[#13,587:588=');',<');'>,25:0]
[#14,608:607='<EOF>',<EOF>,29:0]
It's always difficult to cope with lexer rules "reading everything but ...", but you are on the right path.
After commenting out the skip action on TEXT_SEA (TEXT_SEA: .*? PK_LINK ; //-> skip;), I observed that the first function was consumed by a second TEXT_SEA token (because lexer rules are greedy, TEXT_SEA gives PK_ERROR no chance to be seen):
$ grun Kernel file -tokens input.txt
line 27:0 token recognition error at: 'more stuff;'
[#0,0:41='/*some stuff*/\n\nother stuff;\n\nPK_linkage_m',<TEXT_SEA>,1:0]
[#1,42:292=' PK_ERROR_code_t PK_CLASS_ask_superclass\n(\n/* received */\nPK_CLASS_t
/*class*/, /* a class */\n/* returned */\nPK_CLASS_t *const /*superclass*/
/* immediate superclass of class */\n);\n\n/*some stuff*/\nblar blar;\n\n\n
PK_linkage_m',<TEXT_SEA>,5:12]
[#2,294:308='PK_ERROR_code_t',<'PK_ERROR_code_t'>,17:13]
[#3,310:329='PK_CLASS_is_subclass',<ID>,17:29]
This gave me the idea to use TEXT_SEA both as "sea consumer" and starter of the function mode.
lexer grammar KernelLexer;
// lexer should ignore everything except function declarations
// parser should never see tokens that are irrelevant
@lexer::members {
public static final int WHITESPACE = 1;
}
PK_LINK: 'PK_linkage_m' ;
TEXT_SEA: .*? PK_LINK -> mode(FUNCTION);
LINE : .*? ( [\r\n] | EOF ) ;
mode FUNCTION;
//These constants must go above ID rule because we want these to match first.
CONST: 'const';
OPEN_BLOCK: '(';
CLOSE_BLOCK: ');' -> mode(DEFAULT_MODE);
COMMA: ',';
STAR: '*';
PK_ERROR : 'PK_ERROR_code_t' ;
COMMENTED_NAME: '/*' ID '*/';
COMMENT_RECEIVED: '/* received */' -> skip;
COMMENT_RETURNED: '/* returned */' -> skip;
COMMENT: '/*' .*? '*/' -> skip;
ID : ID_LETTER (ID_LETTER | DIGIT)*;
fragment ID_LETTER: 'a'..'z' | 'A'..'Z' | '_';
fragment DIGIT: '0'..'9';
WS: [ \t\r\n]+ -> channel(HIDDEN) ;
parser grammar KernelParser;
options { tokenVocab=KernelLexer; }
file : ( TEXT_SEA | func_decl | LINE )+;
func_decl
: PK_ERROR ID OPEN_BLOCK param_block CLOSE_BLOCK
{System.out.println("---> Found declaration on line " + $start.getLine() + " `" + $text + "`");}
;
param_block: param_decl*;
param_decl: type_decl COMMENTED_NAME COMMA?;
type_decl: CONST? STAR* ID STAR* CONST?;
Execution :
$ grun Kernel file -tokens input.txt
[#0,0:41='/*some stuff*/\n\nother stuff;\n\nPK_linkage_m',<TEXT_SEA>,1:0]
[#1,42:42=' ',<WS>,channel=1,5:12]
[#2,43:57='PK_ERROR_code_t',<'PK_ERROR_code_t'>,5:13]
[#3,58:58=' ',<WS>,channel=1,5:28]
[#4,59:81='PK_CLASS_ask_superclass',<ID>,5:29]
[#5,82:82='\n',<WS>,channel=1,5:52]
[#6,83:83='(',<'('>,6:0]
...
[#24,249:250=');',<');'>,11:0]
[#25,251:292='\n\n/*some stuff*/\nblar blar;\n\n\nPK_linkage_m',<TEXT_SEA>,11:2]
[#26,293:293=' ',<WS>,channel=1,17:12]
[#27,294:308='PK_ERROR_code_t',<'PK_ERROR_code_t'>,17:13]
[#28,309:309=' ',<WS>,channel=1,17:28]
[#29,310:329='PK_CLASS_is_subclass',<ID>,17:29]
[#30,330:330='\n',<WS>,channel=1,17:49]
[#31,331:331='(',<'('>,18:0]
...
[#55,562:563=');',<');'>,24:0]
[#56,564:564='\n',<LINE>,24:2]
[#57,565:565='\n',<LINE>,25:0]
[#58,566:566='\n',<LINE>,26:0]
[#59,567:577='more stuff;',<LINE>,27:0]
[#60,578:577='<EOF>',<EOF>,27:11]
---> Found declaration on line 5 `PK_ERROR_code_t PK_CLASS_ask_superclass
(
PK_CLASS_t /*class*/,
PK_CLASS_t *const /*superclass*/
);`
---> Found declaration on line 17 `PK_ERROR_code_t PK_CLASS_is_subclass
(
PK_CLASS_t /*may_be_subclass*/,
PK_CLASS_t /*class*/,
PK_LOGICAL_t *const /*is_subclass*/
);`
Instead of including .*? at the start of a rule (which I'd always try to avoid), why don't you try to match either:
a PK_ERROR in the default mode (and switch to another mode like you're now doing),
or else match a single character and skip it?
Something like this:
lexer grammar KernelLexer;
PK_ERROR : 'PK_ERROR_code_t' -> mode(FUNCTION);
OTHER : . -> skip;
mode FUNCTION;
// the rest of your rules as you have them now
Note that this will match PK_ERROR_code_t as well for the input "PK_ERROR_code_t_MU ...", so this would be a safer way:
lexer grammar KernelLexer;
PK_ERROR : 'PK_ERROR_code_t' -> mode(FUNCTION);
OTHER : ( [a-zA-Z_] [a-zA-Z_0-9]* | . ) -> skip;
mode FUNCTION;
// the rest of your rules as you have them now
Your parser grammar could then look like this:
parser grammar KernelParser;
options { tokenVocab=KernelLexer; }
file : func_decl+ EOF;
func_decl : PK_ERROR ID OPEN_BLOCK param_block CLOSE_BLOCK;
param_block : param_decl*;
param_decl : type_decl COMMENTED_NAME COMMA?;
type_decl : CONST? STAR* ID STAR* CONST?;
causing your example input to be parsed as two func_decl subtrees, with everything else silently dropped by the lexer.

The if statement does not work with my grammar

I have an issue with the if statement in my grammar, which can be found here: http://sd-g1.archive-host.com/membres/up/24fe084677d7655eb57ba66e1864081450017dd9/CNew.txt . When I type, for example, the following and then press Ctrl+D:
int k = 0;
if ( k ==0 ){
return k;
}
the tree parser stops at "if(", and the console does not state any reason. Does anyone know where the issue may come from, please?
Assuming the entry point of your grammar is translation_unit, it looks like the parser simply stops after it matched a single external_declaration. Try adding the EOF (end of file) token at the end of that rule so that the parser is forced to match the entire input:
translation_unit
: external_declaration+ EOF
;
However, I don't see how an external_declaration would ever match an if-statement (a selection_statement) in your grammar. Perhaps you want to add a statement to your external_declaration:
translation_unit
scope Symbols; // entire file is a scope
@init {
$Symbols::types = new HashSet();
}
: (external_declaration)+ EOF
;
external_declaration
: function_definition
| declaration
| statement
;
after which your input will get properly parsed.

Flex and Bison Calculator

I'm trying to implement a calculator for nor expressions, such as true nor true nor (false nor false) using Flex and Bison, but I keep getting my error message back. Here is my .l file:
%{
#include <stdlib.h>
#include "y.tab.h"
%}
%%
("true"|"false") {return BOOLEAN;}
.|\n {yyerror();}
%%
int main(void)
{
yyparse();
return 0;
}
int yywrap(void)
{
return 0;
}
int yyerror(void)
{
printf("Error\n");
}
Here is my .y file:
/* Bison declarations. */
%token BOOLEAN
%left 'nor'
%% /* The grammar follows. */
input:
/* empty */
| input line
;
line:
'\n'
| exp '\n' { printf ("%s",$1); }
;
exp:
BOOLEAN { $$ = $1; }
| exp 'nor' exp { $$ = !($1 || $3); }
| '(' exp ')' { $$ = $2; }
;
%%
Does anyone see the problem?
The simple way to handle all the single-character tokens, which as @vitaut correctly says you aren't handling at all yet, is to return yytext[0] for the dot rule, and let the parser sort out which ones are legal.
You have also lost the values of the BOOLEANs 'true' and 'false', which should be stored into yylval as 1 and 0 respectively, which will then turn up in $1, $3 etc. If you're going to have more datatypes in the longer term, you need to look into the %union directive.
The reason why you get errors is that your lexer only recognizes one type of token, namely BOOLEAN, but not the newline, parentheses or nor (and you produce an error for everything else). For single letter tokens like parentheses and the newline you can return the character itself as a token type:
\n { return '\n'; }
For nor, though, you should introduce a token type like you did for BOOLEAN and add an appropriate rule to the lexer.
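Putting those fixes together, a corrected version might look like this (a sketch: the token name NOR and the %d format are my additions, not from the original post). The .l rules section:
"true"   { yylval = 1; return BOOLEAN; } /* keep the value, not just the kind */
"false"  { yylval = 0; return BOOLEAN; }
nor      { return NOR; }
[ \t]    { /* skip blanks */ }
.|\n     { return yytext[0]; } /* '(', ')' and '\n' reach the parser as themselves */
And the matching .y declarations and grammar:
%token BOOLEAN
%token NOR
%left NOR
%%
input: /* empty */
| input line
;
line: '\n'
| exp '\n' { printf ("%d\n", $1); } /* $1 is an int, so %d rather than %s */
;
exp: BOOLEAN { $$ = $1; }
| exp NOR exp { $$ = !($1 || $3); }
| '(' exp ')' { $$ = $2; }
;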

How to output the AST built using ANTLR?

I'm making a static analyzer for C.
I have written the lexer and parser using ANTLR, which generates Java code.
Does ANTLR build the AST for us automatically via options {output=AST;}? Or do I have to make the tree myself? If it does, then how do I spit out the nodes of that AST?
I am currently thinking that the nodes on that AST will be used for making SSA, followed by data flow analysis in order to make the static analyzer. Am I on the right path?
Raphael wrote:
Does antlr build the AST for us automatically by option{output=AST;}? Or do I have to make the tree myself? If it does, then how to spit out the nodes on that AST?
No, the parser does not know what you want as root and as leaves for each parser rule, so you'll have to do a bit more than just put options { output=AST; } in your grammar.
For example, when parsing the source "true && (false || true && (true || false))" using the parser generated from the grammar:
grammar ASTDemo;
options {
output=AST;
}
parse
: orExp
;
orExp
: andExp ('||' andExp)*
;
andExp
: atom ('&&' atom)*
;
atom
: 'true'
| 'false'
| '(' orExp ')'
;
// ignore white space characters
Space
: (' ' | '\t' | '\r' | '\n') {$channel=HIDDEN;}
;
a completely flat parse tree is generated: just a one-dimensional list of the tokens, with no hierarchy.
You'll want to tell ANTLR which tokens in your grammar become root, leaves, or simply left out of the tree.
Creating ASTs can be done in two ways:
use rewrite rules which look like this: foo : A B C D -> ^(D A B);, where foo is a parser rule that matches the tokens A B C D. So everything after the -> is the actual rewrite rule. As you can see, the token C is not used in the rewrite rule, which means it is omitted from the AST. The token placed directly after the ^( will become the root of the tree;
use the tree-operators ^ and ! after a token inside your parser rules where ^ will make a token the root, and ! will delete a token from the tree. The equivalent for foo : A B C D -> ^(D A B); would be foo : A B C! D^;
Both foo : A B C D -> ^(D A B); and foo : A B C! D^; will produce the same AST: D as the root, with A and B as its children.
Now, you could rewrite the grammar as follows:
grammar ASTDemo;
options {
output=AST;
}
parse
: orExp
;
orExp
: andExp ('||'^ andExp)* // Make `||` root
;
andExp
: atom ('&&'^ atom)* // Make `&&` root
;
atom
: 'true'
| 'false'
| '(' orExp ')' -> orExp // Just a single token, no need to do `^(...)`,
// we're removing the parenthesis. Note that
// `'('! orExp ')'!` will do exactly the same.
;
// ignore white space characters
Space
: (' ' | '\t' | '\r' | '\n') {$channel=HIDDEN;}
;
which will create a properly nested AST from the source "true && (false || true && (true || false))", with the '&&' and '||' tokens as the roots of their subtrees.
Related ANTLR wiki links:
Tree construction
Tree parsing
Tree construction facilities
Raphael wrote:
I am currently thinking that the nodes on that AST will be used for making SSA, followed by data flow analysis in order to make the static analyzer. Am I on the right path?
Never did anything like that, but IMO the first thing you'd want is an AST from the source, so yeah, I guess you're on the right path! :)
EDIT
Here's how you can use the generated lexer and parser:
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;
import org.antlr.stringtemplate.*;
public class Main {
public static void main(String[] args) throws Exception {
String src = "true && (false || true && (true || false))";
ASTDemoLexer lexer = new ASTDemoLexer(new ANTLRStringStream(src));
ASTDemoParser parser = new ASTDemoParser(new CommonTokenStream(lexer));
CommonTree tree = (CommonTree)parser.parse().getTree();
DOTTreeGenerator gen = new DOTTreeGenerator();
StringTemplate st = gen.toDOT(tree);
System.out.println(st);
}
}
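If you'd rather print the nodes yourself than render DOT, a small recursive walk over the returned tree also works (a sketch against the ANTLR 3 runtime's Tree interface; printTree is just an illustrative name):
// Print each AST node on its own line, indented by its depth in the tree.
static void printTree(org.antlr.runtime.tree.Tree t, int indent) {
    for (int i = 0; i < indent; i++) System.out.print("  ");
    System.out.println(t.getText());
    for (int i = 0; i < t.getChildCount(); i++) {
        printTree(t.getChild(i), indent + 1);
    }
}
Call it as printTree(tree, 0); right after the parse in main above. The DOT text printed by main can also be rendered with Graphviz, e.g. dot -Tpng tree.dot -o tree.png.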

How to get entire input string in Lex and Yacc?

OK, so here is the deal.
In my language I have some commands, say
XYZ 3 5
GGB 8 9
HDH 8783 33
And in my Lex file
XYZ { return XYZ; }
GGB { return GGB; }
HDH { return HDH; }
[0-9]+ { yylval.ival = atoi(yytext); return NUMBER; }
\n { return EOL; }
In my yacc file
start : commands
;
commands : command
| command EOL commands
;
command : xyz
| ggb
| hdh
;
xyz : XYZ NUMBER NUMBER { /* Do something with the numbers */ }
;
etc. etc. etc. etc.
My question is, how can I get the entire text
XYZ 3 5
GGB 8 9
HDH 8783 33
into commands while still returning the NUMBERs?
Also, when my Lex returns a STRING [0-9a-zA-Z]+ and I want to verify its length, should I do it like
rule: STRING STRING { if (strlen($1) < 5 ) /* Do some shit else error */ }
or actually have a token in my Lex that returns different tokens depending on length?
If I've understood your first question correctly, you can have semantic actions like
{ $$ = makeXYZ($2, $3); }
which will allow you to build the value of command as you want.
For your second question, the borders between lexical analysis and grammatical analysis, and between grammatical analysis and semantic analysis, aren't hard and well fixed. Moving them is a trade-off between factors like ease of description, clarity of error messages, and robustness in the presence of errors. Considering the verification of string length, the likelihood of an error occurring is quite high, and the error message, if handled by returning different terminals for different lengths, will probably not be clear. So if it is possible -- that depends on the grammar -- I'd handle it in the semantic analysis phase, where the message can easily be tailored.
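For instance, the check in a semantic action with a tailored message could look like this (a sketch; the message text is illustrative, and YYERROR is yacc's standard way to trigger error recovery from an action):
rule: STRING STRING {
        if (strlen($1) < 5) {
            /* valid: handle the strings as usual */
        } else {
            yyerror("first string too long (at most 4 characters)");
            YYERROR; /* abandon this reduction and run normal error recovery */
        }
    }
;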
If you arrange for your lexical analyzer (yylex()) to store the whole string in some variable, then your code can access it. The communication with the parser proper will be through the normal mechanisms, but there's nothing that says you can't also have another variable lurking around (probably a file static variable - but beware multithreading) that stores the whole input line before it is dissected.
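A minimal sketch of that idea (the buffer names current_line and last_line, their sizes, and the SAVE helper are all illustrative, not part of any standard API):
%{
#include <string.h>
#include "y.tab.h"
static char current_line[1024]; /* raw text of the line being lexed */
char last_line[1024];           /* snapshot the parser can read */
#define SAVE() strncat(current_line, yytext, \
                       sizeof current_line - strlen(current_line) - 1)
%}
%%
XYZ     { SAVE(); return XYZ; }
GGB     { SAVE(); return GGB; }
HDH     { SAVE(); return HDH; }
[0-9]+  { SAVE(); yylval.ival = atoi(yytext); return NUMBER; }
[ \t]   { SAVE(); }
\n      { strcpy(last_line, current_line); /* keep a copy for the parser */
          current_line[0] = '\0';          /* start the next line fresh */
          return EOL; }
Mind the parser's one-token lookahead: an action that fires after EOL has been read may already see the next line accumulating in current_line, which is why the snapshot is taken at the newline.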
As you use yylval.ival, you already have a union with an ival field in your YACC source, like this:
%union {
int ival;
}
Now you specify the token type, like this:
%token <ival> NUMBER
So now you can access the ival field of a NUMBER token simply as $1, $2, etc. in your rules, like
xyz : XYZ NUMBER NUMBER { printf("XYZ %d %d", $2, $3); }
For your second question, I'd define the union like this:
%union {
char* strval;
int ival;
}
and in your YACC source specify the token types:
%token <strval> STRING;
%token <ival> NUMBER;
So now you can do things like
foo : STRING NUMBER { printf("%s (len %d) %d", $1, strlen($1), $2); }
