Checking unfinished comments in flex - C

I am new to flex. I have just written a sample program to detect multi-line comments, and now I want to improve it. I want to detect unfinished and ill-formed comments in the input. For example, a comment beginning with /* but without a closing */ is an unfinished comment, and by an ill-formed comment I mean a comment that is not properly formed, say, one where an EOF appears inside the comment. What do I have to add to my code to check for these things? My sample code is as follows:
%x COMMENT_MULTI_LINE
%{
#include <stdio.h>
#include <stdlib.h>   /* for free() */
#include <string.h>   /* for strndup() */

char* commentStart;
%}
%%
[\n\t\r ]+ {
    /* ignore whitespace */ }
<INITIAL>"/*" {
    commentStart = yytext;   /* points into flex's input buffer */
    BEGIN(COMMENT_MULTI_LINE);
}
<COMMENT_MULTI_LINE>"*/" {
    char* comment = strndup(commentStart, yytext + 2 - commentStart);
    printf("'%s': was a multi-line comment\n", comment);
    free(comment);
    BEGIN(INITIAL);
}
<COMMENT_MULTI_LINE>. {
    /* swallow the comment body */ }
<COMMENT_MULTI_LINE>\n {
    /* swallow newlines inside the comment */ }
%%
int main(int argc, char *argv[]) {
    yylex();
    return 0;
}

The flex manual section on using <<EOF>> is quite helpful, as it has exactly your case as an example, and its code can be copied verbatim into your flex program.
As it explains, when you use <<EOF>> you cannot place it in a normal regular expression pattern. It can only be preceded by the name of a start condition. In your code you are already using a start condition to indicate that you are inside a comment; it is called COMMENT_MULTI_LINE. All you have to do is put that in front of the <<EOF>> marker and give it an action to perform:
<COMMENT_MULTI_LINE><<EOF>> {
    printf("Unterminated Comment: %s\n", yytext);
    yyterminate();
}
The special action function yyterminate() tells flex that you have recognised the <<EOF>> and that it marks the end-of-input for your program.
I have tested this, and it works in your code. (And with multi-line strings also).
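To make that concrete, here is a small standalone sketch of how the EOF rule can slot into a scanner like the one above. The %option yylineno and the commentStartLine variable are my own additions for nicer reporting (they are not in the question or the answer), and the sketch is untested against the exact code above:
%option noyywrap yylineno
%x COMMENT_MULTI_LINE
%{
#include <stdio.h>
int commentStartLine;   /* hypothetical helper: line where the comment opener was seen */
%}
%%
<INITIAL>"/*"               { commentStartLine = yylineno; BEGIN(COMMENT_MULTI_LINE); }
<COMMENT_MULTI_LINE>"*/"    { BEGIN(INITIAL); }
<COMMENT_MULTI_LINE>.|\n    { /* swallow the comment body */ }
<COMMENT_MULTI_LINE><<EOF>> {
    fprintf(stderr, "Unterminated comment starting on line %d\n", commentStartLine);
    yyterminate();
}
.|\n                        { /* pass everything else through silently */ }
%%
int main(void) {
    yylex();
    return 0;
}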

Related

Regular expression in FLEX finding text

I have a lex file with these rules:
%option noyywrap
%{
%}
LNA    [^<>]
LNANA  [^<>!]
%%
(<!!)       fprintf(yyout, "begin_comment\t\t\t%s\n", yytext);
(!!>)       fprintf(yyout, "end_comment\t\t\t%s\n", yytext);
({LNANA}*|({LNA}{LNANA})*|{LNA}+{LNANA}{LNANA}{LNA})    fprintf(yyout, "string\t\t\t%s\n", yytext);
.           fprintf(yyout, "illegal char %s\n", yytext);
%%
I need to find the comments between "<!!" and "!!>", and also the plain strings in the code.
For example:
<!! This is a comment that need to be found !!>
simple string that need to be found also
and this is my output:
As you can see, this does not work as needed. Any help?
I'm not sure exactly what you're trying for.
There's certainly a regular expression which matches an entire comment (as long as you don't intend comments to nest). But it's hard to get it right, and you typically end up splitting strings and returning more tokens than necessary. Here's one which I think works, although it's not fully tested. Since you need to match the entire comment, the pattern has to include the comment delimiters. Of course, you also have to match the strings between the comments, as well as doing something in the case that a comment is not correctly terminated.
"<!!"([^!]*!)([^!]+!)*!+([^!>][^!]*!([^!]+!)*!+)*>   { /* Comment */ }
"<!!"     { /* This pattern will match on unterminated comments */ }
[^<]+     { /* Non-comment text (but maybe not the whole string) */ }
"<"       { /* Also non-comment text */ }
A possibly clearer and probably slower version uses a start condition, and returns both the insides of comments and the rest of the text in single pieces (in yytext, as per the yylex interface).
%x IN_COMMENT
%%
"<!!"                { BEGIN(IN_COMMENT);
                       yytext[yyleng -= 3] = 0;
                       if (yyleng) return STRING;
                     }
  /* This pattern deliberately fails if it reaches the last character of the input */
([^<]+|<)/(.|\n)     { yymore(); }
  /* The next pattern is there to catch the last character in the input */
.|\n                 { return STRING; }
<IN_COMMENT>"!!>"    { BEGIN(INITIAL);
                       yytext[yyleng -= 3] = 0;
                       return COMMENT;
                     }
<IN_COMMENT>[^!]+|!  { yymore(); }
<IN_COMMENT><<EOF>>  { fputs("Unterminated comment\n", stderr);
                       yyterminate();
                     }
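If you want to try that second version on its own, it needs token codes and a small driver around it. Here is a rough, untested sketch of the missing scaffolding; STRING, COMMENT and the main() loop are my own stand-ins for whatever a real parser would define, not part of the answer:
%{
#include <stdio.h>
enum { STRING = 256, COMMENT };   /* hypothetical token codes */
%}
%option noyywrap
%x IN_COMMENT
%%
  /* ... the rules shown in the answer above go here, unchanged ... */
%%
int main(void) {
    int tok;
    while ((tok = yylex()) != 0)
        printf("%s\t%s\n", tok == COMMENT ? "comment" : "string", yytext);
    return 0;
}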

Lex how to emulate modes or a stack of contexts

I'm trying to figure out how to emulate a context/mode or "stack of contexts" in lex (flex).
In particular, I'd like to write a parser that has a notion of string literals that can drop you back into an expression-y context.
I have a simple grammar that supports raw string literals using the syntax '...' and prints a string when it finds one.
However, a string token has potentially unbounded length (up to lex's maximum buffer size which I think is defined in some macro in the generated C source).
I want to define a begin_string token ' and an end_string token ' as well as a distinct token for reading a character while inside a string.
And I want to achieve this by having some notion of a context that says "now I'm in a string" and affects which tokenization rules are "active".
Here's the naive grammar below for context.
%{
#include <stdio.h>
%}
%option noyywrap
%%
'[^']*' { printf("found string literal (( %s ))\n", yytext); }
\n { /* do nothing */ }
. { /* do nothing */ }
%%
int main()
{
    yylex();
    return 0;
}
If I understand your needs correctly, that feature is provided with start conditions. As the manual explains, a start condition is a kind of state, which can be used to enable and disable a set of productions.
For example, you might have:
%option nodefault
%x IN_STRING
%%
  /* Other patterns for regular tokens */
"'"              { BEGIN(IN_STRING); return BEGIN_STRING; }
<IN_STRING>"'"   { BEGIN(INITIAL); return END_STRING; }
<IN_STRING>.|\n  { return STRING_CHAR; }
Flex will optionally enable a feature which allows you to push and pop the current start condition on a stack, but in this simple case that isn't necessary. If you do need to do that, remember to add %option stack to your prolog, and read the description of the API at the end of the Start Conditions chapter of the flex manual.
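For completeness, here is a small self-contained sketch of the stacked variant (my own illustration, not part of the answer): with %option stack, flex generates yy_push_state() and yy_pop_state(), and the token codes below are hypothetical stand-ins for what a parser would normally define.
%{
#include <stdio.h>
enum { BEGIN_STRING = 256, END_STRING, STRING_CHAR };   /* hypothetical token codes */
%}
%option noyywrap stack
%x IN_STRING
%%
"'"              { yy_push_state(IN_STRING); return BEGIN_STRING; }
<IN_STRING>"'"   { yy_pop_state(); return END_STRING; }
<IN_STRING>.|\n  { return STRING_CHAR; }
.|\n             { /* everything outside strings is ignored in this sketch */ }
%%
int main(void) {
    int tok;
    while ((tok = yylex()) != 0)
        printf("token %d: '%s'\n", tok, yytext);
    return 0;
}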

Is there an option for flex to match whole words only?

I'm writing a lexer and I'm using Flex to generate it based on custom rules.
I want to match identifiers of sorts that start with a letter and then can have either letters or numbers. So I wrote the following pattern for them:
[[:alpha:]][[:alnum:]]*
It works fine, the lexer that gets generated recognizes the pattern perfectly, although it doesn't only match whole words but all appearances of that pattern.
So for example it would match the input "Text" and "9Text" (discarding that initial 9).
Consider the following simple lexer that accepts IDs as described above:
%{
#include <stdio.h>
#define LINE_END 1
#define ID 2
%}
/* Flex options: */
%option noinput
%option nounput
%option noyywrap
%option yylineno
/* Definitions: */
WHITESPACE [ \t]
BLANK {WHITESPACE}+
NEW_LINE "\n"|"\r\n"
ID [[:alpha:]][[:alnum:]_]*
%%
{NEW_LINE} {printf("New line.\n"); return LINE_END;}
{BLANK} {/* Blanks are skipped */}
{ID} {printf("ID recognized: '%s'\n", yytext); return ID;}
. {fprintf(stderr, "ERROR: Invalid input in line %d: \"%s\"\n", yylineno, yytext);}
%%
int main(int argc, char **argv) {
    while (yylex() != 0);
    return 0;
}
When compiled and fed the following input, it produces the output below:
Input:
Test
9Test
Output:
Test
ID recognized: 'Test'
New line.
9Test
ERROR: Invalid input in line 2: "9"
ID recognized: 'Test'
New line.
Is there a way to make flex match only whole words (i.e. delimited by either blanks or custom delimiters like '(' ')' for example)?
Because I could write a rule that excludes IDs that start with numbers, but what about the ones that start with symbols like "$Test" or "&Test"? I don't think I can enumerate all of the possible symbols.
Following the example above, the desired output would be:
Test
ID recognized: 'Test'
New line.
9Test
ERROR: Invalid input in line 2: "9Test"
New line.
You seem to be asking two questions at once.
'Whole word' isn't a recognized construct in programming languages. The lexical grammar and the syntax are already defined. Just implement them.
The best way to handle illegal or unexpected characters in flex is not to handle them specially at all. Return them to the parser, just as you would for a special character. Then the parser can deal with it and attempt recovery via discarding.
Place this as your final rule:
. return yytext[0];
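As a small illustration of that idea (the main() loop below is just a stand-in for the parser, and the ID token code is a hypothetical value of the kind bison would normally generate), the scanner returns anything it does not recognise as a single-character token and lets the caller decide what to do:
%{
#include <stdio.h>
#define ID 258   /* hypothetical token code */
%}
%option noyywrap yylineno
%%
[ \t\r\n]+               { /* skip whitespace */ }
[[:alpha:]][[:alnum:]]*  { return ID; }
.                        { return yytext[0]; /* hand unknown characters to the caller */ }
%%
int main(void) {
    int tok;
    while ((tok = yylex()) != 0) {
        if (tok == ID)
            printf("ID: %s\n", yytext);
        else
            fprintf(stderr, "line %d: unexpected character '%c'\n", yylineno, tok);
    }
    return 0;
}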
You can use this. Let's say you want to identify the reserved word for:
([\r\n\z]|" "|"")+"for"/([\r\n\z]|" ")+ {}
That is:
any newline character or, more generally, a control character: [\r\n\z]
or a white space: " "
or the beginning of the line: ""
repeated at least one time: +
then the word you want, in quotes: "for"
followed only by (that is what the / means)
almost the same expression, without the "", at least one time: ([\r\n\z]|" ")+
With this pattern you can form your own matching rule for whatever needs to appear before and after the word.
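For what it's worth, here is a small, compilable variant of the trailing-context idea (my own sketch, not the exact pattern above): the keyword rule only fires when "for" is followed by a non-identifier character, and the identifier rule below it picks up words that merely contain "for". One caveat: a bare "for" at the very end of the input, with no trailing newline, falls through to the identifier rule because the trailing context needs one more character.
%option noyywrap
%%
"for"/[^[:alnum:]_]        { printf("keyword: for\n"); }
[[:alpha:]_][[:alnum:]_]*  { printf("identifier: %s\n", yytext); }
[ \t\r\n]+                 { /* skip whitespace */ }
.                          { printf("other: %s\n", yytext); }
%%
int main(void) {
    yylex();
    return 0;
}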
I'm not sure if this is the best answer, but this works for me.
%x ERROR
%%
{NEW_LINE} {
    printf("New line.\n");
    return LINE_END;
}
<INITIAL,ERROR>{BLANK} {
    BEGIN(INITIAL);
}
{ID} {
    printf("ID recognized: '%s'\n", yytext);
    return ID;
}
<INITIAL,ERROR>. {
    fprintf(stderr, "ERROR: Invalid input in line %d: \"%s\"\n", yylineno, yytext);
    BEGIN(ERROR);
}
%%
Read the section on start conditions in the flex manual to learn more.
(My attempt at explaining what I've done.)
Whenever this lexer hits something unexpected, it switches into the exclusive ERROR start condition, so only the rules tagged with ERROR remain active. To get out of the error state, the lexer has to hit a 'blank'.

Checking for a blank line in C - Regex

Goal:
Find out whether a string contains a blank line: it could be '\n\n', '\r\n\r\n', '\r\n\n', or '\n\r\n'.
Issues:
I don't think my current regex for finding '\n\n' is right. This is my first time really using regex outside of the simple use of * when removing files on the command line.
Is it possible to check for all of these cases (listed above) in one regex, or do I have to do 4 separate calls to compile_regex?
Code:
int checkForBlankLine(char *reader) {
    regex_t r;
    compile_regex(&r, "*\n\n");
    match_regex(&r, reader);
    return 0;
}

void compile_regex(regex_t *r, char *matchText) {
    int status;
    regcomp(r, matchText, 0);
}

int match_regex(regex_t *r, char *reader) {
    regmatch_t match[1];
    int nomatch = regexec(r, reader, 1, match, 0);
    if (nomatch) {
        printf("No matches.\n");
    } else {
        printf("MATCH!\n");
    }
    return 0;
}
Notes:
I only need to worry about finding one blank line; that's why my regmatch_t match[1] is only one item long.
reader is the char array containing the text I am checking for a blank line.
I have seen other examples and tried to base the code off of those examples, but I still seem to be missing something.
Thank you kindly for the help/advice.
If anything needs to be clarified please let me know.
It seems that you have to compile the regex as extended:
regcomp(&re, "\r?\n\r?\n", REG_EXTENDED);
The first atom, \r? is probably unnecessary, because it doesn't add to the blank-line condition if you don't capture the result.
In the above, blank line really means empty line. If you want blank line to mean a line that has no characters except for white space, you can use:
regcomp(&re, "\r?\n[ \t]*\r?\n", REG_EXTENDED);
(I don't think you can use the space character pattern, \s here instead of [ \t], because that would include carriage return and new-line.)
As others have already hinted, the "simple use of * in the command line" is not a regular expression. This wildcard matching is called file globbing and has different semantics.
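Putting that together with the checkForBlankLine() helper from the question, a minimal self-contained sketch (simplified error handling; the test strings are just illustrations) could look like this:
#include <regex.h>
#include <stdio.h>

/* Returns 1 if the text contains a blank line (a line that is empty or holds only spaces/tabs). */
int checkForBlankLine(const char *reader) {
    regex_t r;
    int found = 0;
    if (regcomp(&r, "\r?\n[ \t]*\r?\n", REG_EXTENDED) != 0)
        return 0;   /* compilation failed; report no match */
    if (regexec(&r, reader, 0, NULL, 0) == 0)
        found = 1;
    regfree(&r);
    return found;
}

int main(void) {
    printf("%d\n", checkForBlankLine("line one\n\nline two\n"));   /* prints 1 */
    printf("%d\n", checkForBlankLine("no blank line here\n"));     /* prints 0 */
    return 0;
}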
Check what the * in a regex means. It's not like the "anything" wildcard on the command line. The * means that the previous component can appear any number of times. The wildcard in regex is the '.', so if you want to match anything you can write .*, which means anything, any number of times.
So in your case you can do .*\n\n.* which would match anything that has \n\n.
Finally, you can use | ("or") in a regex and ( ) to group things. So you can write something like .*(\n\n|\r\n\r\n).*, and that would match anything that contains a \n\n or a \r\n\r\n.
Hope that helps.
Rather than looking for only \r or \n, you could look for anything that is not \r or \n.
Your regex would simply be
'[^\r\n]'
and a match result of false indicates a blank line to your specification.
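A short sketch of that inverted check applied to a single line (again POSIX regex; the lineIsBlank() helper name is mine): if regexec() reports REG_NOMATCH for [^\r\n], the line holds nothing but line terminators, i.e. it is blank in that sense.
#include <regex.h>
#include <stdio.h>

/* Returns 1 if 'line' contains no characters other than \r and \n. */
int lineIsBlank(const char *line) {
    regex_t r;
    int blank;
    regcomp(&r, "[^\r\n]", 0);
    blank = (regexec(&r, line, 0, NULL, 0) == REG_NOMATCH);
    regfree(&r);
    return blank;
}

int main(void) {
    printf("%d %d\n", lineIsBlank("\r\n"), lineIsBlank("text\r\n"));   /* prints: 1 0 */
    return 0;
}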

Error Handling with Flex(lex) and Bison(yacc)

From the Bison Manual:
In a simple interactive command parser where each input is one line, it may be sufficient to allow yyparse to return 1 on error and have the caller ignore the rest of the input line when that happens (and then call yyparse again).
This is pretty much what I want, but I am having trouble getting it to work. Basically, I want to detect an error in flex, and if an error is detected, have Bison discard the entire line. What I have right now isn't working quite right, because my commands still get executed:
kbsh: ls '/home
Error: Unterminated Single Quote
admin kbrandt tempuser
syntax error
kbsh:
In my Bison file:
commands:
/*Empty*/ { prompt(); } |
command { prompt(); }
;
command:
error {return 1; } |
chdir_command |
pwd_command |
exit_command |
WORD arg_list {
execute_command($1, $2);
//printf("%s, %s\n", $1, $2);
} |
WORD { execute_command($1, NULL); }
;
And in my Flex:
' {BEGIN inQuote; }
<inQuote>\n {printf("Error: Unterminated Single Quote\n"); BEGIN(0); return(ERROR);}
I don't think you'll find a simple solution to handling these types of parsing errors in the lexer.
I would keep the lexer (flex/lex) as dumb as possible; it should just provide a stream of basic tokens (identifiers, keywords, etc...) and have the parser (yacc/bison) do the error detection. In fact it is set up for exactly what you want, with a little restructuring of your approach...
In the lexer (parser.l), keep it simple (no eol/newline handling), something like this (it isn't the full thing):
%}
/* I don't recall if the backslashify is required below */
SINGLE_QUOTE_STRING  \'.*\'
DOUBLE_QUOTE_STRING  \".*\"
%%
{SINGLE_QUOTE_STRING}  {
    yylval.charstr = copy_to_tmp_buffer(yytext);  // implies a %union
    return STRING;
}
{DOUBLE_QUOTE_STRING}  {
    yylval.charstr = copy_to_tmp_buffer(yytext);  // implies a %union
    return STRING;
}
\n                     return NEWLINE;
Then in your parser.y file, do all the real handling (again, this isn't the full thing):
command:
      error NEWLINE
        { yyclearin; yyerrok; print_the_next_command_prompt(); }
    | chdir_command STRING NEWLINE
        { do_the_chdir($<charstr>2); print_the_next_command_prompt(); }
    | ... and so on ...
There are two things to note here:
The shift of things like NEWLINE to the yacc side, so that you can determine when the user is done with the command; then you can clear things out and start over (assuming you have "int yywrap() { return 1; }" somewhere). If you try to detect it too early in flex, how do you know when to raise an error?
chdir isn't one command (unless it was sub-ruled and you just didn't show it); it is now chdir_command STRING (the argument to the chdir). This makes it so that the parser can figure out what went wrong, and you can then call yyerror if that directory doesn't exist, etc...
This way you should get something like (guessing what chdir might look like):
cd 'some_directory
syntax error
cd 'some_directory'
you are in the some_directory dude!
And it is all handled by the yacc grammar, not by the tokenizer.
I have found that keeping flex as simple as possible gives you the most ***flex***ibility. :)
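Tying back to the Bison manual quote at the top of the question, here is a hedged sketch of the caller side it describes (prompt() is the question's own helper; everything else is illustrative): yyparse() returns nonzero when the error rule bails out, and the caller simply calls it again.
int main(void) {
    prompt();
    /* A nonzero return means a parse error was hit; call yyparse() again to
       carry on with the next command.  A return of 0 means the input was
       accepted (typically at end of input), which ends the loop. */
    while (yyparse() != 0)
        ;   /* nothing extra to do in this sketch */
    return 0;
}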
