Why is buffering used in lexical analysis, and what is the best value for EOF?
EOF is typically defined as (-1).
In my time I have made quite a number of parsers using lex/yacc, flex/bison and even a hand-written lexical analyser and a LL(1) parser. 'Buffering' is rather vague and could mean multiple things (input characters or output tokens) but I can imagine that the lexical analyzer has an input buffer where it can look ahead. When analyzing 'for (foo=0;foo<10;foo++)', the token for the keyword 'for' is produced once the space following it is seen. The token for the first identifier 'foo' is produced once it sees the character '='. It will want to pass the name of the identifier to the parser and therefore needs a buffer so the word 'foo' is still somewhere in memory when the token is produced.
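To make that concrete, here is a minimal hand-rolled sketch of such a lookahead buffer: it accumulates identifier characters into a small array and pushes the delimiter back with ungetc() so the next read can still see it. The function and buffer names are illustrative, not taken from any particular lexer.

#include <ctype.h>
#include <stdio.h>

/* Reads one identifier from stdin into buf; the character that ended it
 * (e.g. '=' or '(') is pushed back so it remains available as lookahead. */
static int read_identifier(char *buf, size_t size)
{
    size_t len = 0;
    int c = getchar();

    if (c == EOF || !isalpha(c))
        return 0;                      /* not the start of an identifier */

    while (c != EOF && (isalnum(c) || c == '_')) {
        if (len + 1 < size)
            buf[len++] = (char)c;      /* keep the text in memory */
        c = getchar();
    }
    buf[len] = '\0';

    if (c != EOF)
        ungetc(c, stdin);              /* delimiter stays in the input stream */
    return 1;
}

int main(void)
{
    char name[64];
    int c;

    while ((c = getchar()) != EOF) {
        if (isalpha(c)) {
            ungetc(c, stdin);          /* hand the first letter back to the reader */
            if (read_identifier(name, sizeof name))
                printf("identifier: %s\n", name);
        }                              /* other characters are simply skipped here */
    }
    return 0;
}

Fed the text "for (foo=0;foo<10;foo++)", this prints "for" once and "foo" three times, and the word is still in memory at the moment each token is produced.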
Speed of lexical analysis is a concern.
Also, the lexer needs to check several characters ahead of the current one in order to find a match.
A lexical analyzer scans the input string character by character, from left to right, and those input characters are read from the hard disk or other secondary storage. Reading them one at a time can require a lot of system calls, depending on the size of the program, and can make the system slow. That's why we use an input buffering technique.
An input buffer is a location that holds all the incoming information before it continues to the CPU for processing.
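A minimal sketch of the technique, assuming a fixed-size buffer that is refilled with one POSIX read() call per block instead of one system call per character (the buffer size and function names are just for illustration):

#include <stdio.h>
#include <unistd.h>

#define BUF_SIZE 4096

static char buf[BUF_SIZE];
static ssize_t buf_len = 0;   /* bytes currently held in the buffer */
static ssize_t buf_pos = 0;   /* index of the next unread byte      */

/* Returns the next source character, refilling the buffer only when it
 * runs dry, so one read() serves up to BUF_SIZE character requests. */
static int next_char(int fd)
{
    if (buf_pos >= buf_len) {
        buf_len = read(fd, buf, BUF_SIZE);
        buf_pos = 0;
        if (buf_len <= 0)
            return EOF;       /* end of input or read error */
    }
    return (unsigned char)buf[buf_pos++];
}

int main(void)
{
    int c, count = 0;
    while ((c = next_char(0)) != EOF)   /* fd 0 = standard input */
        count++;
    printf("%d characters read\n", count);
    return 0;
}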
You can find more information here:
https://www.geeksforgeeks.org/input-buffering-in-compiler-design/
For example, when I do scanf("%s",arg); : the terminal allows me to input text until a newline is encountered, but it only stores up to the first space character in the arg variable. The rest remains in the buffer.
scanf("%c", arg); : In this case also it allows me to enter text into the terminal till I give a newline character, but only one is stored in arg while the rest remains in buffer.
scanf("%[^P]", arg); : In this case, I can enter text into the terminal even after giving it a newline character until I hit a line with 'P' in it and press enter key (newline character) and then transfers everything to the input buffer.
How is it determined how much data from the input stream is to be transferred to the input buffer at a time?
Assuming that arg is of the proper type.
My understanding seems to be fundamentally wrong here. If someone can please explain this stuff, I will be very grateful.
How is it determined? It's determined by the format string itself.
The scanf function will read items until they no longer match the format specifier given. Then it stops, leaving the first "non-compliant" character still in the buffer.
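A small example of that behaviour, using only the standard %d and %c specifiers: the character that stops %d stays in the buffer and is picked up by the next conversion.

#include <stdio.h>

int main(void)
{
    int n;
    char c;

    /* For the input "42abc\n":
     * %d consumes "42" and stops at 'a', which stays in the buffer. */
    if (scanf("%d", &n) == 1)
        printf("number: %d\n", n);

    /* %c then picks up exactly that leftover 'a'. */
    if (scanf("%c", &c) == 1)
        printf("next char in buffer: '%c'\n", c);

    return 0;
}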
If you mean "how is it handled under the covers?", that's a different issue.
My first response to that is "it doesn't matter". The ISO standard mandates how the language works, and it describes a "virtual machine" capable of doing that. Provided you follow the rules of the machine, you don't need to worry about how things happen under the covers.
My second answer is probably more satisfying but is very implementation dependent.
For efficiency, the underlying software will probably not deliver any data to the implementation until it has a full line (though this of course is likely to be configurable, such as setting raw mode for the terminal). That means things like backspace may change the characters already entered rather than being inserted into the stream.
It may (such as with the GNU readline() library) allow all sorts of really fancy editing on the line before delivering the characters. There's nothing to stop the underlying software from even opening up a vim session to allow you to enter data, and only delivering it once you exit :-)
The buffer and primitive editing features are provided by the operating system.
If you can set the terminal into "raw mode" you will see different behavior.
For example, characters may be available to read before Enter is pressed, especially if the buffer can also be disabled.
I think it is not a question of how much, but rather of what the format specifier says.
As per C99, chapter 7.19.6.2, paragraph 2 (for fscanf()):
The fscanf function reads input from the stream pointed to by stream, under control
of the string pointed to by format that specifies the admissible input sequences and how
they are to be converted for assignment, using subsequent arguments as pointers to the
objects to receive the converted input.
And for the format specifiers, you need to refer to paragraph 12.
How can I modify my lex or yacc files to echo the input to an output file? I read the statements from a file; I want to add an invariant for certain statements, insert it into the echoed output, and then continue with the remaining statements. For example, I read this file:
char mem(d);
int fun(a,b);
char a ;
The output should be like:
char mem(d);
int fun(a,b);
invariant(a>b) ;
char a;
I can't get this to work; I can only write the new statements to the output file.
It's useful to understand why this is a non-trivial question.
The goal is to
Copy the entire input to the output; and
Insert some extra information produced while parsing.
The problem is that the first of those needs to be done by the scanner (lexer), because the scanner doesn't usually pass every character through to the parser. It usually drops whitespace and comments, at least. And it may do other things, like convert numbers to their binary representation, losing the original textual representation.
But the second one obviously needs to be done by the parser. And here is the problem: the parser is (almost) always one token behind the scanner, because it needs the lookahead token to decide whether or not to reduce. Consequently, by the time a reduction action gets executed, the scanner will already have processed all the input data up to the end of the next token. If the scanner is echoing input to output, the place where the parser wants to insert data has already been output.
Two approaches suggest themselves.
First, the scanner could pass all of the input to the parser, by attaching extra data to every token. (For example, it could attach all whitespace and comments to the following token.) That's often used for syntax coloring and reformatting applications, but it can be awkward to get the tokens output in the right order, since reduction actions are effectively executed in a post-order walk.
Second, the scanner could just remember where every token is in the input file, and the parser could attach notes (such as additional output) to token locations. Then the input file could be read again and merged with the notes. Unfortunately, that requires that the input be rewindable, which would preclude parsing from a pipe, for example; a more general solution would be to copy the input into a temporary file, or even just keep it in memory if you don't expect it to be too huge.
Since you can already output your own statements, your problem is how to write out the input as it is being read in. In lex, the value of each token being read is available in the variable yytext, so just write it out for every token you read. Depending on how your lexer is written, this could be used to echo whitespace as well.
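As a rough sketch (not your actual grammar), a flex specification along those lines might look like the following; the outfile stream and the token codes IDENT and NUMBER are assumptions made up for the example:

%{
/* Sketch of a flex scanner that copies everything it reads to `outfile`
 * while still returning tokens to the parser. */
#include <stdio.h>
extern FILE *outfile;
enum { IDENT = 258, NUMBER };
%}

%option noyywrap

%%
[A-Za-z_][A-Za-z0-9_]*   { fputs(yytext, outfile); return IDENT;  }
[0-9]+                   { fputs(yytext, outfile); return NUMBER; }
[ \t\n]+                 { fputs(yytext, outfile); /* keep whitespace, no token */ }
.                        { fputs(yytext, outfile); return yytext[0]; }
%%

FILE *outfile;

int main(void)
{
    outfile = stdout;          /* for a quick test, echo everything to stdout */
    while (yylex() != 0)
        ;                      /* a real parser would consume these tokens instead */
    return 0;
}

Because every rule writes yytext before returning, the output is an exact copy of the input, and the parser can still insert its own lines (such as the invariant) at the points it chooses, subject to the lookahead caveat described above.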
I have a homework assignment to do for my school. The goal is to create a really basic virtual machine as well as a simple assembler. I had no problem creating the virtual machine, but I can't think of a 'nice' way to create the assembler.
The grammar of this assembler is really basic: an optional label followed by a colon, then a mnemonic followed by 1, 2 or 3 operands. If there is more than one operand, they shall be separated by commas. Also, whitespace is ignored as long as it doesn't occur in the middle of a word.
I'm sure I can do this with strtok() and some black magic, but I'd prefer to do it in a 'clean' way. I've heard about Parse Trees/AST, but I don't know how to translate my assembly code into these kinds of structures.
I wrote an assembler like this when I was a teenager. You don't need a complicated parser at all.
All you need to do is perform five steps for each line (a rough sketch follows the list):
Tokenize (i.e. split the line into tokens). This will give you an array of tokens and then you don't need to worry about the whitespace, because you will have removed it during tokenization.
Initialize some variables representing parts of the line to NULL.
A sequence of if statements to walk over the token array and check which parts of the line are present. If they are present put the token (or a processed version of it) in the corresponding variable, otherwise leave that variable as NULL (i.e. do nothing).
Report any syntax errors (i.e. combinations of types of tokens that are not allowed).
Code generation - I guess you know how to do this part!
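Here is the rough sketch of steps 1-3 above for a single line, using strtok(); the field names, the delimiter sets and the three-operand limit are assumptions based on the grammar described in the question:

#include <stdio.h>
#include <string.h>

/* Split one assembly line into label / mnemonic / up to 3 operands.
 * Returns the number of operands found, or -1 on a syntax error. */
static int split_line(char *line, char **label, char **mnemonic, char *operands[3])
{
    char *tok;
    int nops = 0;

    *label = NULL;
    *mnemonic = NULL;
    operands[0] = operands[1] = operands[2] = NULL;

    tok = strtok(line, " \t\n");
    if (tok == NULL)
        return 0;                         /* blank line */

    if (tok[strlen(tok) - 1] == ':') {    /* "loop:" -> label */
        tok[strlen(tok) - 1] = '\0';
        *label = tok;
        tok = strtok(NULL, " \t\n");
    }

    if (tok != NULL) {
        *mnemonic = tok;
        /* commas and whitespace both act as separators here */
        while (nops < 3 && (tok = strtok(NULL, ", \t\n")) != NULL)
            operands[nops++] = tok;
        if (strtok(NULL, ", \t\n") != NULL)
            return -1;                    /* more than 3 operands: syntax error */
    }
    return nops;
}

int main(void)
{
    char line[] = "loop: add r1, r2, r3";
    char *label, *mnemonic, *ops[3];
    int n = split_line(line, &label, &mnemonic, ops);

    printf("label=%s mnemonic=%s operands=%d\n",
           label ? label : "(none)", mnemonic ? mnemonic : "(none)", n);
    return 0;
}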
What you're looking for is actually lexical analysis, parsing and finally the generation of the compiled code. There are a lot of frameworks out there that help with creating/generating a parser, like Gold Parser or ANTLR. Creating a language definition (and learning how to, depending on the framework you use) is most often quite a lot of work.
I think you're best off implementing the shunting-yard algorithm, which converts your source into a representation that is easy for your virtual machine to understand.
I also want to say that diving into parsers, abstract syntax trees, all the tools available on the web and reading a lot of papers about this subject is a really good learning experience!
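For reference, a minimal sketch of the shunting-yard algorithm mentioned above, limited to single-digit operands and the four basic operators; it only prints the postfix form, so it is illustrative rather than something to drop straight into an assembler:

#include <ctype.h>
#include <stdio.h>

/* Shunting-yard sketch: converts an infix expression with single-digit
 * numbers and + - * / ( ) into postfix (RPN). */
static int prec(char op) { return (op == '+' || op == '-') ? 1 : 2; }

static void to_rpn(const char *s)
{
    char stack[64];
    int top = -1;

    for (; *s; s++) {
        char c = *s;
        if (isdigit((unsigned char)c)) {
            putchar(c); putchar(' ');
        } else if (c == '(') {
            stack[++top] = c;
        } else if (c == ')') {
            while (top >= 0 && stack[top] != '(') {
                putchar(stack[top--]); putchar(' ');
            }
            if (top >= 0) top--;               /* discard the '(' */
        } else if (c == '+' || c == '-' || c == '*' || c == '/') {
            while (top >= 0 && stack[top] != '(' && prec(stack[top]) >= prec(c)) {
                putchar(stack[top--]); putchar(' ');
            }
            stack[++top] = c;
        }                                       /* anything else is ignored */
    }
    while (top >= 0) { putchar(stack[top--]); putchar(' '); }
    putchar('\n');
}

int main(void)
{
    to_rpn("3+4*(2-1)");   /* prints: 3 4 2 1 - * + */
    return 0;
}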
You can take a look at some already-made assemblers, like PASMO, an assembler for the Z80 CPU, and get ideas from it. Here it is:
http://pasmo.speccy.org/
I've written a couple of very simple assemblers, both of them using string manipulation with strtok() and the like. For a grammar as simple as assembly language, that's enough. Key pieces of my assemblers are:
A symbol table: just an array of structs, with the name of a symbol and its value.
#include <stdint.h>

typedef uint8_t u8;           /* assuming u8 is an 8-bit unsigned type */

typedef struct
{
    char nombre[256];         /* symbol name */
    u8 valor;                 /* symbol value (address or EQU value) */
} TSymbol;

TSymbol tablasim[MAXTABLA];   /* the symbol table (MAXTABLA defined elsewhere) */
int maxsim = 0;               /* number of entries currently in use */
A symbol is just a name that has a value associated with it. This value can be the current position (the address where the next instruction will be assembled), or it can be an explicit value assigned by the EQU pseudoinstruction.
Symbol names in this implementation are limited to 255 characters each, and one source file is limited to MAXTABLA symbols.
I perform two passes to the source code:
The first one is to identify symbols and store them in the symbol table, detecting whether they are followed by an EQU instruction or not. If there is one, the value next to EQU is parsed and assigned to the symbol. Otherwise, the value of the current position is assigned. To update the current position I have to detect whether there is a valid instruction (although I do not assemble it yet) and update it accordingly (this is easy for me because my CPU has a fixed instruction size).
Here you have a sample of my code that is in charge of updating the symbol table with a value from EQU or the current position, and advancing the current position if needed.
case 1:
    /* We have just read a symbol name; decide whether it is an EQU
       definition or a label for the current position. */
    if (es_equ (token))
    {
        token = strtok (NULL, "\n");
        tablasim[maxsim].valor = parse_numero (token, &err);
        if (err)
        {
            if (err==1)
                fprintf (stderr, "Error de sintaxis en linea %d\n", nlinea);   /* syntax error on line */
            else if (err==2)
                fprintf (stderr, "Simbolo [%s] no encontrado en linea %d\n", token, nlinea);   /* symbol not found on line */
            estado = 2;
        }
        else
        {
            maxsim++;
            token = NULL;
            estado = 0;
        }
    }
    else
    {
        /* Not an EQU: the symbol gets the current position, and the
           position advances if the rest of the line is an instruction. */
        tablasim[maxsim].valor = pcounter;
        maxsim++;
        if (es_instruccion (token))
            pcounter++;
        token = NULL;
        estado = 0;
    }
    break;
The second pass is where I actually assemble instructions, replacing a symbol with its value when I find one. It's rather simple, using strtok() to split a line into its components, and using strncasecmp() to compare what I find with the instruction mnemonics.
If the operands can be expressions, like "1 << (x + 5)", you will need to write a parser. If not, the parsing is so simple that you do not need to think in those terms. For each line, get the first string (skipping whitespace). Does the string end with a colon? Then it is a label; otherwise it is the mnemonic. And so on.
For an assembler there's little need to build an explicit parse tree. Some assemblers do have fancy linkers capable of resolving complicated expressions at link time, but for a basic assembler an ad-hoc lexer and parser should do fine.
In essence you write a little lexer which consumes the input file character-by-character and classifies everything into simple tokens, e.g. numbers, labels, opcodes and special characters.
I'd suggest writing a BNF grammar even if you're not using a code generator. This specification may then be translated into a recursive-descent parser almost by rote. The parser simply walks through the whole code and emits assembled binary code along the way.
A symbol table registering every label and its value is also needed, traditionally implemented as a hash table. Initially, when encountering an unknown label (say, for a forward branch), you may not yet know its value; it is simply filed away for future reference.
The trick is then to spit out dummy values for labels and expressions the first time around but compute the label addresses as the program counter is incremented, then take a second pass through the entire file to fill in the real values.
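A compressed sketch of that two-pass idea, assuming the lines have already been split into an optional label, a mnemonic and an optional operand, and that every instruction occupies exactly one word; a linear lookup stands in for the hash table mentioned above, and all names are invented:

#include <stdio.h>
#include <string.h>

/* A toy "parsed line": optional label plus an optional operand that may
 * itself be a label reference. */
struct line { const char *label; const char *mnemonic; const char *operand; };

#define MAX_SYMS 64
static struct { char name[32]; int addr; } syms[MAX_SYMS];
static int nsyms = 0;

static void define_label(const char *name, int addr)
{
    /* no bounds or duplicate checks, for brevity */
    strncpy(syms[nsyms].name, name, sizeof syms[nsyms].name - 1);
    syms[nsyms].name[sizeof syms[nsyms].name - 1] = '\0';
    syms[nsyms].addr = addr;
    nsyms++;
}

static int lookup_label(const char *name)
{
    for (int i = 0; i < nsyms; i++)
        if (strcmp(syms[i].name, name) == 0)
            return syms[i].addr;
    return -1;                      /* never defined: report as an error */
}

int main(void)
{
    struct line prog[] = {
        { NULL,    "jmp", "end" },   /* forward reference */
        { "start", "nop", NULL  },
        { "end",   "hlt", NULL  },
    };
    int n = 3;

    /* Pass 1: only record label addresses (one word per instruction). */
    for (int pc = 0; pc < n; pc++)
        if (prog[pc].label)
            define_label(prog[pc].label, pc);

    /* Pass 2: emit; every label, even a forward one, now has a value. */
    for (int pc = 0; pc < n; pc++) {
        printf("%04d  %s", pc, prog[pc].mnemonic);
        if (prog[pc].operand)
            printf(" %d", lookup_label(prog[pc].operand));
        printf("\n");
    }
    return 0;
}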
For a simple assembler, e.g. no linker or macro facilities and a simple instruction set, you can get by with perhaps a thousand or so lines of code. Much of it is brainless, thought-free hand translation from syntax descriptions and opcode tables.
Oh, and I strongly recommend that you check out the dragon book from your local university library as soon as possible.
At least in my experience, normal lexer/parser generators (e.g., flex, bison/byacc) are all but useless for this task.
When I've done it, nearly the entire thing has been heavily table driven -- typically one table of mnemonics, and for each of those a set of indices into a table of instruction formats, specifying which formats are possible for that instruction. Depending on the situation, it can make sense to do that on a per-operand rather than a per-instruction basis (e.g., for mov instructions that have a fairly large set of possible formats for both the source and the destination).
In a typical case, you'll have to look at the format(s) of the operand(s) to determine the instruction format for a particular instruction. For a fairly typical example, a format of #x might indicate an immediate value, x a direct address, and @x an indirect address. Another common form for an indirect address is (x) or [x], but for your first assembler I'd try to stick to a format that specifies instruction format/addressing mode based only on the first character of the operand, if possible.
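A small sketch of what such tables might look like: mnemonics with opcodes and a bitmask of allowed addressing modes, with the operand format decided by its first character. Every mnemonic, opcode value and mode encoding here is invented for illustration:

#include <stdio.h>
#include <string.h>

/* Allowed addressing modes, encoded as bits so each mnemonic can list
 * every format it accepts. */
enum { MODE_IMM = 1 << 0,    /* #x  immediate        */
       MODE_DIR = 1 << 1,    /* x   direct address   */
       MODE_IND = 1 << 2 };  /* @x  indirect address */

struct insn {
    const char *mnemonic;
    unsigned char opcode;
    unsigned modes;          /* bitmask of formats this instruction allows */
};

static const struct insn table[] = {
    { "lda", 0x10, MODE_IMM | MODE_DIR | MODE_IND },
    { "sta", 0x20, MODE_DIR | MODE_IND },
    { "jmp", 0x30, MODE_DIR },
};

/* Classify an operand by its first character, as suggested above. */
static unsigned operand_mode(const char *op)
{
    if (op[0] == '#') return MODE_IMM;
    if (op[0] == '@') return MODE_IND;
    return MODE_DIR;
}

static const struct insn *find_insn(const char *name)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, name) == 0)
            return &table[i];
    return NULL;
}

int main(void)
{
    const struct insn *in = find_insn("lda");
    unsigned mode = operand_mode("#42");

    if (in && (in->modes & mode))
        printf("opcode 0x%02x, mode %u: ok\n", in->opcode, mode);
    else
        printf("invalid mnemonic or addressing mode\n");
    return 0;
}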
Parsing labels is simpler, and (mostly) separate. Basically, each label is just a name with an address.
As an aside, if possible I'd probably follow the typical format of a label ending with a colon (":") instead of a semicolon (";"). Much more often, a semicolon will mark the beginning of a comment.
I am trying to hand code a tokenizer. I keep on reading the characters which can be part of a token. For example an integer can only contain digits. So in the below text I keep on reading the characters until I find a non-digit character. So I get 123 as the token. Next I get ( as the token, and then abc as identifier. This is fine as ( is a delimiter.
123(abc
However, in the below text I get 123 as integer and then abc as identifier. But actually this is not valid, since there is no delimiter between them.
123abc(
Should the tokenizer check for delimiters and report an error? If yes what tokens should be returned and where should the tokenizer continue reading from after an invalid token is found?
Or should the tokenizer simply return 123 as integer and abc as identifier and let the parser detect the errors?
Usually, the tokenizer (or lexer) performs no checking of valid syntax.
The role of a lexer is to split the input into tokens, which can then be transformed into a syntax tree by the parser. Therefore, it'd usually be the job of the parser to perform such a check.
This is somewhat of a gray area, but most hand-coded lexers just do the tokenizing, and let the parser decide whether the stream of tokens makes sense.
If "123abc" is an invalid token then you should handle it as soon as you spot it since it's directly related to the way tokens are defined, not how they interact with each other (which would be the lexer's job). It's an orthographic error rather than a grammar-related one.
There are multiple ways to go about it:
You could abort the parsing and just throw some exception, leaving the caller with no tokens or just the tokens you had successfully parsed until then. This will save you any "recovery" logic and might be enough for your use case. Although, if you're parsing stuff for syntax highlighting for instance, this would probably not be sufficient as you don't want all of the remaining code to look unparsed.
Example: A conforming XML parser could use this for fatal errors if there's no need to handle malformed markup, just spit out a basic error and quit.
Alternatively, you could insert an "error" token with proper metadata about the nature of the error and skip ahead to the next valid token.
You might need to have heuristics in your lexer to handle the error token gracefully and decide how to interpret further tokens when an error token is found inside a nested expression (like, should you consider the expression has ended? look for a closing token? etc.).
Anyway, this approach will allow for error tokens to be used to display precise info about the location and nature of errors encountered (think inline error reporting in a GUI).
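A tiny hand-coded sketch of that error-token approach: when digits run straight into letters, the scanner emits a single error token carrying the offending text and its position, then resumes at the next character. The token names and structure are invented for the example:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

enum tok_kind { TOK_INT, TOK_IDENT, TOK_LPAREN, TOK_ERROR, TOK_EOF };

struct token {
    enum tok_kind kind;
    char text[32];
    int pos;                 /* offset of the token in the input */
};

/* Scan one token from s starting at *i.  "123abc" becomes a single
 * TOK_ERROR token instead of TOK_INT followed by TOK_IDENT. */
static struct token next_token(const char *s, int *i)
{
    struct token t = { TOK_EOF, "", *i };
    int start = *i, len;

    if (s[*i] == '\0')
        return t;

    if (isdigit((unsigned char)s[*i])) {
        while (isdigit((unsigned char)s[*i])) (*i)++;
        if (isalpha((unsigned char)s[*i])) {          /* digits ran into letters */
            while (isalnum((unsigned char)s[*i])) (*i)++;
            t.kind = TOK_ERROR;
        } else {
            t.kind = TOK_INT;
        }
    } else if (isalpha((unsigned char)s[*i])) {
        while (isalnum((unsigned char)s[*i])) (*i)++;
        t.kind = TOK_IDENT;
    } else {                                          /* single-character token */
        (*i)++;
        t.kind = (s[start] == '(') ? TOK_LPAREN : TOK_ERROR;
    }

    len = *i - start;
    if (len >= (int)sizeof t.text) len = sizeof t.text - 1;
    memcpy(t.text, s + start, len);
    t.text[len] = '\0';
    return t;
}

int main(void)
{
    const char *src = "123abc(";
    int i = 0;
    struct token t;

    while ((t = next_token(src, &i)).kind != TOK_EOF)
        printf("%s token: \"%s\" at %d\n",
               t.kind == TOK_ERROR ? "error" : "good", t.text, t.pos);
    return 0;
}

For "123abc(" this reports one error token ("123abc" at offset 0) followed by a good "(" token, which gives the parser or the error reporter precise location information.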
You might consider generating your tokenizer or lexer. Tools like Flex or ANTLR should help. And you might also generate your parser with ANTLR or Bison.
If you insist on hand-coding your lexer (and your parser), having some look-ahead is extremely helpful in practice. For instance, you could read your input line by line and tokenize inside the current line (with the ability to inspect the next few characters).
I feel like this is a pretty common problem but I wasn't really sure what to search for.
I have a large file (so I don't want to load it all into memory) that I need to parse control strings out of and then stream that data to another computer. I'm currently reading in the file in 1000 byte chunks.
So for example if I have a string that contains ASCII codes escaped with ('$' some number of digits ';') and the data looked like this... "quick $33;brown $126;fox $a $12a". The string going to the other computer would be "quick brown! ~fox $a $12a".
In my current approach I have the following problems:
What happens when the control strings falls on a buffer boundary?
If the string is '$' followed by anything but digits and a ';' I want to ignore it. So I need to read ahead until the full control string is found.
I'm writing this in straight C so I don't have streams to help me.
Would an alternating double-buffer approach work, and if so, how does one manage the current locations, etc.?
If I've followed what you are asking about, it is called lexical analysis or tokenization, and is handled with regular expressions. For regular languages you can construct a finite state machine which will recognize your input. In practice you can use a tool that understands regular expressions to recognize and perform different actions for the input.
Depending on different requirements you might go about this differently. For more complicated languages you might want to use a tool like lex to help you generate an input processor, but for this, as I understand it, you can use a much more simple approach, after we fix your buffer problem.
You should use a circular buffer for your input, so that indexing off the end wraps around to the front again. Whenever half of the data that the buffer can hold has been processed you should do another read to refill that. Your buffer size should be at least twice as large as the largest "word" you need to recognize. The indexing into this buffer will use the modulus (remainder) operator % to perform the wrapping (if you choose a buffer size that is a power of 2, such as 4096, then you can use bitwise & instead).
Now you just look at the characters until you read a $, output what you've looked at up until that point, and then, knowing that you are in a different state because you saw a $, you look at more characters until you see another character that ends the current state (the ;) and perform some other action on the data that you had read in. How to handle the case where the $ is seen without a well-formed number followed by a ; wasn't entirely clear in your question -- what to do if there are a million digits before you see the ;, for instance.
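A sketch of that circular-buffer refilling, assuming a power-of-two buffer size so the wrap-around can use a bitwise AND, and stdio fread() for the actual reading (the sizes and names are illustrative):

#include <stdio.h>

#define BUF_SIZE 4096              /* power of two, so & can replace % */
#define BUF_MASK (BUF_SIZE - 1)

static char ring[BUF_SIZE];
static size_t head = 0;            /* total bytes read from the file      */
static size_t tail = 0;            /* total bytes consumed by the scanner */

/* Refill the ring whenever the scanner has consumed enough that less
 * than half a buffer of unread data remains. */
static void refill(FILE *in)
{
    while (!feof(in) && !ferror(in) && head - tail < BUF_SIZE / 2) {
        size_t start = head & BUF_MASK;
        size_t room  = BUF_SIZE - start;          /* stay inside the array */
        if (room > BUF_SIZE / 2)
            room = BUF_SIZE / 2;
        head += fread(ring + start, 1, room, in);
    }
}

/* Returns the next character, or EOF when the file is exhausted. */
static int next_char(FILE *in)
{
    int c;
    if (head == tail)
        refill(in);
    if (head == tail)
        return EOF;
    c = (unsigned char)ring[tail & BUF_MASK];
    tail++;
    return c;
}

int main(void)
{
    int c, dollars = 0;
    while ((c = next_char(stdin)) != EOF)
        if (c == '$')
            dollars++;             /* state-machine transitions would go here */
    printf("%d control-string starts seen\n", dollars);
    return 0;
}

Because the indices only ever grow and are masked on use, a control string that straddles a refill boundary is handled exactly like any other input.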
The regular expressions would be:
[^$]
Any non-dollar-sign character. This could be augmented with a closure ([^$]* or [^$]+) to recognize a string of non-$ characters at a time, but that could get very long.
$[0-9]{1,3};
This would recognize a dollar sign followed by 1 to 3 digits followed by a semicolon.
[$]
This would recognize just a dollar sign. It is in the brackets because $ is special in many regular expression representations when it is at the end of a symbol (which it is in this case) and means "match only if at the end of line".
Anyway, in this case it would recognize a dollar sign in the case where it is not recognized by the other, longer, pattern that recognizes dollar signs.
In lex you might have
[^$]{1,1024}   { write_string(yytext); /* plain text: copy it through */ }
$[0-9]{1,3};   { write_char(atoi(yytext + 1)); /* skip the '$' before converting */ }
[$]            { write_char(*yytext); /* a lone '$': pass it along unchanged */ }
and it would generate a .c file that will function as a filter similar to what you are asking for. You will need to read up a little more on how to use lex though.
The "f" family of functions in <stdio.h> can take care of the streaming for you. Specifically, you're looking for fopen(), fgets(), fread(), etc.
Nategoose's answer about using lex (and I'll add yacc, depending on the complexity of your input) is also worth considering. They generate lexers and parsers that work, and after you've used them you'll never write one by hand again.