I've written a lexer in C; it currently lexes ASCII files successfully, but I'm confused about how I would lex Unicode. Which Unicode encodings would I need to support, for instance UTF-8, UTF-16, etc.? What do languages like Rust or Go support?
If so, are there any libraries that can help me out? I would prefer to try to do it myself so I can learn; even then, a small library that I could read and learn from would be great.
There are already versions of lex (and other lexer tools) that support Unicode; they are tabulated on the Wikipedia page List of lexer generators. There is also a list of lexer tools on the Wikipedia parser page. In summary, the following tools handle Unicode:
JavaCC - JavaCC generates lexical analyzers written in Java.
JFLex - A lexical analyzer generator for Java.
Quex - A fast universal lexical analyzer generator for C and C++.
FsLex - A lexer generator for byte and Unicode character input for F#
And, of course, there are the techniques used by W3.org and cited by jim mcnamara at http://www.w3.org/2005/03/23-lex-U.
You say you have written your own lexer in C, but you have used the tag lex, which is the name of a specific lexer-generator tool; perhaps that was an oversight?
In the comments you say you have not used regular expressions, but also want to learn. Learning something about the theory of language recognition is key to writing an efficient and working lexer. The symbols being recognised are classified as a Chomsky Type 3 language, or regular language, which can be described by regular expressions. Regular expressions can be implemented by code that implements a finite state automaton (or finite state machine). The standard implementation of a finite state machine is a loop containing a switch. Most experienced coders should know, and be able to recognise and exploit, this form:
int c;                                    /* current input symbol          */
enum { STATE_START, STATE_IN_TOKEN } state = STATE_START;

while ( (c = getchar()) != EOF ) {        /* i.e. while ( not <<EOF>> )    */
    switch ( state ) {                    /* dispatch on the current state */
    case STATE_START :
        /* ... examine c, emit a token or move to another state ... */
        break;
    case STATE_IN_TOKEN :
        /* ... */
        break;
    default:
        /* ... error recovery ... */
        break;
    }
}
If you code in this style, the same code can work whether the symbols being handled are 8 bit or 16 bit, because the algorithmic pattern remains the same.
Ad-Hoc coding of a lexical analyser without an understanding of the underlying theory and practice will eventually have its limits. I think you will find it beneficial to read a little more into this area.
Related
For example, if the input is x+=5, the program should return an array of x, +=, 5. Notice that there is no space between x and +=, so splitting by spaces only probably won't work, because then you would have to iterate through it all over again to find the keywords.
How would I do something like this?
Is there an efficient way to do this in C?
Lexing is not specific to C (in the sense that you'll use similar techniques in other programming languages). You could do it with hand-written code (using finite automaton coding techniques). You could use a lexer generator like flex. You might even use regexps, e.g. the regex.h functions on POSIX systems.
Parsing is also a well known domain with standard techniques (at least for context free languages, if you want some efficiency). You could use recursive descent parsing, you could generate a parser using bison (which has examples very close to your homework) or ANTLR. Read more about LL parsing & LR parsing. BTW, parsing techniques can be used for lexing.
BTW, there is a lot of free software illustrating these techniques: interpreters of scripting languages (Guile, Lua, Python, ...), JSON, YAML, and XML parsers, several compilers (e.g. tinycc), and so on. You'll learn a lot by studying their source code.
It could sometimes be easier for you to have a lookahead of one or two characters, e.g. by first reading the entire line (with getline(3) or else fgets(3), and perhaps even readline, which gives you a line editor). If you cannot read a whole line, consider using fgetc(3) and ungetc when needed. The classifying utilities from <ctype.h> like isalpha might be helpful.
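For example, here is a minimal sketch of a hand-written tokenizer using fgetc, ungetc, and the <ctype.h> classifiers, in the spirit described above; the Token type and the next_token name are purely illustrative, and error handling (overlong tokens, I/O errors) is omitted:

#include <stdio.h>
#include <ctype.h>

typedef enum { TOK_IDENT, TOK_NUMBER, TOK_OP, TOK_EOF } TokenKind;
typedef struct { TokenKind kind; char text[64]; } Token;

Token next_token(FILE *in)
{
    Token t = { TOK_EOF, "" };
    int c = fgetc(in);

    while (c != EOF && isspace(c))           /* skip whitespace */
        c = fgetc(in);
    if (c == EOF)
        return t;

    size_t n = 0;
    if (isalpha(c) || c == '_') {            /* identifier: x, foo, ... */
        t.kind = TOK_IDENT;
        do { t.text[n++] = (char)c; c = fgetc(in); }
        while ((isalnum(c) || c == '_') && n + 1 < sizeof t.text);
        ungetc(c, in);                       /* push back the lookahead */
    } else if (isdigit(c)) {                 /* integer literal: 5, 42, ... */
        t.kind = TOK_NUMBER;
        do { t.text[n++] = (char)c; c = fgetc(in); }
        while (isdigit(c) && n + 1 < sizeof t.text);
        ungetc(c, in);
    } else {                                 /* operator: +, +=, =, ... */
        t.kind = TOK_OP;
        t.text[n++] = (char)c;
        int c2 = fgetc(in);
        if (c2 == '=' && (c == '+' || c == '-' || c == '*' || c == '/' || c == '='))
            t.text[n++] = (char)c2;          /* two-character operator */
        else
            ungetc(c2, in);
    }
    t.text[n] = '\0';
    return t;
}

Calling next_token in a loop on the input x+=5 would yield an identifier "x", an operator "+=", and a number "5", with no spaces required between the tokens.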
If you care about UTF-8 (and in principle you should) things become slightly more complex since some Unicode characters (like €, é, 𝛃, ...) are represented in UTF-8 by several bytes. A library like libunistring should be very helpful.
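To illustrate why those multi-byte sequences matter, here is a rough sketch of decoding a single UTF-8 code point. It assumes well-formed input; a production lexer should also validate continuation bytes and reject overlong or truncated sequences, or simply use libunistring:

#include <stdint.h>
#include <stddef.h>

size_t utf8_decode(const unsigned char *s, uint32_t *cp)
{
    if (s[0] < 0x80) {                       /* 1 byte: U+0000..U+007F */
        *cp = s[0];
        return 1;
    } else if ((s[0] & 0xE0) == 0xC0) {      /* 2 bytes: e.g. 'é'      */
        *cp = ((uint32_t)(s[0] & 0x1F) << 6) | (s[1] & 0x3F);
        return 2;
    } else if ((s[0] & 0xF0) == 0xE0) {      /* 3 bytes: e.g. '€'      */
        *cp = ((uint32_t)(s[0] & 0x0F) << 12)
            | ((uint32_t)(s[1] & 0x3F) << 6) | (s[2] & 0x3F);
        return 3;
    } else {                                 /* 4 bytes: e.g. '𝛃'      */
        *cp = ((uint32_t)(s[0] & 0x07) << 18)
            | ((uint32_t)(s[1] & 0x3F) << 12)
            | ((uint32_t)(s[2] & 0x3F) << 6) | (s[3] & 0x3F);
        return 4;
    }
}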
I am doing static analysis on C programs. I searched the ANTLR website, and there seems to be no appropriate grammar file that produces an AST for C programs. Does that mean I have to do it myself from the very start, or is there a quicker method? I also need a tree parser that can traverse the AST created by the parser.
You indicated you want to do static analysis to detect buffer overflow.
First, writing a grammar for C is harder than it looks. There's all that stuff in the standard, and then there's what the real compilers actually accept. And you have to decide what to do about the preprocessor (and it varies from compiler to compiler!). If you don't get the grammar and preprocessing exactly right, you won't be able to parse real programs. (If you want to do toy languages, that's fine, but then you don't need a C grammar).
To do the analysis, you'll need far more machinery than an AST. You'll need symbol tables, control and data flow analysis, likely local and global points-to analysis, call graph extraction, and some type of range analysis.
People just don't seem to understand this.
** GETTING A PARSER IS A LONG WAY FROM DOING ANYTHING USEFUL WITH REAL LANGUAGES **
I'm shouting because I see this over, and over, and over.
If you want to get on with a specific program analysis or transformation task, unless you want to die of old age before you start your task, you had better find a foundation that has most of what you need already. A foundation on a parser generator with a creaky grammar is not a foundation. (Don't get me wrong: ANTLR, YACC, JavaCC are all fine parser generators, and they're great for building a parser for a new language. They're great for implementing production parsers for real languages when the investment gets made. But they produce parsers, and mostly people don't do the production part. And they don't provide the additional machinery by a long shot.)
Our DMS Software Reengineering Toolkit contains all the above machinery because it is almost always needed, and it is a royal headache to implement. (My team has 15 years invested so far.)
We've also instantiated that machinery in forms specifically useful for COBOL, Java, C, and C++ (the last to a somewhat lesser extent; the language is really hard), in a variety of dialects, so that others don't have to repeat this long process.
GCC and Clang are pretty mature for C and C++ as alternatives.
The hardest part is writing the grammar. Mixing in rewrite rules to create an AST isn't that hard, and creating a tree grammar from a parser grammar that emits an AST isn't that hard either (compared to writing the parser grammar, that is).
Here's a previous Q&A that shows how to create a proper AST: How to output the AST built using ANTLR?
And I couldn't find a decent SO-Q&A that explains how to go about creating a tree grammar, so here's a link to my personal blog that explains this: http://bkiers.blogspot.com/2011/03/6-creating-tree-grammar.html
Good luck.
Is there any C grammar available which generates the AST, which includes all the parser rules using "^" and "!" notations?
I went through the book written by Terence Parr to write such a grammar, but it seems that writing one for the C language is a time-consuming process, so I was wondering if one is already available, which could save me a lot of time!
(A grammar for a smaller subset of C language is also fine..)
Thanks :)
See this. It's straight from the ANTLR 4 source repo: a C11 grammar. It looks pretty compliant.
Of course, it doesn't come with a preprocessor, but handing cpp or mcpp the file first is easy enough.
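As a rough sketch of that approach, you could run the system preprocessor first and feed its output to your lexer or parser. The cpp command name, the -P flag (which suppresses line markers in GNU cpp), and the buffer sizes below are assumptions about your environment, not part of the grammar itself:

#include <stdio.h>

int preprocess_and_lex(const char *filename)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd, "cpp -P %s", filename);
    FILE *in = popen(cmd, "r");              /* POSIX: run cpp, read its output */
    if (!in)
        return -1;

    char line[1024];
    while (fgets(line, sizeof line, in)) {
        /* ... hand each preprocessed line to the lexer/parser ... */
    }
    return pclose(in);
}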
It also doesn't come with AST rules, but it doesn't look too hard to do (albeit time consuming).
No answers after two weeks.
You are right: building a full parser that builds complete ASTs and handles all the details of C (including the preprocessor), covering a variety of dialects of C (e.g., ANSI, GNU C 2/3/4, Microsoft Visual C, Green Hills C), is actually a lot of work. And unless you invest this work, it won't process any real C programs.
I would expect there to be a full ANTLR grammar for C that did this considering how old ANTLR is. It is surprising that nobody here can seem to identify one; certainly you'd expect to find it at the ANTLR site.
We've put the energy required into building such C parsers (covering all the above dialects), and added computing symbol tables, extracting control and data flows, building call graphs, enabling analyzers, and tree transformations in the DMS Software Reengineering Toolkit with its C front end. This front end has been applied to C applications comprised of 18,000 compilation units to build custom analysis tools.
I need to parse algebraic expressions for an application I'm working on and am hoping to garner a bit of collective wisdom before taking a crack at it and, possibly, heading down the wrong road.
What I need to do is pretty straightforward: given a textual algebraic expression (3*x - 4(y - sin(pi))), create an object representation of the equation. The custom objects already exist, so I need a parser that creates a tree I can walk to instantiate the objects I need.
The basic requirements would be:
Ability to express the algebra as a grammar so I have control and can customize/extend it as necessary.
The initial syntax will include integers, real numbers, constants, variables, arithmetic operators (+, -, *, /), powers (^), equations (=), parentheses, precedence, and simple functions (sin(pi)). I'm hoping to extend my app fairly quickly to support functions proper (f(x) = 3x + 2).
Must compile in C as it needs to be integrated into my code.
I DON'T need to evaluate the expression mathematically, so software that solves for a variable or performs the arithmetic is noise.
I've done my Google homework and it looks like the best approach is to use a BNF grammar and software to generate a compiler in C. So my questions:
Does a BNF grammar with corresponding parser generator for algebraic expressions (or better yet, LaTex) already exist? Someone has to have done this already. I REALLY want to avoid rolling my own, mainly because I don't want to test it. I'd be willing to pay a reasonable amount for a library (under $50)
If not, which parser generator for C do you think is the easiest to learn/use here? Lex? YACC? Flex, Bison, Python/SymPy, Others? I'm not familiar with any of these.
The standard Linux tools flex and bison would probably be most appropriate here. IIRC the sample parsers and lexers used in these tools do something close to what you want, so you might be able to just modify that code to get what you need.
These tools seem like they meet your specifications. You can customize the grammars, compile down to C, and use any operator you want.
I've had very good luck with ANTLR. It has runtimes for many different languages, including C, and has a very nice syntax for specifying grammars and building trees. I recently wrote a similar grammar (algebraic expressions) in 131 lines, which is definitely manageable.
I used the code (found on the net) from the following:
"Program Translation Fundamentals" by Peter Calingaert
I enhanced it to handle functions, which lets you implement things like "if(a, b, c)" (kind of like how Excel does things).
You can build a simple parser yourself or use any of the popular "compiler-compilers" (some of them were listed in other posts). Just decide whether your parser will be complicated enough to justify using (and learning) an external tool. In any case you'll need to define the grammar; that is usually the most brain-intensive task if you don't have prior experience. The formal way to define syntactic grammars is BNF or EBNF.
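If you decide to hand-roll it, a recursive-descent parser maps almost one-to-one onto such a grammar. Below is a minimal, illustrative sketch for a small subset (numbers, names, +, -, *, /, and parentheses); the Node type and function names are invented for this example, and error handling, powers, and implicit multiplication are left out:

#include <stdlib.h>
#include <ctype.h>

typedef struct Node {
    char op;                  /* '+', '-', '*', '/', 'n' = number, 'v' = name */
    double value;             /* used when op == 'n' */
    char name[32];            /* used when op == 'v' */
    struct Node *left, *right;
} Node;

static const char *p;         /* cursor into the input string */

static Node *new_node(char op) {
    Node *n = calloc(1, sizeof *n);
    n->op = op;
    return n;
}

static void skip_ws(void) { while (isspace((unsigned char)*p)) p++; }

static Node *parse_expr(void);                 /* forward declaration */

static Node *parse_primary(void)               /* number, name, or ( expr ) */
{
    skip_ws();
    if (*p == '(') {
        p++;
        Node *n = parse_expr();
        skip_ws();
        if (*p == ')') p++;
        return n;
    }
    if (isdigit((unsigned char)*p)) {
        Node *n = new_node('n');
        char *end;
        n->value = strtod(p, &end);
        p = end;
        return n;
    }
    Node *n = new_node('v');                   /* variable or constant name */
    size_t i = 0;
    while (isalpha((unsigned char)*p) && i + 1 < sizeof n->name)
        n->name[i++] = *p++;
    n->name[i] = '\0';
    return n;
}

static Node *parse_term(void)                  /* '*' and '/' bind tighter */
{
    Node *left = parse_primary();
    skip_ws();
    while (*p == '*' || *p == '/') {
        Node *n = new_node(*p++);
        n->left = left;
        n->right = parse_primary();
        left = n;
        skip_ws();
    }
    return left;
}

static Node *parse_expr(void)                  /* '+' and '-', left to right */
{
    Node *left = parse_term();
    skip_ws();
    while (*p == '+' || *p == '-') {
        Node *n = new_node(*p++);
        n->left = left;
        n->right = parse_term();
        left = n;
        skip_ws();
    }
    return left;
}

Pointing p at "3*x - 4" and calling parse_expr() yields a tree whose root is '-', with a '*' node over 3 and x on the left and the number 4 on the right; walking that tree is where you would instantiate your own objects.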
I'm building my own language using Flex, but I want to know some things:
Why should I use lexical analyzers?
Are they going to help me in something?
Are they obligatory?
Lexical analysis helps simplify parsing because the lexemes can be treated as abstract entities rather than concrete character sequences.
You'll need more than flex to build your language, though: Lexical analysis is just the first step.
Any time you are converting an input string into space-separated strings and/or numeric values, you are performing lexical analysis. Writing a cascading series of else if (strcmp (..)==0) ... statements counts as lexical analysis. Even such nasty tools as sscanf and strtok are lexical analysis tools.
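For instance, this is the kind of ad-hoc lexical analysis just described: splitting on whitespace with strtok and classifying with a strcmp cascade (the keyword list here is only illustrative):

#include <stdio.h>
#include <string.h>

void classify_words(char *line)          /* note: strtok modifies its input */
{
    for (char *w = strtok(line, " \t\n"); w != NULL; w = strtok(NULL, " \t\n")) {
        if (strcmp(w, "if") == 0)
            printf("keyword:    %s\n", w);
        else if (strcmp(w, "while") == 0)
            printf("keyword:    %s\n", w);
        else
            printf("other word: %s\n", w);
    }
}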
You'd want to use a tool like flex instead of one of the above for one of several reasons:
The error handling can be made much better.
You can be much more flexible in what different things you recognize with flex. For instance, it is tough to parse a C-format hexadecimal value properly with scanf routines. scanf pretty much has to know the hex value is coming. Lex can figure it out for you.
Lex scanners are faster. If you are parsing a lot of files, and/or large ones, this could become important.
You would consider using a lexical analyzer because you could use BNF (or EBNF) to describe your language (the grammar) declaratively, then use a parser to parse a program written in your language, get it into a structure in memory, and manipulate it freely.
It's not obligatory and you can of course write your own, but that depends on how complex the language is and how much time you have to reinvent the wheel.
Also, the fact that you can use a notation (BNF) to describe your language without changing the lexical analyzer itself enables you to experiment a lot and change the grammar of your language until you have exactly what works for you.