Bison - additional parameter to a push and pure parser - c

How can I pass one additional parameter (not the minor token value of type YYSTYPE) to the yypush_parse() function?
The parser is indeed reentrant, but this one additional variable is crucial for the thread-safety of the application I need to integrate my parser into (it's a PHP extension, so we're talking about TSRM).
I cannot just get rid of that parameter because inside the action code I'm going to call functions which will generate an AST in a userland-accessible form.
I've tried to hack around YYPUSH_DECLS and it works as far as declaring the function is concerned, BUT a few thousand LOCs down comes the implementation of yypush_parse, and I can't see any way to override the function signature where the implementation of yypush_parse starts.
YYPARSE_PARAM is only used when the parser is not a push one (as far as I can tell), but in my case I NEED it to be push because of the things I have to do in the processing loop, after lexing and prior to adding a new token to the parsing stack.
So I am wondering if there's a %directive or something that may help.
On the other hand, I really think YYPARSE_PARAM should be honored whenever it's defined, no matter what type of parser it is. It's a pity it isn't.

%parse-param. YYPARSE_PARAM is deprecated and shouldn't be used; %parse-param applies to push parsers too, so its arguments are added to yypush_parse's signature.
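
For a concrete picture, here is a minimal sketch of the push/pure setup with %parse-param threading a context pointer through; the exact %define spellings vary between Bison versions, and my_ctx / my_lex are made-up stand-ins for the TSRM context and the real lexer:

%define api.pure full
%define api.push-pull push
%parse-param { my_ctx *ctx }    /* ctx is appended to yypush_parse's parameters */
/* my_ctx itself would be declared in a %code requires { ... } block */

The processing loop on the caller's side then looks roughly like:

yypstate *ps = yypstate_new ();
int status;
do {
    YYSTYPE lval;
    int tok = my_lex (&lval, ctx);                 /* fetch the next token */
    status = yypush_parse (ps, tok, &lval, ctx);   /* the %parse-param argument */
} while (status == YYPUSH_MORE);
yypstate_delete (ps);

Inside the grammar actions you can use ctx directly to call your AST-building functions, with no globals involved.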

Related

Can I detect the existence of a local variable inside a macro?

Is there a way to safely check to see whether some named variable (let's call it "foo") is present in the current scope? I'd like to have a macro that, say, makes use of "foo" if it's present, otherwise does something else. Are there any runtime tricks I can make use of here?
(The actual context is trying to solve this problem, but I realized that one could be a special case of this one, so a separate question seemed also interesting.)
Unfortunately, no. The compiler is responsible for parsing variable names and assigning scopes to them, and the preprocessor runs before the compiler. So it has no access to that information.

Parsing C files without preprocessing them

I want to run simple analysis on C files (such as: if you call the foo macro with INT_TYPE as an argument, then cast the result to int*). I do not want to preprocess the file, I just want to parse it (so that, for instance, I'll have correct line numbers).
I.e., I want to get from
#include <a.h>
#define FOO(f)
int f() {FOO(1);}
a list of tokens like
<include_directive value="a.h"/>
<macro name="FOO"><param name="f"/><result/></macro>
<function name="f">
<return>int</return>
<body>
<macro_call name="FOO"><param>1</param></macro_call>
</body>
</function>
with no need to set include path, etc.
Is there any preexisting parser that does it? All parsers I know assume C is preprocessed. I want to have access to the macros and actual include instructions.
Our C Front End can parse code containing preprocessor elements to a fair extent and still build a usable AST. (Yes, the parse tree has precise file/line/column number information.)
There are a number of restrictions, within which it handles most code. In the few cases it cannot handle, a small, easy change to the source file that gives equivalent code often solves the problem.
Here's a rough set of rules and restrictions:
#includes and #defines can occur wherever a declaration or statement can occur, but not in the middle of a statement. These rarely cause a problem.
macro calls can occur where function calls occur in expressions, or can appear without semicolon in place of statements. Macro calls that span non-well-formed chunks are not handled well (anybody surprised?). The latter occur occasionally but not rarely and need manual revision. OP's example of "j(v,oid)*" is problematic, but this is really rare in code.
#if ... #endif must be wrapped around major language concepts (nonterminals) (constant, expression, statement, declaration, function) or sequences of such entities, or around certain non-well-formed but commonly occurring idioms, such as if (exp) {. Each arm of the conditional must contain the same kind of syntactic construct as the other arms. #if wrapped around random text used as a bad kind of comment is problematic, but easily fixed in the source by making it a real comment. Where these conditions are not met, you need to modify the original source code, often by moving the #if / #elif / #else / #endif a few tokens.
In our experience, one can revise a code base of 50,000 lines in a few hours to get around these issues. While that seems annoying (and it is), the alternative is to not be able to parse the source code at all, which is far worse than annoying.
You also want more than just a parser. See Life After Parsing, to know what happens after you succeed in getting a parse tree. We've done some additional work in building symbol tables in which the declarations are recorded with the preprocessor context in which they are embedded, enabling type checking to include the preprocessor conditions.
You can have a look at this ANTLR grammar. You will have to add rules for preprocessor tokens, though.
Your specific example can be handled by writing your own parser and ignoring macro expansion, because FOO(1) itself can be interpreted as a function call.
When more cases are considered, however, the parser becomes much more difficult. You can refer to the PDF Link for more information.

emacs regexp replace C function call

I'm trying to regexp match a C function, e.g.
func(blah blah);
The match can include newlines.
I've tried:
func([.+]);
which didn't do newlines, and:
func([...]);
func([^...]);
neither of which seemed to do anything. I guess I'm looking for the part of a regexp that will match any number/type of characters between my opening func( and );.
You could try func[[:space:]]*([^)]*). Nested parens in calls will confuse it though.
I think that the general case is not feasible with regular expressions, because nested function calls are not a regular language.
While Maxim's answer is specific, I'm going to guess you are looking to do something with the matched function you found. To do serious code processing, you can't beat the semantic parser that is part of CEDET's suite of tools (http://cedet.sf.net), which is also part of Emacs.
If you use the semantic parser in emacs, you can:
M-x semantic-mode RET
and then in code:
(semantic-fetch-tags)
or
(semantic-current-tag)
to get the current tag. Once you have the tag, you can call:
(semantic-tag-function-arguments mytag)
to get the arguments, which are tags. For one of those, use semantic-tag-name to get the name, or semantic-tag-type to get the data type.
Once you've got your tag data, you can always write out new code with SRecode, a code generator that takes in tags and spits out code, such as function declarations.

Bison passing back resulting AST

In lemon I was able to use the third parameter of the parsing function to pass back the result to the caller when the starting symbol was reduced.
How would I do the same in bison? Is it enough to assign that value to $$ within the starting symbol's action code, and then, from the caller, read it as the "yy minor" value after the final call to yypush_parse()?
The parser is push and pure. Thread-safety is a must.
You'll pretty much have to do it yourself with bison/yacc if you want an AST, by creating your own nodes and assigning them to $$.
The example at http://epaperpress.com/lexandyacc/ (look at the .y file in Calculator->Yacc input) or http://www.progtools.org/compilers/tutorials/cxx_and_bison/cxx_and_bison.html might give you ideas on how to do that.
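A hedged sketch of that pattern, combined with %parse-param so the root lands in a caller-supplied variable instead of being fished out of the parser afterwards (ast_node and translation_unit are made-up names, and the value types would be declared with %union/%type as usual):

%parse-param { ast_node **result }    /* out-parameter for the finished tree */

%%
start
    : translation_unit    { *result = $1; }   /* runs when the start symbol is reduced */
    ;

Once yypush_parse finally returns 0 (accept), *result points at the root of the AST, so there is no need to read the "yy minor" value out of the parser's internals.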

Why are nested functions not supported by the C standard?

It doesn't seem like it would be too hard to implement in assembly.
gcc also has a flag (-fnested-functions) to enable their use.
It turns out they're not actually all that easy to implement properly.
Should an internal function have access to the containing scope's variables?
If not, there's no point in nesting it; just make it static (to limit visibility to the translation unit it's in) and add a comment saying "This is a helper function used only by myfunc()".
If you want access to the containing scope's variables, though, you're basically forcing it to generate closures (the alternative is restricting what you can do with nested functions enough to make them useless).
I think GCC actually handles this by generating (at runtime) a unique thunk for every invocation of the containing function, that sets up a context pointer and then calls the nested function. This ends up being a rather icky hack, and something that some perfectly reasonable implementations can't do (for example, on a system that forbids execution of writable memory -- which a lot of modern OSs do for security reasons).
The only reasonable way to make it work in general is to force all function pointers to carry around a hidden context argument, and all functions to accept it (because in the general case you don't know when you call it whether it's a closure or an unclosed function). This is inappropriate to require in C for both technical and cultural reasons, so we're stuck with the option of either using explicit context pointers to fake a closure instead of nesting functions, or using a higher-level language that has the infrastructure needed to do it properly.
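To make the "explicit context pointer" alternative concrete, here is the usual C idiom: the state a nested function would have captured lives in a struct that travels next to the function pointer. The names are only illustrative:

#include <stdio.h>

struct counter_ctx { int x; };              /* the state a closure would have captured */

static int bar(struct counter_ctx *ctx)     /* the "nested" helper, hoisted to file scope */
{
    ctx->x = ctx->x + 1;
    return ctx->x;
}

int main(void)
{
    struct counter_ctx ctx = { 5 };
    printf("%d\n", bar(&ctx));    /* prints 6 */
    printf("%d\n", bar(&ctx));    /* prints 7 */
    return 0;
}

The price is that every caller has to pass ctx around explicitly, which is exactly the bookkeeping the question would like the language to hide.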
I'd like to quote something from the BDFL (Guido van Rossum):
This is because nested function definitions don't have access to the
local variables of the surrounding block -- only to the globals of the
containing module. This is done so that lookup of globals doesn't
have to walk a chain of dictionaries -- as in C, there are just two
nested scopes: locals and globals (and beyond this, built-ins).
Therefore, nested functions have only a limited use. This was a
deliberate decision, based upon experience with languages allowing
arbitrary nesting such as Pascal and both Algols -- code with too
many nested scopes is about as readable as code with too many GOTOs.
Emphasis is mine.
I believe he was referring to nested scope in Python (and as David points out in the comments, this was from 1993, and Python does support fully nested functions now) -- but I think the statement still applies.
The other part of it could have been closures.
If you have a function like this C-like code:
int (*foo(void))(void) {
    int x = 5;
    int bar(void) {        /* nested function -- a GNU C extension, not standard C */
        x = x + 1;
        return x;
    }
    return &bar;           /* the pointer escapes foo, and x with it */
}
If you use bar in a callback of some sort, what happens with x? This is well-defined in many newer, higher-level languages, but AFAIK there's no well-defined way to track that x in C -- does bar return 6 every time, or do successive calls to bar return incrementing values? That could have potentially added a whole new layer of complication to C's relatively simple definition.
See C FAQ 20.24 and the GCC manual for potential problems:
If you try to call the nested function through its address after the containing function has exited, all hell will break loose. If you try to call it after a containing scope level has exited, and if it refers to some of the variables that are no longer in scope, you may be lucky, but it's not wise to take the risk. If, however, the nested function does not refer to anything that has gone out of scope, you should be safe.
This is not really more severe than some other problematic parts of the C standard, so I'd say the reasons are mostly historical (C99 isn't really that different from K&R C feature-wise).
There are some cases where nested functions with lexical scope might be useful (consider a recursive inner function that can reach the variables of the outer scope without copying them into every stack frame and without resorting to a static variable), but hopefully you can trust the compiler to correctly inline such functions, i.e. a solution with a separate function will just be more verbose.
Nested functions are a very delicate thing. Will you make them closures? If not, then they have no advantage over regular functions, since they can't access any local variables. If they do, then what do you do with stack-allocated variables? You have to put them somewhere else so that if you call the nested function later, the variable is still there. This means they'll take memory, so you have to allocate room for them on the heap. With no GC, this means that the programmer is now in charge of cleaning up the functions. Etc... C# does this, but it has a GC, and it's a considerably newer language than C.
It also wouldn't be too hard to add member functions to structs, but they are not in the standard either.
Features are not added to the C standard based solely on whether or not they are easy to implement. It's a combination of many other factors, including the point in time at which the standard was written and what was common/practical then.
One more reason: it is not at all clear that nested functions are valuable. Twenty-odd years ago I used to do large scale programming and maintenance in (VAX) Pascal. We had lots of old code that made heavy use of nested functions. At first, I thought this was way cool (compared to K&R C, which I had been working in before) and started doing it myself. After a while, I decided it was a disaster, and stopped.
The problem was that a function could have a great many variables in scope, counting the variables of all the functions in which it was nested. (Some old code had ten levels of nesting; five was quite common, and until I changed my mind I coded a few of the latter myself.) Variables in the nesting stack could have the same names, so that "inner" function local variables could mask variables of the same name in more "outer" functions. A local variable of a function, that in C-like languages is totally private to it, could be modified by a call to a nested function. The set of possible combinations of this jazz was near infinite, and a nightmare to comprehend when reading code.
So, I started calling this programming construct "semi-global variables" instead of "nested functions", and telling other people working on the code that the only thing worse than a global variable was a semi-global variable, and please do not create any more. I would have banned it from the language, if I could. Sadly, there was no such option for the compiler...
ANSI C has been established for 20 years. Perhaps between 1983 and 1989 the committee discussed it in the light of the state of compiler technology at the time, but if they did, their reasoning is lost in the dim and distant past.
I disagree with Dave Vandervies.
Defining a nested function is much better coding style than defining it in global scope, making it static and adding a comment saying "This is a helper function used only by myfunc()".
What if you needed a helper function for this helper function? Would you add a comment "This is a helper function for the first helper function used only by myfunc"? Where would you get the names for all those functions without completely polluting the namespace?
How confusing can code get?
But of course, there is the problem with how to deal with closuring, i.e. returning a pointer to a function that has access to variables defined in the function from which it is returned.
Either you don't allow references to local variables of the containing function in the contained one, and the nesting is just a scoping feature without much use, or you do. If you do, it is not such a simple feature: you have to be able to call a nested function from another one while accessing the correct data, and you also have to take into account recursive calls. That's not impossible -- techniques are well known for that and were well mastered when C was designed (Algol 60 already had the feature). But it complicates the run-time organization and the compiler, and prevents a simple mapping to assembly language (a function pointer must carry information about that; well, there are alternatives such as the one gcc uses). It was out of scope for the system implementation language C was designed to be.
