This question: Is there a way to tell whether code is now being compiled as part of a PCH? led me to thinking about this.
Is there a way, in perhaps only certain compilers, of getting a C/C++ compiler to dump out the defines that it's currently using?
Edit: I know this is technically a preprocessor issue, but for this question let's include the preprocessor within the term "compiler".
Yes. In GCC
g++ -E -dM <file>
I would bet it is possible in nearly all compilers.
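For example, feeding an empty translation unit to GCC (or Clang) dumps every predefined macro. The names and values below are only illustrative; the actual list depends on your compiler version, language standard and target:
g++ -E -dM -x c++ /dev/null | sort
#define __GNUC__ 12
#define __STDC__ 1
#define __cplusplus 201703L
#define __x86_64__ 1
...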
Boost Wave (a preprocessor library that happens to include a command line driver) includes a tracing capability to trace macro expansions. It's probably a bit more than you're asking for though -- it doesn't just display the final result, but essentially every step of expanding a macro (even a very complex one).
The clang preprocessor is somewhat similar. It's also basically a library that happens to include a command line driver. The preprocessor defines a macro_iterator type and macro_begin/macro_end of that type, that will let you walk the preprocessor symbol table and do pretty much whatever you want with it (including printing out the symbols, of course).
Related
I found the following snippet of code.
#include <stdio.h>

#define f(g,g2) g##g2

int main(void) {
    int var12 = 100;
    printf("%d\n", f(var, 12));
    return 0;
}
I understand that this will translate f(var,12) into var12.
My question is: in the macro definition, why didn't they just write the following?
#define f(g,g2) gg2
Why do we need ## to concatenate the text, rather than concatenating it ourselves?
If one writes gg2, the preprocessor will perceive that as a single token; it cannot tell that it is meant to be the concatenation of g and g2.
#define f(g,g2) g##g2
My opinion is that this is poor, unreadable code. It needs at least a comment (giving some motivation, explanation, etc.), and a short name like f is meaningless.
My question is: in the macro definition, why didn't they just write the following?
#define f(g,g2) gg2
With such a macro definition, f(x,y) would still be expanded to the token gg2, even if the author wanted the expansion to be xy.
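A quick way to see the difference is to run both spellings through the preprocessor. A small sketch, with the expansions shown as comments, roughly as gcc -E would print them:
#define CAT(g, g2)   g##g2
#define NOCAT(g, g2) gg2

int var12 = 100;
int x = CAT(var, 12);   /* expands to: int x = var12; */
int y = NOCAT(var, 12); /* expands to: int y = gg2;  -- 'gg2' is one token, the arguments are never used */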
Please take time to read e.g. the documentation of GNU cpp (and of your compiler, perhaps GCC) and later some C standard such as n1570 or a newer draft.
Consider also designing your software by (in some cases) generating C code (inspired by GNU bison, or GNU m4, or GPP). Your build machinery (e.g. your Makefile for GNU make) would process that as you want. In some cases (e.g. programs running for hours of CPU time), you might consider doing some partial evaluation and generating specialized code at runtime (for example, with libgccjit or GNU lightning). Pitrat's book Artificial Beings: The Conscience of a Conscious Machine explains and argues for that idea at book length.
Don't forget to enable all warnings and debug info in your compiler (e.g. with GCC use gcc -Wall -Wextra -g) and learn to use a debugger (like GNU gdb).
On Linux systems I sometimes like to generate some (more or less temporary) C code at runtime (from some kind of abstract syntax tree), then compile that code as a plugin, then dlopen(3) that plugin and dlsym(3) symbols inside it. For a stupid example, see my manydl.c program (which demonstrates that you can generate hundreds of thousands of C files and plugins in the same program). For serious examples, read books.
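Here is a minimal sketch of that idea. Error handling is mostly omitted; the file names, the generated function name and the build command are made up for illustration, and it assumes a POSIX system with gcc in the PATH:
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* 1. generate a trivial C file at runtime */
    FILE *f = fopen("/tmp/gen.c", "w");
    if (!f) return 1;
    fprintf(f, "int answer(void) { return 42; }\n");
    fclose(f);

    /* 2. compile it as a plugin (shared object) */
    if (system("gcc -fPIC -shared -O2 /tmp/gen.c -o /tmp/gen.so") != 0) return 1;

    /* 3. load the plugin and call the generated function */
    void *handle = dlopen("/tmp/gen.so", RTLD_NOW);
    if (!handle) return 1;
    int (*answer)(void) = (int (*)(void)) dlsym(handle, "answer");
    printf("%d\n", answer());
    dlclose(handle);
    return 0;
}
/* build with: gcc -Wall -g demo.c -ldl   (older glibc needs -ldl; newer ones do not) */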
You might also read books about Common Lisp or about Rust; both have a much richer macro system than C provides.
I am trying to learn preprocessor tricks that I find not so easy (Can we have recursive macros?, Is there a way to use C++ preprocessor stringification on variadic macro arguments?, C++ preprocessor __VA_ARGS__ number of arguments, Variadic macro trick, ...). I know the -E option shows the result of the whole preprocessor pass, but I would like to know whether options or other means exist to see the result step by step. Indeed, sometimes it is difficult to follow what happens when a macro calls a macro that calls a macro... with the mechanism of disabling contexts, painting blue, and so on. In brief, I wonder whether a sort of preprocessor debugger, with breakpoints and other tools, exists.
(Do not answer that this use of preprocessor directives is dangerous, ugly, horrible, not good practices in C, produces unreadable code ... I am aware of that and it is not the question).
Yes, this tool exists as a feature of Eclipse IDE. I think the default way to access the feature is to hover over a macro you want to see expanded (this will show the full expansion) and then press F2 on your keyboard (a popup appears that allows you to step through each expansion).
When I used this tool to learn more about macros it was very helpful. With just a little practice, you won't need it anymore.
In case anyone is confused about how to use this feature, I found a tutorial on the Eclipse documentation here.
This answer to another question is relevant.
When you do weird preprocessor tricks (which are legitimate) it is useful to ask the compiler to generate the preprocessed form (e.g. with gcc -C -E if using GCC) and look into that preprocessed form.
In practice, for a source file foo.c it makes (sometimes) sense to get its preprocessed form foo.i with gcc -C -E foo.c > foo.i and look into that foo.i.
Sometimes, it even makes sense to get that foo.i without line information. The trick here (removing line information contained in lines starting with #) would be to do:
gcc -C -E foo.c | grep -v '^#' > foo.i
Then you could indent foo.i and compile it, e.g. with gcc -Wall -c foo.i; you'll get error locations in the preprocessed file and you could understand how you got that and go back to your preprocessor macros (or their invocations).
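As a tiny illustration (the macros are made up; the expansion shown in the comment is what the corresponding line of foo.i looks like, modulo whitespace):
#define TWICE(x) ((x) + (x))
#define SQR(x)   ((x) * (x))

int r = SQR(TWICE(3));
/* in foo.i this line becomes:
   int r = ((((3) + (3))) * (((3) + (3)))); */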
Remember that the C preprocessor is mostly a textual transformation working at the file level. It is not possible to macro-expand a few lines in isolation, because prior lines might have played with #if combined with #define (perhaps in previously #include-d files) or with preprocessor options such as -DNDEBUG passed to gcc or g++. On Linux, see also feature_test_macros(7).
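A concrete illustration of that context dependence (a sketch for glibc; strcasestr is a GNU extension documented in feature_test_macros(7)):
#define _GNU_SOURCE 1   /* must appear before the first #include to have any effect */
#include <string.h>

const char *find(const char *haystack) {
    /* without the #define above (or -D_GNU_SOURCE on the command line),
       <string.h> does not declare strcasestr and this line no longer compiles cleanly */
    return strcasestr(haystack, "needle");
}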
A known example of expansion which works differently when compiled with or without -DNDEBUG passed to the compiler is assert. The meaning of assert(i++ > 0) (a very wrong thing to code) depends on it and illustrates that macro-expansion cannot be done locally (and you might imagine some prior header having #define NDEBUG 1 even if of course it is poor taste).
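A small sketch of that effect:
#include <assert.h>
#include <stdio.h>

int main(void) {
    int i = 1;
    assert(i++ > 0);   /* without -DNDEBUG the side effect happens and i becomes 2;
                          with -DNDEBUG the whole expression disappears and i stays 1 */
    printf("%d\n", i);
    return 0;
}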
Another example (very common, actually) where the macro expansion is context dependent is any macro using __LINE__ or __COUNTER__.
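For instance (a sketch; __COUNTER__ is a common extension in GCC, Clang and MSVC, not standard C):
#include <stdio.h>

#define WHERE() printf("line %d, expansion %d\n", __LINE__, __COUNTER__)

int main(void) {
    WHERE();   /* the very same macro call ... */
    WHERE();   /* ... expands to different tokens here */
    return 0;
}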
...
NB. You don't need Eclipse for all that, just a good enough source code editor (my preference is emacs but that is a matter of taste): for the preprocessing task you can use your compiler.
Another way to see what is wrong with your macro is to add the option that keeps the temporary files after compilation completes. For gcc that is the -save-temps option. You can then open the .i file and see the expanded macros.
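For example (assuming GCC; Clang accepts the same option):
gcc -save-temps -c foo.c
This leaves foo.i (the preprocessed source), foo.s (the generated assembly) and foo.o next to your sources; open foo.i to inspect the expanded macros.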
IDE indexers (like Eclipse's) will not help much here. As another answer states, they will not expand the macros until an error occurs.
I am working on a project where I need to inject code into C (or C++) files, driven by some smart comments in the source. The code to inject is provided by an external file. Does anyone know of any such attempts and can point me to examples? Of course I need to preserve the original line numbers with #line. My thinking is to replace cpp with a script which first does this and then calls the system cpp.
Any suggestions will be appreciated
Thanks
Danny
Providing your own modified external cpp program won't usually work, at least with recent GCC, where preprocessing is internal to the compiler (it is part of cc1 or cc1plus). Hence, there is no longer any separate cpp program involved in most GCC compilations (libcpp is an internal library of GCC).
If using mostly GCC, I would suggest injecting code with your own #pragmas (not comments!). You could add your own GCC plugin, or code your own MELT extension, for that purpose (since GCC plugins can add pragmas and builtins but cannot currently affect preprocessing).
As Ira Baxter commented, you could simply put some weird macro invocations and define these macros in separate files.
I cannot tell exactly what precise kind of code injection you want.
Alternatively, you could generate your C or C++ code with your own generator (which could emit #line directives) and feed that to gcc.
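A minimal sketch of such a generator; the input file name, the marker comment and the injected call are all made up for illustration:
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *orig = "orig.c";              /* hypothetical annotated input file */
    FILE *in = fopen(orig, "r");
    if (!in) return 1;
    char line[1024];
    int n = 0;
    while (fgets(line, sizeof line, in)) {
        n++;
        /* re-emit the original line, but first tell the compiler where it came from */
        printf("#line %d \"%s\"\n%s", n, orig, line);
        if (strstr(line, "/*@inject*/"))      /* hypothetical marker comment */
            puts("injected_hook();");         /* hypothetical injected code */
    }
    fclose(in);
    return 0;
}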
Some C compilers provide -D to define a macro on the command line and -U to undefine one (built-in or defined with -D).
I have used -D, but I'm curious about -U. What are the cases where it's useful in practice?
Here's one use case (I'm sure there are others):
When your C compiler is being called from another application that generates the source for it, you won't easily get access to that source to modify it by hand (although most such compilers have a "keep C" option, editing generated code by hand is something to avoid). Usually the first compiler will have a bunch of options to set, and will also let you pass further options to the C compiler yourself in an "options for the C compiler" argument (for instance, it might do this to let you control C compiler optimisation levels without assuming that the compiler is GCC). And sometimes the options for how to ultimately compile the generated code are controlled by macros built into the C output: since that output doesn't exist at the time you're entering command-line options, -U and -D may be the only way to set those flags.
Real-world example: Gambit-C defaults to the option to output one massive C function instead of many separate ones, which (according to the docs) makes it easier for a C compiler to optimise the final code. It actually outputs the same C either way, toggling the behaviour with the __SINGLE_HOST macro. But compiling one huge function can take forever (or just fail) on an older machine, so there needs to be a way to turn this behaviour off. -U__SINGLE_HOST as one of the passed-through arguments to the C compiler can make it possible to actually compile Gambit projects on older computers while still enjoying some level of optimisation.
In this case the behaviour of __SINGLE_HOST could have been handled by the Gambit compiler instead, but while not strictly necessary, it gives more freedom to the person designing the first compiler. Which is always good.
The more generalised version of this answer would be that -U is useful any time your build system passes a bunch of -D arguments, and you don't want all of them; it can un-set default definitions after the system sets them.
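For example (a sketch; GCC processes -D and -U left to right, so a later -U cancels an earlier -D; VERBOSE_LOGGING is a made-up name):
#include <stdio.h>

#ifdef VERBOSE_LOGGING
#define LOG(msg) fprintf(stderr, "%s\n", msg)
#else
#define LOG(msg) ((void)0)
#endif

int main(void) {
    LOG("starting up");
    return 0;
}
/* gcc -DVERBOSE_LOGGING demo.c                    -> logging enabled by the build system
   gcc -DVERBOSE_LOGGING -UVERBOSE_LOGGING demo.c  -> your trailing -U switches it back off */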
I can only think of two cases where this can be useful:
If you have a #ifdef MY_MACRO or #ifndef MY_MACRO in your code, and MY_MACRO is defined (probably built in; otherwise you could just delete the definition), and you want to compile without this macro, to change which branch of the #ifdef is taken (a sketch follows below).
Or if you want to redefine a macro with a different definition, you "should" undefine it first (I write "should" because the compiler complains if you don't, but everything works fine anyway).
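As a sketch of the first case (__unix__ is predefined by GCC on Unix-like targets; the file is made up for illustration):
/* port.c */
#ifdef __unix__
const char *platform = "unix";
#else
const char *platform = "generic";
#endif
/* gcc -c port.c             -> takes the __unix__ branch on a Unix-like system
   gcc -U__unix__ -c port.c  -> forces the generic branch without touching the code */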
In gcc, how can I check what C preprocessor definitions are in place during the compilation of a C program, in particular what standard or platform-specific macro definitions are defined?
Predefined macros depend on the standard and the way the compiler implements it.
For GCC: http://gcc.gnu.org/onlinedocs/cpp/Predefined-Macros.html
For Microsoft Visual Studio 8: http://msdn.microsoft.com/en-us/library/b0084kay(VS.80).aspx
This Wikipedia page http://en.wikipedia.org/wiki/C_preprocessor#Compiler-specific_predefined_macros lists how to dump some of the predefined macros.
A likely source of the predefined macros for a specific combination of compiler and platform is the Predef project at Sourceforge. They are attempting to maintain a catalog of all predefined macros in all C and C++ compilers on all platforms. In practice, they have coverage of a fair number of platforms for GCC, and a smattering of other compilers.
They achieved this through a combination of careful reading of documentation, as well as a shell script that figures out what macros are predefined the hard way: it tries them. My understanding is that it actually tries every string it can find in the executable image of the compiler and/or preprocessor to see if it has a predefined meaning.
They will happily add any info they don't have yet to their database.
A program may define a macro at one point, remove that definition later, and then provide a different definition after that. Thus, at different points in the program, a macro may have different definitions, or have no definition at all.