How to figure out a specific kernel macro value while building a kernel module.
There are lots of macro options, and I wish to know the value assigned to a specific macro during the build, as well as the line number where it is defined.
I read about this quite a while ago, and I know it is possible.
For example:
make modules SUBDIRS=drivers/net/e1000/
Now, e1000 uses a macro called HAVE_VLAN_IN_HW.
While building the module, I wish to know exactly where it comes from (the macro definition and its value).
Most of us use a Linux cross-reference (LXR) to find where functions, macros, and variables are declared.
I should have searched with better keywords, such as "preprocessing output".
You can find all the relevant info here:
http://kernelnewbies.org/FAQ/KernelCrossCompilation
Yet another facet of kernel compilation is that it helps you generate preprocessed files. This is extremely useful when you suspect something could be wrong with your macros. In the 2.4 days, we had to capture the compiler command line, add the -E option, and redirect the gcc preprocessor's output to a file. In 2.6, this is built into the kernel build system. Here is how.
Say I want to generate the preprocessor output for kernel/dma.c:
# make kernel/dma.i
Done. Open kernel/dma.i to see what the preprocessor did to dma.c.
This works for a module (code that is not part of the kernel proper) too.
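Tying this back to the original question, a minimal sketch (assuming HAVE_VLAN_IN_HW is referenced from e1000_main.c; substitute whichever source file actually uses it):

make drivers/net/e1000/e1000_main.i

Then search the generated file. The preprocessor's linemarkers (lines of the form # 123 "include/linux/netdevice.h") tell you which header each chunk of output came from. Note that plain -E output expands macros rather than preserving their definitions; if you want the #define lines themselves kept in the output, adding -dD to the compile flags (e.g. via EXTRA_CFLAGS, depending on your kernel version) makes gcc emit each #define together with linemarkers locating its file and line.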
I have two boards, each with the same MCU as the target. The difference is that the peripherals are not 100% the same (let's say they overlap by maybe 90%). So far my colleague has two macros, and he comments them in or out so that #ifdef/#endif blocks tell the preprocessor which includes to use and which to ignore.
I'm thinking of better ways to do this. I don't like the idea of people having to search for the correct line to comment out each time they want the right build for their hardware system; this should be automated and/or better documented, IMHO.
The best I came up with is multiple "build sets", which would then be called "hardware-1" and "hardware-2" or something (more descriptive, of course...). These build sets would each have different "-I" options to define the two macros my colleague already used.
For cmake I found this thread:
Define preprocessor macro through CMake?
Is this the way to go, or are there better, more elegant ways? How would you solve this situation? The question perhaps also comes down to "What are the best practices to tackle this?"
Thanks for your input
J
The best I came up with is multiple "build sets", which would then be called "hardware-1" and "hardware-2" or something (more descriptive, of course...). These build sets would each have different "-I" options to define the two macros my colleague already used.
You mean -D, not -I, but yes, defining the macros via the compiler command line is one of the traditional approaches to this. How you might achieve that depends somewhat on your build system, but with a hand-rolled makefile, it is common to define make variables for target-specific flags and to put those, appropriately commented, at the top of the top-level makefile. Sometimes these are intended to be modified at build time, but sometimes there are just different makefiles, or else which set of flags to use is controlled by the target requested on the make command line.
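For illustration, a minimal sketch of the "commented flags at the top of the makefile" variant (HARDWARE_1 and HARDWARE_2 are hypothetical names standing in for your colleague's macros):

# Top of Makefile: select the target hardware here.
# Uncomment exactly one of the following lines:
TARGET_FLAGS = -DHARDWARE_1
#TARGET_FLAGS = -DHARDWARE_2

CFLAGS += $(TARGET_FLAGS)

This still requires editing a line, but now it is one clearly documented line in one well-known place, rather than a comment buried somewhere in the source.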
For cmake I found [...]. Is this the way to go or are there better ways that are more elegant?
If you are using cmake already then yes, cmake's facilities for adding macro definitions to the compiler command line would be a great approach. If you are not using cmake then no, switching to a cmake-based build system would be way overkill for just solving the problem described. For systems where CMake will generate makefiles, it is basically a wrapper for what I already described.
I happen to be a fan of the Autotools. If you have an Autotools-based build system then there are different ways to set up this sort of thing, but if you don't, then setting up autotooling for just this purpose would be overkill. It is perhaps worth mentioning, however, that a standard Autotools approach would work by putting the definitions of the adjustable control macros in a header file, and having all the source files include that header. The Autotools would generate that header programmatically, but that's not essential -- you could set up such a header manually and update it as needed, and that would still solve the problem of knowing where to look for the macro definitions.
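As a concrete example, a hand-maintained version of such a header might be as simple as this (config.h and the macro names are hypothetical):

/* config.h -- board selection, edited by hand; see the build docs. */
/* Enable exactly one of the following: */
#define HARDWARE_1 1
/* #define HARDWARE_2 1 */

Every source file then starts with #include "config.h", so there is a single, documented place to look when switching boards.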
Normally one can specify preprocessor defines as part of the compilation command.
gcc -Wall -Darduino embedded.c
So assuming Linux/Make you could use
make clean arduino
or
make clean atmega2560
and simply have two targets named that in the make file.
Each one would have -Darduino or -Datmega2560 (note the capital D; lowercase -d means something else to gcc) as part of its compile command.
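A minimal sketch of such a makefile (assuming a single source file embedded.c, as in the gcc line above; recipe lines must be indented with tabs):

arduino: embedded.c
	gcc -Wall -Darduino -o embedded embedded.c

atmega2560: embedded.c
	gcc -Wall -Datmega2560 -o embedded embedded.c

Invoking make arduino (or make clean arduino, as above) then builds the right variant.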
If you are using some sort of IDE like MSVC, on the project properties page, under C/C++ you would find a Preprocessor area, and you can add one or the other as part of the preprocessor defines.
Preprocessor Definitions arduino;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)
I'm writing a small operating system for microcontrollers in C (not C++, so I can't use templates). It makes heavy use of some gcc features, one of the most important being the removal of unused code. The OS doesn't load anything at runtime; the user's program and the OS source are compiled together to form a single binary.
This design allows gcc to include only the OS functions that the program actually uses. So if the program never uses i2c or USB, support for those won't be included in the binary.
The problem is when I want to include optional support for those features without introducing a dependency. For example, a debug console should provide functions to debug i2c if it's being used, but including the debug console shouldn't also pull in i2c if the program isn't using it.
The methods that come to mind to achieve this aren't ideal:
Have the user explicitly enable the modules they need (using #define), and use #if to only include support for them in the debug console if enabled. I don't like this method, because currently the user doesn't have to do this, and I'd prefer to keep it that way.
Have the modules register function pointers with the debug module at startup. This isn't ideal, because it adds some runtime overhead and means the debug code is split up over several files.
Do the same as above, but using weak symbols instead of pointers. But I'm still not sure how to actually accomplish this.
Do a compile-time test in the debug code, like:
if (i2cInit is used) {
    debugShowi2cStatus();
}
The last method seems ideal, but is it possible?
This seems like an interesting problem. Here's an idea, although it's not perfect:
Two-pass compile.
What you can do is first compile the program with a flag like -DFINDING_DEPENDENCIES=1. Surround all the dependency checks with #ifs on this flag (I'm assuming you're not as concerned about adding extra #ifs there).
Then, when the compile is done (without any optional features), use nm or similar to detect the usage of functions/features in the program (such as i2cInit), and format this information into a .h file.
#ifndef FINDING_DEPENDENCIES
#include "dependency_info.h"
#endif
Now the optional dependencies are known.
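For example, the second pass's debug code might consume that generated header like this (USES_I2C_INIT is a hypothetical macro that your nm-driven generator would emit, e.g. after something like nm --defined-only firstpass.elf | grep i2cInit):

/* dependency_info.h, generated after the first pass, might contain:
 *   #define USES_I2C_INIT 1
 */
void debugShowStatus(void)
{
#ifdef USES_I2C_INIT
    debugShowi2cStatus();   /* only compiled in when i2c is actually used */
#endif
}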
This still doesn't seem like a perfect solution, but ultimately, it's mostly a chicken-and-egg problem. When compiling, the compiler doesn't know which symbols are going to be gc'd out. You basically need to get this information from the linker stage and feed it back to the compilation stage.
Theoretically, this might not increase build times much, especially if you write the generated header to a temporary file and only replace the real one when it differs. You'd need to use different object directories for the two passes, though.
Also this might help (pre-strip, of course):
How can I view function names and parameters contained in an ELF file?
I am working on a project where I need to inject code into C (or C++) files, triggered by certain smart comments in the source. The injected code is provided by an external file. Does anyone know of any such attempts and can point me to examples? Of course, I need to preserve the original line numbers with #line. My thinking is to replace cpp with a script that first does the injection and then calls the system cpp.
Any suggestions will be appreciated
Thanks
Danny
Providing your own modified cpp as an external program won't usually work, at least with recent GCC, where preprocessing is internal to the compiler proper (it is part of cc1 or cc1plus). Hence, there is no longer a separate cpp program involved in most GCC compilations (libcpp is an internal library of GCC).
If using mostly GCC, I would suggest injecting code with your own #pragmas (not comments!). You could add your own GCC plugin, or code your own MELT extension, for that purpose (since GCC plugins can add pragmas and builtins but cannot currently affect preprocessing).
As Ira Baxter commented, you could simply insert some distinctive macro invocations and define those macros in separate files.
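A sketch of that macro approach (all names here are hypothetical):

/* injection.h -- generated by your tool; a build without injection
 * just defines the marker away: */
#define INJECT_HERE(tag)

/* user code marks injection points explicitly: */
#include "injection.h"
void handler(void)
{
    INJECT_HERE(before_work);
    do_work();
}

Your generator would emit a different injection.h (or per-tag macros) containing the injected code; since the markers are ordinary macros on their own lines, the user's line numbers are preserved without any #line tricks.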
I can't quite guess what precise kind of code injection you want.
Alternatively, you could generate your C or C++ code with your own generator (which could emit #line directives) and feed that to gcc.
I have two large framework libraries whose header files are included in my project. Either one works flawlessly, but including both causes erratic behaviour (with no error message pointing at any macro).
I assume that they both #define a macro of the same name. What is the most efficient way to identify the problematic macro?
I assume that they both #define a macro of the same name.
That should generate at least a warning by the compiler (if they are in the same translation unit).
How do I identify redefined macros in C/C++?
As far as I know, there is no straightforward way.
Either one works flawlessly, but including both causes erratic behaviour
Can you please give us some details on the erratic behaviour? What actually happens? What makes you think it's the macro definitions?
If the header files are badly written and do #undef SYMBOL ... #define SYMBOL something-else, then you can't trace this. Simply #defining a macro twice should at least produce a warning. You'd have to look more closely at the 'erratic behavior'.
Try looking at the preprocessed output to determine what's different about it when you #include the header files and when you don't. With gcc and with MSVC, you'd compile with -E to print the preprocessor output to stdout. (It will likely be many thousands of lines, so you'll want to redirect it to a file.)
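A minimal sketch of that comparison (the file names are hypothetical; -dD tells gcc to keep the #define directives in the output, which is exactly what you want when hunting a redefinition):

gcc -E -dD only_first.c > first.i    # includes only the first library's headers
gcc -E -dD with_both.c  > both.i     # includes both libraries' headers
diff first.i both.i | less           # look for a macro whose definition changes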
You should be able to run ctags over your source.
ctags can generate a tags file that, amongst other things, contains the names and locations of the macros in your C files.
You can control the types of symbols that ctags will store in your tags file through the use of the --c-kinds option.
e.g.
ctags --c-kinds=+d -f tags --recurse ./your_source_directory/
You can then search for duplicates in the resultant tags file.
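For example, since the tags file is tab-separated with the symbol name in the first column, a quick duplicate scan might look like this (a sketch; the leading !_TAG_ pseudo-tag lines are harmless here):

ctags --c-kinds=+d -f tags --recurse ./your_source_directory/
cut -f1 tags | sort | uniq -d    # names (including macros) defined in more than one place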
grep for #define?
Are you sure the problem isn't something other than a macro (for example, pragmas for structure packing, global memory allocators, class names clashing in the global namespace, messing with the locale ...)?
Compile with all warnings on; the compiler should tell you when a macro "is already defined" (maybe you can modify the code to fix this).
If (1) doesn't help, then try creating function wrappers for each library. This way you avoid including the conflicting headers: you include the wrapper headers and call the wrapped functions instead. This is laborious, but it's sometimes the only way to make two libraries coexist in an application. See the sketch below.
Basically, solution (2) creates a separation between the libraries. An example of such a conflict is ACE with wxWidgets (version 2.8) when you are forced to use precompiled libraries built with different options (one library Unicode, the other ASCII).
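A minimal sketch of the wrapper idea from (2) (all names are hypothetical):

/* a_wrapper.h -- the only header the rest of the application sees for library A */
void a_do_thing(int x);

/* a_wrapper.c -- the only translation unit that includes library A's headers */
#include <library_a.h>
#include "a_wrapper.h"

void a_do_thing(int x)
{
    lib_a_do_thing(x);
}

Because library A's headers are included in exactly one .c file, its macros can no longer collide with library B's anywhere else in the program.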
Shouldn't be hard, right? Right?
I am currently trawling the OpenAFS codebase to find the header definition of pioctl. I've thrown everything I've got at it: checked ctags, grepped the source code for pioctl, etc. The closest I've got to a lead is the fact that there's a file pioctl_nt.h that contains the definition, except it's not actually what I want because none of the userspace code directly includes it, and it's Windows specific.
Now, I'm not expecting you to go and download the OpenAFS codebase and find the header file for me. I am curious, though: what are your techniques for finding the header file you need when everything else fails? What are the worst case scenarios that could cause a grep for pioctl in the codebase to not actually come up with anything that looks like a function definition?
I should also note that I have access to two independent userspace programs that have done it properly, so in theory I could do an O(n) search for the function. But none of the header files pop out to me, and n is large...
Edit: The immediate issue has been resolved: pioctl() is defined implicitly, as shown by this:
AFS.xs:2796: error: implicit declaration of function ‘pioctl’
If grep -r and ctags are failing, then it's probably being defined as the result of some nasty macro(s). You can try making the simplest possible file that calls pioctl() and compiles successfully, and then preprocessing it to see what happens:
gcc -E test.c -o test.i
grep pioctl -C10 test.i
There are compiler options to show the preprocessor output. Try those. In a horrible pinch, when my head is completely empty of any possible definition, the -E option (in most C compilers) does nothing but spew out the preprocessed code.
Per the requested information: normally I just capture the compile command for the file in question as it is printed on the screen, do a quick copy and paste, and put -E right after the compiler invocation. The result spews preprocessor output to the screen, so redirect it to a file. Look through that file; all of the macros and other silliness have already been taken care of.
Worst case scenarios:
K&R style prototypes
Macros are hiding the definition
Implicit Declaration (per your answer)
Have you considered using cscope (available from SourceForge)?
I use it on some fairly significant code sets (25,000+ files, ranging up to about 20,000 lines in a file) with good success. It takes a while to derive the file list (5-10 minutes) and longer (20-30 minutes) to build the cross-reference on an ancient Sun E450, but I find the results useful.
On an almost equally ancient Mac (dual 1GHz PPC 32-bit processors), cscope run on the OpenAFS (1.5.59) source code comes up with quite a lot of places where the function is declared, sometimes inline in code, sometimes in headers. It took a few minutes to scan the 4949 files, generating a 58 MB cscope.out file.
openafs-1.5.59/src/sys/sys_prototypes.h
openafs-1.5.59/src/aklog/aklog_main.c (along with comment "Why doesn't AFS provide these prototypes?")
openafs-1.5.59/src/sys/pioctl_nt.h
openafs-1.5.59/src/auth/ktc.c includes a define for PIOCTL
openafs-1.5.59/src/sys/pioctl_nt.c provides an implementation of it
openafs-1.5.59/src/sys/rmtsysc.c provides an implementation of it (and sometimes afs_pioctl() instead)
The rest of the 184 instances found seem to be uses of the function, or documentation references, or release notes, change logs, and the like.
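For anyone wanting to reproduce a scan like this, the basic invocation is roughly (a sketch; -b builds the cross-reference without launching the browser, -R recurses into subdirectories, -q adds an inverted index for faster lookups):

cd openafs-1.5.59
cscope -b -R -q
cscope -d    # browse the existing database without rebuilding it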
The current working theory that we've decided on, after poking at the preprocessor and not finding anything either, is that OpenAFS is letting the compiler infer the prototype of the function, since it returns an integer and takes pointer, integer, pointer, integer as its parameters. I'll be dealing with this by merely defining it myself.
Edit: Excellent! I've found the smoking gun:
AFS.xs:2796: error: implicit declaration of function ‘pioctl’
While the original general question has been answered, if anyone arrives at this page wondering where to find a header file that defines pioctl:
In current releases of OpenAFS (1.6.7), a prototype for pioctl is defined in sys_prototypes.h. But at the time this question was originally asked, that file did not exist, and there was no prototype for pioctl visible from outside the OpenAFS code tree.
However, most users of pioctl probably want, or are at least okay with using, lpioctl ("local" pioctl), which always issues a syscall on the local machine. There is a prototype for this in afssyscalls.h (and these days, also sys_prototypes.h).
The easiest option these days, though, is just to use libkopenafs. For that, include kopenafs.h, use the function k_pioctl, and link against -lkopenafs. That tends to be a much more convenient interface than trying to link with OpenAFS libsys and other stuff.
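A minimal sketch of that route (the k_pioctl call here is assumed to mirror the classic pioctl signature, per the description above; check kopenafs.h on your system for the real declaration):

#include <kopenafs.h>

/* struct ViceIoctl blob; ... */
/* int code = k_pioctl(path, opcode, &blob, follow);  -- assumed signature */

Compile and link with something like: cc myprog.c -lkopenafs -o myprog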
Doesn't it usually say in the man page synopsis?