I'm trying to build GNU binutils with the behaviour enabled by defining the macro SYSV386_COMPAT to 0, which changes the way it generates some FPU opcodes.
I can easily go into the header file and set the value manually, but how would I invoke the configure script so as to specify the equivalent of #define SYSV386_COMPAT 0 on the command line? I would much prefer to pass something on the command line if at all possible, simply because the change is temporary and I don't think I should be hacking the source. Having said that, I have tried to read at least some of the FM, but had no luck with inserting an AC_DEFINE(SYSV386_COMPAT, 0) in either binutils/configure.in or gas/configure.in.
OK, so continued searching through other autoconf-tagged answers took me to the second comment for this answer.
I was able to use a similar syntax to invoke configure to get the result I was after:
./configure CPPFLAGS=-DSYSV386_COMPAT=0 --prefix=/path/to/my/deploy/dir
Thanks to William Pursell for his comment above pointing out the benefits of using CPPFLAGS instead of CFLAGS, and to anyone who was preparing to answer this question. If you have any further comments about the "best" way of solving this problem then please add to this thread for those coming to it later.
Best wishes,
Michael
I have two boards, each with the same MCU as target. The difference is that the peripherals are not 100% the same (let's say they overlap by maybe 90%). So far my colleague has two macros, and he either comments them out or not, so that #ifdef/#endif can be used to tell the preprocessor which includes to use and which to ignore.
I'm thinking of better ways to do this. I don't like the idea of people having to search for the correct line to comment each time they want the correct build for their hardware; this should be automated and/or better documented, IMHO.
The best I came up with is multiple "build sets" that would then be called "hardware-1" and "hardware-2" or something (of course more descriptive...). These build sets would then each have different "-I" options to define the two macros my colleague already used.
For cmake I found this thread:
Define preprocessor macro through CMake?
Is this the way to go, or are there better, more elegant ways? How would you solve this situation? Maybe the question also comes down to "What are the best practices to tackle this?"
Thanks for your input
J
The best I came up with is multiple "build sets" that would then be called "hardware-1" and "hardware-2" or something (of course more descriptive...). These build sets would then each have different "-I" options to define the two macros my colleague already used.
You mean -D, not -I, but yes, defining the macros via the compiler command line is one of the traditional approaches to this. How you might achieve that depends somewhat on your build system, but with a hand-rolled makefile it is common to define make variables for target-specific flags, and to put those, appropriately commented, at the top of the top-level makefile. Sometimes these are intended to be modified at build time, but sometimes there are just different makefiles, or else which set of flags to use is controlled by the target requested on the make command line.
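For illustration, a minimal hand-rolled makefile along these lines keeps the choice in one documented place (the names HWFLAGS, HARDWARE_1 and HARDWARE_2 are purely illustrative, not from your project):

# Select the flag set for your board here, or override it at build time,
# e.g.:  make HWFLAGS=-DHARDWARE_2
HWFLAGS = -DHARDWARE_1
#HWFLAGS = -DHARDWARE_2

CC     = gcc
CFLAGS = -Wall -O2 $(HWFLAGS)

# (recipe lines must be indented with a tab)
firmware.elf: main.o peripherals.o
	$(CC) $(CFLAGS) -o $@ $^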
For cmake I found [...]. Is this the way to go or are there better ways that are more elegant?
If you are using cmake already then yes, cmake's facilities for adding macro definitions to the compiler command line would be a great approach. If you are not using cmake then no, switching to a cmake-based build system would be way overkill for just solving the problem described. For systems where CMake will generate makefiles, it is basically a wrapper for what I already described.
I happen to be a fan of the Autotools. If you have an Autotools-based build system then there are different ways to set up this sort of thing, but if you don't, then setting up autotooling for just this purpose would be overkill. It is perhaps worth mentioning, however, that a standard Autotools approach would work by putting the definitions of the adjustable control macros in a header file, and having all the source files include that header. The Autotools would generate that header programmatically, but that's not essential -- you could set up such a header manually and update it as needed, and that would still solve the problem of knowing where to look for the macro definitions.
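As a sketch of that manual variant (the file and macro names here are illustrative): a single header, say hw_config.h, holds the adjustable macros, and every source file includes it, so there is exactly one documented place to edit:

/* hw_config.h -- the one place where the target board is selected. */
#ifndef HW_CONFIG_H
#define HW_CONFIG_H

/* Enable exactly one of these before building: */
#define HARDWARE_1
/* #define HARDWARE_2 */

#endif /* HW_CONFIG_H */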
Normally one can specify preprocessor defines as part of the compilation command.
gcc -Wall -Darduino embedded.c
So assuming Linux/Make you could use
make clean arduino
or
make clean atmega2560
and simply have two targets named that in the make file.
Each one would have -Darduino or -Datmega2560 as part of its compile command.
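A minimal sketch of such a makefile, using hypothetical target and file names:

CC = gcc

arduino: embedded.c
	$(CC) -Wall -Darduino -o embedded embedded.c

atmega2560: embedded.c
	$(CC) -Wall -Datmega2560 -o embedded embedded.c

clean:
	rm -f embedded

With that in place, make clean arduino and make clean atmega2560 behave as described above.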
If you are using some sort of IDE like MSVC, on the project properties page, under C/C++ you would find a Preprocessor area, and you can add one or the other as part of the preprocessor defines.
Preprocessor Definitions: arduino;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)
My use case is as follows. In the automated testing of one of my libraries I use the mktemp function in order to obtain a filename in order to create a temporary file. Xcode correctly complains about this as a security risk, but in this case I have no option (the API I must follow demands filenames) and I am willing to take the risk since the code is only the test code and not in an actual service. (Hence the security risk is not applicable.)
I suppose I could create my own version of a mktemp that is local to my testing, but I would prefer not to write things that have already been written.
So what I am wondering is whether there is a way I can tell the analyzer to stop complaining about this one instance of the problem. Note that this differs from the question asked in Is it possible to suppress Xcode 4 static analyzer warnings? in that this is not a false positive, and I do not want to suppress analysis of the whole file or all instances of this check. I just want to suppress this one instance (i.e. something similar to the cppcheck-suppress comment in Cppcheck).
@JonathanLeffler's last comment was absolutely correct, and I don't know how I missed it when I read the question I referenced. The following code segment does exactly what I want: it suppresses the analyzer warning for this one use of mktemp while leaving the check active for all other uses.
#if defined(__clang_analyzer__)
/* Analyzer build: use a fixed name so this one mktemp() warning is not emitted. */
char *filename = "/tmp/somename";
#else
/* mktemp() rewrites its argument in place, so the template must be a writable
   buffer (not a string literal) ending in trailing 'X' characters. */
char tmpl[] = "/tmp/prefixXXXXXX";
char *filename = mktemp(tmpl);
#endif
A while back I asked a question about this subject and "solved" it by using Cygwin instead with its XWin utility, but I've come back to this issue again since XWin does not use my GPU and creates a severe bottleneck in simulations as a result. MinGW/MSYS, on the other hand, DOES use my GPU for rendering, which is a huge help, but there are some rough areas that need smoothing over, specifically with readlink.
Basically, the src/makefile for rebound (https://github.com/hannorein/rebound) says this:
PREDEF+= -D$(shell basename `readlink gravity.c` '.c' | tr '[a-z]' '[A-Z]')
PREDEF+= -D$(shell basename `readlink boundaries.c` '.c' | tr '[a-z]' '[A-Z]')
PREDEF+= -D$(shell basename `readlink collisions.c` '.c' | tr '[a-z]' '[A-Z]')
If my understanding is correct, this is supposed to find which versions of gravity, boundaries and collisions I specified, and add those to PREDEF so the compiler uses the right versions of gravity, boundaries and collisions. However, it does not seem to work in MSYS. What it ends up spitting out for PREDEF is this:
-DOPENGL -D.C -D.C -D.C
Obviously it did not get anything back from the code above. This results in a "macro names must be identifiers" error, of course. I can work around this by adding any of the special options between readlink and the filename, -f for instance, but then it only spits out
-DOPENGL -DGRAVITY -DBOUNDARIES -DCOLLISIONS
Which is not right because it should have extra bits, like so:
-DOPENGL -DGRAVITY_DIRECT -DBOUNDARIES_OPEN -DCOLLISIONS_NONE
Now, if I don't want any special gravity, boundaries or collisions, the workaround is okay, but only because (I'm guessing) it defaults to those if there's nothing specified after each macro name. But if I DO want something special, like the more efficient gravity tree code, or actual collisions, the shortened name resulting from the workaround will not help it find anything, and so compiling fails because certain functions it needs from the special files are missing.
And so I'm pretty stuck at the moment. I would very much like to be able to use codes other than the defaults, but MSYS is acting funny with readlink and not finding the right thing. As I said, it worked fine under the Cygwin/XWin setup. I feel like there must be some library I'm missing or some hidden syntax difference between the two environments that needs to be accounted for, but I can't find anything.
Here's an example of the links it should be reading (at least I think this is what is being read, I'm still learning makefiles):
ln -fs gravity_tree.c ../../src/gravity.c
ln -fs boundaries_open.c ../../src/boundaries.c
ln -fs collisions_none.c ../../src/collisions.c
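For reference, on a system where readlink works and the links were created as above, the pipeline resolves like this for gravity.c (boundaries.c and collisions.c behave the same way):

$ readlink gravity.c
gravity_tree.c
$ basename `readlink gravity.c` '.c' | tr '[a-z]' '[A-Z]'
GRAVITY_TREE

so PREDEF should end up containing -DGRAVITY_TREE, -DBOUNDARIES_OPEN and -DCOLLISIONS_NONE for these particular links.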
If anyone can tell me why this would work on an Xwin command line but not MSYS, I'd greatly appreciate it.
Why on earth do you expect readlink to work in MSYS? Where did you even get whatever readlink.exe is being invoked, (if that is what is being executed)? There is no readlink command in a standard MSYS installation. Perhaps you discovered it in MinGW.org's msys-coreutils-ext package? If this is the case, you should note the comment within the description of that package, (as seen via MinGW.org's mingw-get installer):
The msys-coreutils-bin subpackage contains those applications that were historically part of the standard MSYS installation. The associated msys-coreutils-ext subpackage contains the rest of the coreutils applications that have been (nominally) ported to MSYS -- usually these are less often used, and are not guaranteed to work: e.g. 'su.exe', 'chroot.exe' and 'mkfifo.exe' are known to be broken.
and, it seems that we may add readlink.exe to that list of "known to be broken" applications.
It may also be worth noting that readlink is not among the list of supporting tools, which a GNU Coding Standards conforming application is permitted to invoke from either its configure script, or its makefile. Thus, there is little incentive for the MinGW.org developers, (who maintain MSYS), to address the issue of making readlink.exe work, (although patches from an independent developer, with such an incentive, would be welcomed).
As a final qualification, and as one comment on the question notes, ln -s creates copies of files; it does not create symbolic links. How could it? MSYS itself dates from an era when Windows didn't support symbolic links ... indeed, even today its support for them is flaky. At the time when MSYS was published, either copying the files, or creating NTFS hard links, was the best compromise MSYS could offer, in the situation where a script invoked ln -s. Consequently, it would become incumbent upon any developer submitting patches to make readlink.exe work, to also address the issue of updating ln.exe, such that it could create the symbolic links, (in an OS version dependent fashion), which readlink.exe would then read.
I'm sorry if this isn't the answer you hoped for, but unless someone devotes some effort to updating MSYS, so that it can make use of the (unreliable) symbolic link feature in more recent Windows versions, you need to find a different approach; current MSYS does not support symbolic links, even if the underlying OS now does.
I am new to the C programming language and gcc.
I am trying to decipher a rather complex C program. I would like to read a helpful listing file instead of the source file.
I am looking for a listing file created by the gcc compiler that contains:
the source code for all the includes
xref = cross reference listing
reference to where a variable is declared. For example, if a line contains i++;, then a note saying where i is declared.
I did a search for this, but gcc has so many options, I got lost.
If there is a better place to ask my question, please let me know.
Well, I AM old-school, and what the OP needs is preprocessor output, and yes, it can be more edifying than an IDE. The preprocessor handles all of the # directives, like #include and #ifdef, so it shows you what eventually becomes the input to the compiler proper.
The g++ man page explains the 4 steps:
preprocessing, compilation, assembly and linking
and it goes on to explain that the sequence can be stopped at any point. Then under "Preprocessor Options", the way to control this is explained. As another post stated, -E will do the trick, but that is only part of the answer. For finer control use the -f family of options, such as -fdirectives-only. So probably what the OP wants is:
gcc -E -fdirectives-only -o MySrc.lst MySrc.c
For those using C++, I recommend using g++ directly:
g++ -E -fdirectives-only -o MySrc.lst MySrc.cpp
The desired listing is then in MySrc.lst
I understand your dilemma. Years ago, I tried to do exactly what you are doing, but eventually I gave up. Although one literally could do it, the result would drown the relevant code in so much code irrelevant to the problem at hand as to be useless.
I am afraid that you are going to have to learn how to read C code in the C way. If the code is complex, and you're a beginner, then -- for the moment -- you're probably in over your head.
If you want to try it anyway, then look at the names of the source's several *.h "header" files. Pick out three or four header files that seem to you likely to address the central part of the problem. Read these files first. Broaden your reading from there. This isn't easy, until you get the swing of it.
Good luck.
A different approach: if you use, e.g., Vim, then run cscope on the code.
For example I have Ctrl+\ as a cscope trigger. If I am in a function:
01 #define SOME_BLAH 33
02
03 void foo() {
04 printf("%d\n", SOME_BLAH); /* <- cursor on SOME_BLAH;
05 trigger + G jumps to line 1 */
06 }
07
08 void bar() {
09 foo(); /* <- Cursor on foo I hit trigger, G and I jump to line 3 */
10 }
Equally you can jump like this across files, into includes, list functions calling a specific function, list functions a function is calling, list files including a file, jump to where variables are defined etc. All within a couple of key strokes.
Every jump is added to a LIFO stack, and Ctrl+t brings you back to where you entered the last jump-to command.
Additionally, add e.g. Taglist and you get a list at the side of the window with all defines, variables, functions etc., sorted.
Another option is to compile the code with, e.g., -ggdb and run it in an IDE like Code::Blocks, or use DDD or the like, and step through the code as it runs as a process. That can be quite educational.
To answer your first point: it is possible to see the source code for all the includes by looking at the preprocessor output using gcc -E. However, that code will probably be more difficult to understand, so it's probably not really what you are looking for, although I have found it useful in some instances for things that I have needed to do.
You don't. You use a program that does the listing separately. It's silly for compilers to have to know about printing too.
I recommend a2ps. For a cross-reference, look for cxref.
Shouldn't be hard, right? Right?
I am currently trawling the OpenAFS codebase to find the header definition of pioctl. I've thrown everything I've got at it: checked ctags, grepped the source code for pioctl, etc. The closest I've got to a lead is the fact that there's a file pioctl_nt.h that contains the definition, except it's not actually what I want because none of the userspace code directly includes it, and it's Windows specific.
Now, I'm not expecting you to go and download the OpenAFS codebase and find the header file for me. I am curious, though: what are your techniques for finding the header file you need when everything else fails? What are the worst case scenarios that could cause a grep for pioctl in the codebase to not actually come up with anything that looks like a function definition?
I should also note that I have access to two independent userspace programs that have done it properly, so in theory I could do an O(n) search for the function. But none of the header files pop out to me, and n is large...
Edit: The immediate issue has been resolved: pioctl() is defined implicitly, as shown by this:
AFS.xs:2796: error: implicit declaration of function ‘pioctl’
If grep -r and ctags are failing, then it's probably being defined as the result of some nasty macro(s). You can try making the simplest possible file that calls pioctl() and compiles successfully, and then preprocessing it to see what happens:
gcc -E test.c -o test.i
grep pioctl -C10 test.i
There are compiler options to show the preprocessor output. Try those? In a horrible pinch, where my head was completely empty of any possible definition, the -E option (in most C compilers) does nothing but spew out the preprocessed code.
Per the requested information: normally I just capture a compile of the file in question as it is output on the screen, do a quick copy and paste, and put the -E right after the compiler invocation. The result will spew preprocessor output to the screen, so redirect it to a file. Look through that file; all of the macros and silly things have already been taken care of.
Worst case scenarios:
K&R style prototypes
Macros are hiding the definition
Implicit Declaration (per your answer)
Have you considered using cscope (available from SourceForge)?
I use it on some fairly significant code sets (25,000+ files, ranging up to about 20,000 lines in a file) with good success. It takes a while to derive the file list (5-10 minutes) and longer (20-30 minutes) to build the cross-reference on an ancient Sun E450, but I find the results useful.
On an almost equally ancient Mac (dual 1GHz PPC 32-bit processors), cscope run on the OpenAFS (1.5.59) source code comes up with quite a lot of places where the function is declared, sometimes inline in code, sometimes in headers. It took a few minutes to scan the 4949 files, generating a 58 MB cscope.out file.
openafs-1.5.59/src/sys/sys_prototypes.h
openafs-1.5.59/src/aklog/aklog_main.c (along with comment "Why doesn't AFS provide these prototypes?")
openafs-1.5.59/src/sys/pioctl_nt.h
openafs-1.5.59/src/auth/ktc.c includes a define for PIOCTL
openafs-1.5.59/src/sys/pioctl_nt.c provides an implementation of it
openafs-1.5.59/src/sys/rmtsysc.c provides an implementation of it (and sometimes afs_pioctl() instead)
The rest of the 184 instances found seem to be uses of the function, or documentation references, or release notes, change logs, and the like.
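For anyone wanting to reproduce such a scan, the file list and cross-reference are typically built along these lines (the find pattern is an assumption about which files are worth indexing):

find . -name '*.[ch]' > cscope.files
cscope -b -q -k    # -b: build the cross-reference only; -q: add a fast inverted index; -k: don't index /usr/include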
The current working theory that we've decided on, after poking at the preprocessor and not finding anything either, is that OpenAFS is letting the compiler infer the prototype of the function, since it returns an integer and takes pointer, integer, pointer, integer as its parameters. I'll be dealing with this by merely defining it myself.
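A hedged sketch of such a local declaration, matching the pointer/integer/pointer/integer shape described above (the parameter names and exact integer types are assumptions to be checked against the OpenAFS sources):

/* Locally supplied prototype for the otherwise implicitly declared function. */
extern int pioctl(char *path, int cmd, void *data, int follow);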
Edit: Excellent! I've found the smoking gun:
AFS.xs:2796: error: implicit declaration of function ‘pioctl’
While the original general question has been answered, if anyone arrives at this page wondering where to find a header file that defines pioctl:
In current releases of OpenAFS (1.6.7), a prototype for pioctl is defined in sys_prototypes.h. But at the time this question was originally asked, that file did not exist, and there was no prototype for pioctl visible from outside the OpenAFS code tree.
However, most users of pioctl probably want, or are at least okay with using, lpioctl ("local" pioctl), which always issues a syscall on the local machine. There is a prototype for this in afssyscalls.h (and these days, also sys_prototypes.h).
The easiest option these days, though, is just to use libkopenafs. For that, include kopenafs.h, use the function k_pioctl, and link against -lkopenafs. That tends to be a much more convenient interface than trying to link with OpenAFS libsys and other stuff.
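As a hypothetical build line (assuming kopenafs.h and libkopenafs are installed where the compiler can find them), a program that includes kopenafs.h and calls k_pioctl would be built with something like:

cc -o afs-tool afs-tool.c -lkopenafs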
Doesn't it usually say in the man page synopsis?