I am writing Java bindings for a C library, and therefore working with JNI. Oracle specifies, reasonably, that native libraries for use with Java should be compiled with multithread-aware compilers.
The JNI docs give the specific example that for gcc, this multithread-awareness requirement should be met by defining one of the macros _REENTRANT or _POSIX_C_SOURCE. That seems odd to me. _REENTRANT and _POSIX_C_SOURCE are feature-test macros. GCC and POSIX documentation describe their effects in terms of defining symbols and making declarations visible, just as I would expect for any feature-test macro.
If I do not need the additional symbols or functions, then do these macros in fact do anything useful for me? Does one or both cause gcc to generate different code than it otherwise would? Do they maybe cause my code's calls to standard library functions to be linked to different implementations? Or is Oracle just talking out of its nether regions?
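For reference, this is the only kind of effect I know these macros to have. A minimal sketch of _POSIX_C_SOURCE acting as a feature-test macro (all the define does here is make the POSIX strtok_r declaration visible):
#define _POSIX_C_SOURCE 200809L   /* must come before any #include */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[] = "a,b,c";
    char *save;
    /* strtok_r is only declared because of the feature-test macro above */
    for (char *tok = strtok_r(buf, ",", &save); tok != NULL;
         tok = strtok_r(NULL, ",", &save))
        puts(tok);
    return 0;
}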
Edit:
Additionally, it occurs to me that reentrancy is a separate consideration from threading. Non-reentrancy can be an issue even for single-threaded programs, so Oracle's suggestion that defining _REENTRANT makes gcc multithread-aware now seems even more dubious.
The Oracle recommendation was written for Solaris, not for Linux.
On Solaris, if you compiled a .so without _REENTRANT and it ended up being loaded by a multi-threaded application, very bad things could happen (e.g. random data corruption of libc internals). This was because, without the define, you got unlocked variants of some routines by default.
That was the case when I first read this documentation, maybe 15 years ago; the mention of the -mt flag for the Sun Studio compiler was added after I last read the document in any detail.
This is no longer the case: you now always get the same routine whether or not you compile with _REENTRANT defined; it is now only a feature-test macro, not a behaviour macro.
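On a modern system, following the recommendation therefore costs nothing; building the JNI shared library with the define looks like this (illustrative command line, with made-up file names):
gcc -D_REENTRANT -shared -fPIC -o libmylib.so mylib.c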
Related
I'm working on an application using both GLib and CUDA in C. It seems that there's a conflict when importing both glib.h and cuda_runtime.h for a .cu file.
7 months ago GLib made a change to avoid a conflict with pixman's macro. They added __ before and after the token noinline in gmacros.h: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2059
That should have worked, given that gcc claims:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
However, CUDA does use __ in its macros, and __noinline__ is one of them. They acknowledge the possible conflict and add some compiler checks to ensure it won't conflict in regular C files, but it seems that in .cu files it still applies:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
/* gcc allows users to define attributes with underscores,
   e.g., __attribute__((__noinline__)).
   Consider a non-CUDA source file (e.g. .cpp) that has the
   above attribute specification, and includes this header file. In that case,
   defining __noinline__ as below would cause a gcc compilation error.
   Hence, only define __noinline__ when the code is being processed
   by a CUDA compiler component.
*/
#define __noinline__ \
        __attribute__((noinline))
I'm pretty new to CUDA development, and this is clearly a possible issue that they and gcc are aware of, so am I just missing a compiler flag or something? Or is this a genuine conflict that GLib would be left to solve?
Environment: glib 2.70.2, cuda 10.2.89, gcc 9.4.0
Edit: I've raised a GLib issue here
It might not be GLib's fault, but given the difference of opinion in the answers so far, I'll leave it to the devs there to decide whether to raise it with NVidia or not.
I've used nemequ's workaround for now and it compiles without complaint.
GCC's documentation states:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
Now, that only holds as long as you avoid double-underscored names which the compiler and its libraries use; and they may use such names. So, if you're using NVCC, NVIDIA could declare: "we use noinline and you can't use it".
... and indeed, this is basically the case: The macro is protected as follows:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
#define __noinline__ __attribute__((noinline))
#endif /* __CUDACC__ || __CUDA_ARCH__ || __CUDA_LIBDEVICE__ */
__CUDA_ARCH__ - only defined for device-side code, where NVCC is the compiler (ignoring clang CUDA support here).
__CUDA_LIBDEVICE__ - don't know where this is used, but you're certainly not building it, so you don't care about that.
__CUDACC__ - defined when NVCC is compiling the code.
So in regular host-side code, including this header will not conflict with GLib's definitions.
Bottom line: NVIDIA is (basically) doing the right thing here and it shouldn't be a real problem.
GLib is clearly in the right here. They check for __GNUC__ (which is what compilers use to indicate compatibility with GNU C, AKA the GNU extensions to C and C++) prior to using __noinline__ exactly as the GNU documentation indicates it should be used: __attribute__((__noinline__)).
GNU C is clearly doing the right thing here, too. Compilers offering the GNU extensions (including GCC, clang, and many, many others) are, well, compilers, so they are allowed to use the double-underscore prefixed identifiers. In fact, that's the whole idea behind them; it's a way for compilers to provide extensions without having to worry about conflicts with user code (which is not allowed to declare double-underscore prefixed identifiers).
At first glance, NVidia seems to be doing the right thing, too, but they're not. Assuming you consider them to be the compiler (which I think is correct), they are allowed to define double-underscore prefixed macros such as __noinline__. However, the problem is that NVidia also defines __GNUC__ (quite intentionally since they want to advertise support for GNU extensions), then proceeds to define __noinline__ in an incompatible way, breaking an API provided by GNU C.
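Concretely, here is a sketch of what goes wrong during preprocessing when nvcc compiles a .cu file that includes GLib:
/* nvcc (in a .cu file) has:  #define __noinline__ __attribute__((noinline))
   GLib's gmacros.h emits:    __attribute__((__noinline__))
   after macro expansion:     __attribute__((__attribute__((noinline))))
   which is not valid attribute syntax, hence the compile error. */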
Bottom line: NVidia is in the wrong here.
As for what to do about it, well, that's a less interesting question, but there are a few options. You could (and should) file an issue with NVidia to fix their compiler. In my experience they're pretty good about responding quickly, but unlikely to get around to fixing the problem in a reasonable amount of time.
You could also send a patch to GLib to work around the problem by doing something like
#if defined(__CUDACC__)
  /* NVCC has already defined __noinline__ as a macro, so spell it plainly */
  __attribute__((noinline))
#elif defined(__GNUC__)
  __attribute__((__noinline__))
#else
  ...
#endif
If you're in control of the code which includes glib, another option would be to do something like
#undef __noinline__
#include glib_or_file_which_includes_glib
#define __noinline__ __attribute__((noinline))
My advice would be to do all three, but especially the first one (file an issue with NVidia) and find a way to work around it in your code until NVidia fixes the problem.
I was looking at the source code for gcc (out of curiosity), and I noticed a data structure that I've never seen in C before.
At lines 80 and 129 (and many other places) in the parser, they seem to be using vectors.
80: vec<tree> incomplete_record_decls;
129: ridpointers = ggc_cleared_vec_alloc<tree> ((int) RID_MAX);
I've never encountered this data type in C before, nor this angle-bracket syntax: < >. Is it native to C?
Does anyone know what they are and how they are used?
Despite the .c filename, this code is not valid C; it is C++, using that language's template feature. If you inspect the gcc build process, you will find that this file is actually compiled with a C++ compiler.
https://gcc.gnu.org/codingconventions.html
The directories gcc, libcpp and fixincludes may use C++03. They may also use the long long type if the host C++ compiler supports it. These directories should use reasonably portable parts of C++03, so that it is possible to build GCC with C++ compilers other than GCC itself. If testing reveals that reasonably recent versions of non-GCC C++ compilers cannot compile GCC, then GCC code should be adjusted accordingly. (Avoiding unusual language constructs helps immensely.) Furthermore, these directories should also be compatible with C++11.
Keep in mind that although compilers will usually by default infer a source file's language from its filename, this default can always be overridden. It is entirely possible to have C++ code in a .c file, or C code in a .bas file for that matter; you just may have to tell the compiler some other way what language is in use.
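For example, GCC's -x option forces the language regardless of the file extension (illustrative file names):
g++ -x c++ -c c-parser.c    # compile a .c file as C++
gcc -x c -c code.bas        # compile a .bas file as C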
I expect that gcc chose this file naming convention because this code was originally written in C and later converted to C++, and they found it too much of a pain to change all the filenames. It would mean a lot of work to update all the makefiles, etc. It may have been less of a pain to just change which compiler was used, and to explain the convention to all the developers. Of course, in general it is better programming practice to name your files in the standard way, but apparently the gcc developers felt it was not the best course of action in this case.
GCC has moved from C to C++ since GCC 4.8
GCC now uses C++ as its implementation language. This means that to build GCC from sources, you will need a C++ compiler that understands C++ 2003. For more details on the rationale and specific changes, please refer to the C++ conversion page.
GCC 4.8 Release Series - Changes, New Features, and Fixes
The work actually began long before that, with the creation of the gcc-in-cxx branch. The developers first tried to compile the existing source code with a C++ compiler, so there weren't any name changes. I guess they didn't bother to rename the files later when merging the branches into the single, official C++ one.
You can read GCC's move to C++ for more historical information.
One of my recent programs depends heavily on inlining a few "hot" functions for performance. These hot functions are part of an external .c file which I would prefer not to change.
Unfortunately, while Visual is pretty good at this exercise, gcc and clang are not. Apparently, because the hot functions are defined within a different .c file, they can't inline them.
This leaves me with 2 options:
Either include the relevant code directly into the target file. In practice, that means #include "perf.c" instead of #include "perf.h". Trivial change, but it looks ugly. Clearly it works. It's just a little more complex to explain to the build chain that perf.c must be present but neither compiled nor linked.
Use -flto, for Link Time Optimisation. It looks cleaner, and is what Visual achieves by default.
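(For reference, -flto merely has to appear on both the compile and the link steps; an illustrative build, with a made-up main.c:)
gcc -O2 -flto -c perf.c
gcc -O2 -flto -c main.c
gcc -O2 -flto perf.o main.o -o app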
The problem is, with -flto, gcc's linking stage generates multiple warnings which seem to be internal bugs (they refer to portions of code from within the standard libs, so I have little control over them). This is embarrassing when targeting a "zero warning" policy (even though the generated binary is perfectly fine).
As for clang, it just fails with -flto, due to a packaging error (error loading plugin: LLVMgold.so) which is apparently very common across multiple Linux distros.
2 questions:
Is there a way to turn off these warning messages when using -flto on gcc?
Which of the 2 methods described above seems the better one, given pros and cons?
Optional: is there another solution?
According to your comment you have to support gcc 4.4. As LTO started with gcc 4.5 (with all caution about early versions), the answer is clearly: no -flto.
So, #include the code with all due caution, of course.
Update:
The file-extension should not be .c, though, but e.g. .inc (.i is also a bad idea). Even better: .h and change the functions to static inline. That still might not guarantee inlining, but that's the same as for all functions and it maintains the appearance of a clean header (although a longer inline function still is bad style).
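A sketch of that layout, with a made-up function:
/* perf.h -- the hot function as a static inline, so every includer
   sees the body and the compiler is able to inline it */
#ifndef PERF_H
#define PERF_H

static inline unsigned hot_mix(unsigned x)
{
    x ^= x >> 16;
    x *= 0x45d9f3bu;
    return x ^ (x >> 11);
}

#endif /* PERF_H */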
Before doing all this, I'd profile properly to see if the code really has a problem. One should concentrate on writing readable and maintainable code in the first place.
I would like to use GCC kind of as a JIT compiler, where I just compile short snippets of code every now and then. While I could of course fork a GCC process for each function I want to compile, I find that GCC's startup overhead is too large for that (it seems to be about 50 ms on my computer, which would make it take 50 seconds to compile 1000 functions). Therefore, I'm wondering if it's possible to run GCC as a daemon or use it as a library or something similar, so that I can just submit a function for compilation without the startup overhead.
In case you're wondering, the reason I'm not considering using an actual JIT library is because I haven't found one that supports all the features I want, which include at least good knowledge of the ABI so that it can handle struct arguments (lacking in GNU Lightning), nested functions with closure (lacking in libjit) and having a C-only interface (lacking in LLVM; I also think LLVM lacks nested functions).
And no, I don't think I can batch functions together for compilation; half the point is that I'd like to compile them only once they're actually called for the first time.
I've noticed libgccjit, but from what I can tell, it seems very experimental.
My answer is "No (you can't run GCC as a daemon process, or use it as a library)", assuming you are trying to use the standard GCC compiler code. I see at least two problems:
The C compiler deals in complete translation units, and once it has finished reading the source, compiles it and exits. You'd have to rejig the code (the compiler driver program) to stick around after reading each file. Since it runs multiple sub-processes, I'm not sure that you'll save all that much time with it, anyway.
You won't be able to call the functions you create as if they were normal statically compiled and linked functions. At the least you will have to load them (using dlopen() and its kin, or writing code to do the mapping yourself) and then call them via the function pointer.
The first objection deals with the direct question; the second addresses a question raised in the comments.
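To illustrate the second point, the usual pattern looks something like the sketch below (error handling abbreviated; it assumes /tmp/fn.c defines int fn(int), and older glibc needs -ldl at link time):
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* compile the snippet out of process; this is the startup cost you pay */
    if (system("gcc -shared -fPIC -o /tmp/fn.so /tmp/fn.c") != 0)
        return 1;

    void *handle = dlopen("/tmp/fn.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* look the function up and call it through a pointer */
    int (*fn)(int) = (int (*)(int))dlsym(handle, "fn");
    printf("%d\n", fn(21));
    dlclose(handle);
    return 0;
}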
I'm late to the party, but others may find this useful.
There exists a REPL (read–eval–print loop) for C++ called Cling, which is based on the Clang compiler. A big part of what it does is JIT compilation for C and C++. As such you may be able to use Cling to get what you want done.
The even better news is that Cling is undergoing an attempt to upstream a lot of the Cling infrastructure into Clang and LLVM.
@acorn pointed out that you'd ruled out LLVM and co. for lack of a C API, but Clang itself does have one, which is the only API they guarantee stability for: https://clang.llvm.org/doxygen/group__CINDEX.html
If I include <stdlib.h> or <stdio.h> in a C program, I don't have to link these when compiling, but I do have to link to <math.h>, using -lm with GCC, for example:
gcc test.c -o test -lm
What is the reason for this? Why do I have to explicitly link the math library, but not the other libraries?
The functions in stdlib.h and stdio.h have implementations in libc.so (or libc.a for static linking), which is linked into your executable by default (as if -lc were specified). GCC can be instructed to avoid this automatic link with the -nostdlib or -nodefaultlibs options.
The math functions in math.h have implementations in libm.so (or libm.a for static linking), and libm is not linked in by default. There are historical reasons for this libm/libc split, none of them very convincing.
Interestingly, the C++ runtime libstdc++ requires libm, so if you compile a C++ program with GCC (g++), you will automatically get libm linked in.
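A quick way to see the split (a minimal demo, file name made up; the non-constant argument keeps GCC from folding the call away as a built-in):
/* demo.c */
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    (void)argv;
    /* argc is unknown at compile time, so a real call into libm is emitted */
    printf("%f\n", sqrt((double)argc));
    return 0;
}
gcc demo.c typically fails at link time with an undefined reference to sqrt, while gcc demo.c -lm links fine; printf needs no extra flag because libc is linked by default.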
Remember that C is an old language and that FPUs are a relatively recent phenomenon. I first saw C on 8-bit processors where it was a lot of work to do even 32-bit integer arithmetic. Many of these implementations didn't even have a floating point math library available!
Even on the first 68000 machines (Mac, Atari ST, Amiga), floating point coprocessors were often expensive add-ons.
To do all that floating point math, you needed a pretty sizable library. And the math was going to be slow. So you rarely used floats. You tried to do everything with integers or scaled integers. When you had to include math.h, you gritted your teeth. Often, you'd write your own approximations and lookup tables to avoid it.
Trade-offs existed for a long time. Sometimes there were competing math packages called "fastmath" or such. What's the best solution for math? Really accurate but slow stuff? Inaccurate but fast? Big tables for trig functions? It wasn't until coprocessors were guaranteed to be in the computer that most implementations became obvious. I imagine that there's some programmer out there somewhere right now, working on an embedded chip, trying to decide whether to bring in the math library to handle some math problem.
That's why math wasn't standard. Many or maybe most programs didn't use a single float. If FPUs had always been around and floats and doubles were always cheap to operate on, no doubt there would have been a "stdmath".
Because of ridiculous historical practice that nobody is willing to fix. Consolidating all of the functions required by C and POSIX into a single library file would not only avoid this question getting asked over and over, but would also save a significant amount of time and memory when dynamic linking, since each .so file linked requires filesystem operations to locate and open it, and a few pages for its static variables, relocations, etc.
An implementation where all functions are in one library and the -lm, -lpthread, -lrt, etc. options are all no-ops (or link to empty .a files) is perfectly POSIX conformant and certainly preferable.
Note: I'm talking about POSIX because C itself does not specify anything about how the compiler is invoked. Thus you can just treat gcc -std=c99 -lm as the implementation-specific way the compiler must be invoked for conformant behavior.
Because time() and some other functions are defined in the C library (libc) itself, and GCC always links to libc unless you use the -ffreestanding compile option. The math functions, however, live in libm, which is not implicitly linked by gcc.
An explanation is given here:
So if your program is using math functions and including math.h, then you need to explicitly link the math library by passing the -lm flag. The reason for this particular separation is that mathematicians are very picky about the way their math is being computed and they may want to use their own implementation of the math functions instead of the standard implementation. If the math functions were lumped into libc.a it wouldn't be possible to do that.
[Edit]
I'm not sure I agree with this, though. If you have a library which provides, say, sqrt(), and you pass it before the standard library, a Unix linker will take your version, right?
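That is indeed how a Unix linker behaves, and here is a sketch of the experiment, with a deliberately crude Newton-iteration replacement (demo quality only):
/* my_sqrt.c -- overrides libm's sqrt; the linker resolves the symbol from
   this object file, so libm's version is never used for it */
double sqrt(double x)
{
    double g = x > 1.0 ? x : 1.0;   /* initial guess */
    for (int i = 0; i < 60; ++i)
        g = 0.5 * (g + x / g);      /* Newton step for g*g = x */
    return g;
}
Linking with gcc test.c my_sqrt.c -o test -lm then uses this version (compile with -fno-builtin if GCC folds constant calls away).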
There's a thorough discussion of linking to external libraries in An Introduction to GCC - Linking with external libraries. If a library is a member of the standard libraries (like stdio), then you don't need to specify to the compiler (really the linker) to link them.
After reading some of the other answers and comments, I think the libc.a reference and the libm reference that it links to both have a lot to say about why the two are separate.
Note that many of the functions in 'libm.a' (the math library) are defined in 'math.h' but are not present in libc.a. Some are, which may get confusing, but the rule of thumb is this: the C library contains those functions that ANSI dictates must exist, so that you don't need the -lm if you only use ANSI functions. In contrast, 'libm.a' contains more functions and supports additional functionality such as the matherr call-back and compliance to several alternative standards of behavior in case of FP errors. See section libm, for more details.
As ephemient said, the C library libc is linked by default and this library contains the implementations of stdlib.h, stdio.h and several other standard header files. Just to add to it, according to "An Introduction to GCC" the linker command for a basic "Hello World" program in C is as below:
ld -dynamic-linker /lib/ld-linux.so.2 /usr/lib/crt1.o
/usr/lib/crti.o /usr/lib/gcc-lib/i686/3.3.1/crtbegin.o
-L/usr/lib/gcc-lib/i686/3.3.1 hello.o -lgcc -lgcc_eh -lc
-lgcc -lgcc_eh /usr/lib/gcc-lib/i686/3.3.1/crtend.o /usr/lib/crtn.o
Notice the option -lc in the third line that links the C library.
If I include stdlib.h or stdio.h, I don't have to link those, but I do have to link math.h when I compile:
stdlib.h, stdio.h are the header files. You include them for your convenience. They only forecast what symbols will become available if you link in the proper library. The implementations are in the library files, that's where the functions really live.
Including math.h is only the first step to gaining access to all the math functions.
Also, you don't have to link against libm if you don't use its functions, even if you do a #include <math.h>, which is only an informational step for you and for the compiler about the symbols.
stdlib.h, stdio.h refer to functions available in libc, which happens to be always linked in so that the user doesn't have to do it himself.
It's a bug. You shouldn't have to explicitly specify -lm any more. Perhaps if enough people complain about it, it will be fixed. (I don't seriously believe this, as the maintainers who are perpetuating the distinction are evidently very stubborn, but I can hope.)
I think it's kind of arbitrary. You have to draw a line somewhere (which libraries are default and which need to be specified).
It gives you the opportunity to replace it with a different one that has the same functions, but I don't think it's very common to do so.
I think GCC does this to maintain backwards compatibility with the original cc executable. My guess for why cc does this is because of build time -- cc was written for machines with far less power than we have now. A lot of programs don't have any floating-point math, and they probably took every library that wasn't commonly used out of the default. I'm guessing that the build time of the Unix OS and the tools that go along with it were the driving force.
I would guess that it is a way to make applications which don't use it at all perform slightly better. Here's my thinking on this.
x86 OSes (and I imagine others) need to store FPU state on context switch. However, most OSes only bother to save/restore this state after the app attempts to use the FPU for the first time.
In addition to this, there is probably some basic code in the math library which will set the FPU to a sane base state when the library is loaded.
So, if you don't link in any math code at all, none of this will happen, therefore the OS doesn't have to save/restore any FPU state at all, making context switches slightly more efficient.
Just a guess though.
The same base premise still applies to non-FPU cases (the premise being that it was to make apps which didn't make use of libm perform slightly better).
For example, consider a soft-FPU (software floating point), which was likely the norm in the early days of C. Having libm separate could prevent a lot of large (and, if used, slow) code from being linked in unnecessarily.
In addition, if only static linking is available, a similar argument applies: keeping libm separate keeps executable sizes and compile times down.
stdio is part of the standard C library which, by default, GCC will link against.
The math function implementations are in a separate libm file that is not linked to by default, so you have to specify it with -lm. By the way, there is no inherent relation between those header files and library files.
The functions declared in stdio.h and stdlib.h have their implementations in libc.so (or libc.a for static linking), which the linker links in by default, so their code ends up in the executable without any extra flags.
But math.h has its implementations in libm.so (or libm.a), which is separate from libc.so and does not get linked by default; you have to link it manually while compiling your program in GCC by using the -lm flag.
The GNU GCC team designed the math library to be a separate file: libc is linked by default, but libm isn't.
Read item number 14.3 here; you can read it all if you wish:
Reason why math.h needs to be linked
Look at this article: Why do we have to link math.h in GCC?
Have a look at the usage:
Using the library
Note that -lm may not always need to be specified even if you use some C math functions.
For example, the following simple program:
#include <stdio.h>
#include <math.h>
int main() {
    printf("output: %f\n", sqrt(2.0));
    return 0;
}
can be compiled and run successfully with the following command:
gcc test.c -o test
It was tested on GCC 7.5.0 (on Ubuntu 16.04) and GCC 4.8.0 (on CentOS 7).
The post here gives some explanations:
The math functions you call are implemented by compiler built-in functions
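One way to see the built-in at work is to disable it: with -fno-builtin, GCC no longer folds the constant call, so the very same program needs -lm again (illustrative):
gcc -fno-builtin test.c -o test        # typically fails: undefined reference to sqrt
gcc -fno-builtin test.c -o test -lm    # links fine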
See also:
Other Built-in Functions Provided by GCC
How to get the gcc compiler to not optimize a standard library function call like printf?