I am modifying some code for a Blackfin processor using VisualDSP++ v. 5.0. I have noticed that all of the header files in this project utilize the following convention:
#ifdef _LANGUAGE_C
/* All of the code associated with this header file. */
#endif
After searching through the documentation for this compiler I found the following:
_LANGUAGE_C - Always defined as 1.
So my question is two-fold.
What is the purpose of using #ifdef _LANGUAGE_C?
Wouldn't this just keep your code from running on a different compiler that may not have a macro defined for _LANGUAGE_C?
You have to look at how it is used in context; however, I believe that in this case it is used in headers that are shared between C code and assembly code, where the assembler source is run through the C preprocessor. It allows C headers to be included in assembly code and lets the preprocessor strip out the C-specific elements.
For example, it is useful for assembly code to share the same #define constant macro values as the C code, to avoid duplication and inconsistency; a struct definition or a function prototype, on the other hand, would be meaningless to the assembler. A sketch of such a shared header is below.
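To illustrate (a sketch only; the names and values are invented for the example):

/* board_defs.h - shared between C sources and .asm sources that are
   run through the C preprocessor */
#define UART_BAUD   115200      /* usable from both C and assembly */
#define LED_MASK    0x0F

#ifdef _LANGUAGE_C
/* C-only material: the assembler cannot digest these */
typedef struct {
    unsigned baud;
    unsigned mask;
} board_config_t;

void board_init(const board_config_t *cfg);
#endif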
I would expect perhaps: #if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS), but if the documentation says that it is always defined, perhaps it is defined for both C and C++ compilation in your case.
To answer your compound question: yes, for the most part. This is one of a family of preprocessor directives that let you build for different environments from the same code. If you look through the Windows Driver Kit, for example, you see this convention used all over the place to ensure that the most appropriate code is built for the target environment and compiler. I hope this is helpful. They could also have added code after the #ifdef guarded by another macro such as _LANGUAGE_CPP and put C++-specific code in there, and so on.
It is what is sometimes called a compilation constant; such a constant must be defined in your build environment, so check your build settings. It tells the compiler that the code it is about to compile should be compiled with C-specific checks, and the output file (hex or bin) will be generated accordingly.
Related
I'm working on an application using both GLib and CUDA in C. It seems that there's a conflict when including both glib.h and cuda_runtime.h in a .cu file.
7 months ago GLib made a change to avoid a conflict with pixman's macro. They added __ before and after the token noinline in gmacros.h: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2059
That should have worked, given that gcc claims:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
However, CUDA does use __ in its macros, and __noinline__ is one of them. They acknowledge the possible conflict and add some compiler checks to ensure it won't conflict in regular C files, but it seems that in .cu files it still applies:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
/* gcc allows users to define attributes with underscores,
e.g., __attribute__((__noinline__)).
Consider a non-CUDA source file (e.g. .cpp) that has the
above attribute specification, and includes this header file. In that case,
defining __noinline__ as below would cause a gcc compilation error.
Hence, only define __noinline__ when the code is being processed
by a CUDA compiler component.
*/
#define __noinline__ \
__attribute__((noinline))
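If I'm reading the failure mode right, once nvcc defines that macro, GLib's use of the documented double-underscore spelling expands into nested attributes, which is invalid:

/* what GLib writes: */
__attribute__((__noinline__))
/* what it becomes after nvcc's #define -- not valid syntax: */
__attribute__((__attribute__((noinline))))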
I'm pretty new to CUDA development, and this is clearly a possible issue that they and gcc are aware of, so am I just missing a compiler flag or something? Or is this a genuine conflict that GLib would be left to solve?
Environment: glib 2.70.2, cuda 10.2.89, gcc 9.4.0
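For reference, a minimal sketch of the kind of file that triggers it (file name and build command are illustrative; nvcc pulls in cuda_runtime.h automatically for .cu files):

/* conflict.cu -- compile with something like:
   nvcc -c conflict.cu $(pkg-config --cflags glib-2.0) */
#include <glib.h>           /* uses __attribute__((__noinline__)) internally */
#include <cuda_runtime.h>   /* under nvcc, __noinline__ is already a macro here */

int main(void) { return 0; }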
Edit: I've raised a GLib issue here
It might not be GLib's fault, but given the difference of opinion in the answers so far, I'll leave it to the devs there to decide whether to raise it with NVidia or not.
I've used nemequ's workaround for now and it compiles without complaint.
GCC's documentation states:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
Now, that only holds if you avoid the double-underscored names that the compiler and its libraries use; and they may use such names. So, if you're using NVCC, NVIDIA could declare: "we use __noinline__ and you can't use it".
... and indeed, this is basically the case: The macro is protected as follows:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
#define __noinline__ __attribute__((noinline))
#endif /* __CUDACC__ || __CUDA_ARCH__ || __CUDA_LIBDEVICE__ */
__CUDA_ARCH__ - only defined for device-side code, where NVCC is the compiler (ignoring clang CUDA support here).
__CUDA_LIBDEVICE__ - Don't know where this is used, but you're certainly not building it, so you don't care about that.
__CUDACC__ - defined when NVCC is compiling the code.
So in regular host-side code, including this header will not conflict with GLib's definitions.
Bottom line: NVIDIA is (basically) doing the right thing here and it shouldn't be a real problem.
GLib is clearly in the right here. They check for __GNUC__ (which is what compilers use to indicate compatibility with GNU C, AKA the GNU extensions to C and C++) prior to using __noinline__ exactly as the GNU documentation indicates it should be used: __attribute__((__noinline__)).
GNU C is clearly doing the right thing here, too. Compilers offering the GNU extensions (including GCC, clang, and many many others) are, well, compilers, so they are allowed to use the double-underscore prefixed identifiers. In fact, that's the whole idea behind them; it's a way for compilers to provide extensions without having to worry about conflicts with user code (which is not allowed to declare double-underscore prefixed identifiers).
At first glance, NVidia seems to be doing the right thing, too, but they're not. Assuming you consider them to be the compiler (which I think is correct), they are allowed to define double-underscore prefixed macros such as __noinline__. However, the problem is that NVidia also defines __GNUC__ (quite intentionally since they want to advertise support for GNU extensions), then proceeds to define __noinline__ in an incompatible way, breaking an API provided by GNU C.
Bottom line: NVidia is in the wrong here.
As for what to do about it, well that's a less interesting question but there are a few options. You could (and should) file an issue with NVidia to fix their compiler. In my experience they're pretty good about responding quickly but unlikely to get around to fixing the problem in a reasonable amount of time.
You could also send a patch to GLib to work around the problem by doing something like the following (I'm using GLib's G_GNUC_NO_INLINE as the macro being defined; substitute whichever definition is actually affected):
#if defined(__CUDACC__)
#define G_GNUC_NO_INLINE __attribute__((noinline))
#elif defined(__GNUC__)
#define G_GNUC_NO_INLINE __attribute__((__noinline__))
#else
...
#endif
If you're in control of the code which includes glib, another option would be to do something like
#undef __noinline__
#include <glib.h>   /* or whichever of your headers pulls in GLib */
#define __noinline__ __attribute__((noinline))
My advice would be to do all three, but especially the first one (file an issue with NVidia) and find a way to work around it in your code until NVidia fixes the problem.
I am trying to learn C, and I have a C file whose macros I want to view. Is there a tool to view the macros of a compiled C file?
No. That's literally impossible.
The preprocessor is a textual replacement that happens before the main compile pass. There is no difference between using a macro and putting the code the macro expands to in its place.*
*Ignoring the debugger output. But even then you can reproduce it if you know the right #line directives to tell it the file and line numbers.
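You can watch the information disappear yourself (assuming GCC or Clang; the file and macro are invented for the demo):

/* demo.c */
#define SQUARE(x) ((x) * (x))
int f(int n) { return SQUARE(n); }

Running gcc -E demo.c prints the translation unit after preprocessing: the body of f appears as ((n) * (n)) and no trace of SQUARE survives, which is exactly the text the compiler proper (and thus the compiled file) works from.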
They're always defined in the header file(s) that you've included with #include, or in the files those headers in turn #include.
This may involve a lot of digging. It may involve going into files that make no sense to you because they're not written for casual inspection.
Any macros of any importance are usually documented. They may use other more complex implementation-specific macros that you shouldn't concern yourself with ordinarily, but if you're curious how they work the source is all there.
That being said, this is only relevant if you have the source, and more specifically a complete build environment. Once compiled, all these definitions, like the source itself, do not appear in the executable and cannot be inferred directly from the executable, especially not from a release build.
Unlike Java or C#, C compiles directly to machine code, so there's no way to easily reverse that back to the source. There are "decompilers" that try, but they can only really guess at the original source. VM-based languages like Java and C# only lightly compile the code, so there are a lot of hints as to how that code was generated, and reversing it is an easier process.
Some C compilers emit assembly language and allow snippets of assembly to be placed inline in the source code to be copied verbatim to the output, e.g. https://gcc.gnu.org/onlinedocs/gcc/Using-Assembly-Language-with-C.html
Some compilers for higher-level languages emit C, ranging from Nim, which was to some extent designed for that, to Scheme, which very definitely was not, and which takes heroic effort to compile to efficient code that way.
Do any such compilers, similarly allow snippets of C to be placed inline in the source code, to be copied verbatim to the output?
I'm not sure I understand what you mean by "be copied verbatim to the output," but all C compilers (MSVC, GCC, Clang, etc...) have preprocessor directives that essentially allow snippets of code to be added to the source files for compilation. For example, the #include directive pulls the contents of the specified file into the compilation. An "effect" of this is that you can do weird things such as:
printf("My code: \n%s\n",
#include "/tmp/somefile.c"
);
Alternatively, creating macros with the #define directive lets you substitute snippets of code wherever the macro name is invoked. This all happens at the preprocessor stage, before anything turns into the compile "output."
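For instance (a made-up macro, just to show the mechanism):

/* the do/while body is pasted wherever SWAP_INT(...) appears */
#define SWAP_INT(a, b) do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

int x = 1, y = 2;
SWAP_INT(x, y);   /* after preprocessing: the block above, with x and y substituted */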
Other languages, like C# with Roslyn, allow runtime compilation of code. Of course, you can also implement the same in C by invoking your compiler via something like system() and then loading the resulting library with dlopen; a sketch follows.
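A minimal sketch of that approach (POSIX-only; the paths, names, and snippet are all invented for the example; build with cc runtime_c.c -ldl on older glibc):

/* runtime_c.c -- "inline C by hand": write, compile, and load a snippet */
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void)
{
    /* 1. write the snippet out as a source file */
    FILE *f = fopen("/tmp/snippet.c", "w");
    if (!f) return 1;
    fputs("int add(int a, int b) { return a + b; }\n", f);
    fclose(f);

    /* 2. call the compiler via system(), as described above */
    if (system("cc -shared -fPIC -o /tmp/snippet.so /tmp/snippet.c") != 0)
        return 1;

    /* 3. load the result and call into it */
    void *lib = dlopen("/tmp/snippet.so", RTLD_NOW);
    if (!lib) return 1;
    int (*add)(int, int) = (int (*)(int, int))dlsym(lib, "add");
    printf("%d\n", add(2, 3));   /* prints 5 */
    dlclose(lib);
    return 0;
}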
Edit:
Now that I come back and think about this question, I should also note that Python is one of those C-targeting "compilers" (I guess technically an interpreter on top of the Python runtime). Python lets you use natively compiled C code, either with some Python API code to export functions or directly with some dlopen-like helpers. Take a look at the inlinec module, which does what I described above (call the compiler, then load the compiled code). I suppose you should be able to get similar functionality from any language that can call compiled C code (C#, Java, etc...).
So I'm writing portable embedded ANSI C code that attempts to support multiple compilers and hardware targets. Each compiler/hardware vendor supports different math.h functions: some support only C90, some a subset of C99, others the full C99 set.
I'm trying to find a way to check whether a given function exists at preprocessing time, so that I can substitute a custom macro if it doesn't. Some vendors declare extern functions in their math.h; some use #define to remap to an internal call. Is there a piece of code that can tell whether something is #defined versus an extern function? I can use #ifdef for a define, but what about an actual function call?
The usual solution is instead to look at macros defined by the preprocessor itself, or passed into the build process as -D definitions, which identify the compiler and platform you're running on, and use those plus your knowledge of what special assists each environment needs to configure your code.
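For instance, a sketch of that pattern (the feature test and the fallback are illustrative; real vendor-specific checks will differ):

#include <math.h>

/* If the toolchain doesn't claim C99 (__STDC_VERSION__ >= 199901L),
   assume roundf() may be absent and route through a local fallback.
   MY_ROUNDF and the fallback are invented for this example. */
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)
#define MY_ROUNDF(x) roundf(x)
#else
static float my_roundf_fallback(float x)
{
    return (x >= 0.0f) ? (float)(long)(x + 0.5f)
                       : (float)(long)(x - 0.5f);
}
#define MY_ROUNDF(x) my_roundf_fallback(x)
#endif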
I suppose you could write a series of test .c files, try compiling them, look at the error codes coming back, and use those to set appropriate -D flags (essentially what autoconf-style configure scripts do)... but I'm not convinced that would be any cleaner.
In gcc, how can I check what C preprocessor definitions are in place during the compilation of a C program, in particular what standard or platform-specific macro definitions are defined?
Predefined macros depend on the standard and the way the compiler implements it.
For GCC: http://gcc.gnu.org/onlinedocs/cpp/Predefined-Macros.html
For Microsoft Visual Studio 8: http://msdn.microsoft.com/en-us/library/b0084kay(VS.80).aspx
This Wikipedia page, http://en.wikipedia.org/wiki/C_preprocessor#Compiler-specific_predefined_macros, shows how to dump some of the predefined macros.
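For example, with GCC (and compilers that imitate it, such as Clang) you can dump the full predefined set by preprocessing an empty input:

gcc -dM -E - < /dev/null

Every predefined macro is printed as a #define line; adding options such as -std=c99 or -m32 shows how the set changes with dialect and target.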
A likely source of the predefined macros for a specific combination of compiler and platform is the Predef project at Sourceforge. They are attempting to maintain a catalog of all predefined macros in all C and C++ compilers on all platforms. In practice, they have coverage of a fair number of platforms for GCC, and a smattering of other compilers.
They achieved this through a combination of careful reading of documentation, as well as a shell script that figures out what macros are predefined the hard way: it tries them. My understanding is that it actually tries every string it can find in the executable image of the compiler and/or preprocessor to see if it has a predefined meaning.
They will happily add any info they don't have yet to their database.
A program may define a macro at one point, remove that definition later, and then provide a different definition after that. Thus, at different points in the program, a macro may have different definitions, or have no definition at all.
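A minimal illustration of that lifecycle (the macro is invented for the example):

#define BUFSIZE 128      /* first definition */
/* ... code here sees BUFSIZE as 128 ... */
#undef BUFSIZE           /* from here, BUFSIZE has no definition */
#define BUFSIZE 4096     /* and now a different definition */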