I've seen this pragma many times, but I'm always confused by it. Does anyone know what it does? Is it Windows-only?
It's used to replace the following preprocessor code:
#ifndef _MYHEADER_H_
#define _MYHEADER_H_
...
#endif
A good convention is to add both, to support legacy compilers (which are rare these days):
#pragma once
#ifndef _MYHEADER_H_
#define _MYHEADER_H_
...
#endif
So if #pragma once fails, the old method will still work.
2023 update
I see some people in the comment section advocating include guards over #pragma once.
This makes little to no sense in 2023 and beyond, unless you are targeting some special compiler that you know does not support #pragma once.
Today's best practice is to use only #pragma once and not bother with guards at all. The reasons:
All major compilers have supported it forever, and that is not going to change.
#pragma once lets the compiler use its internal caches; that is faster than the preprocessor including the contents of your file just to stumble on your guards and dismiss the whole thing.
It's a lot shorter and easier to add and maintain.
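With that approach, a header reduces to something like this minimal sketch (the header name and its contents are made up for illustration):
/* vec3.h -- duplicate-inclusion protection comes from #pragma once alone */
#pragma once

typedef struct { float x, y, z; } vec3;

vec3 vec3_add(vec3 a, vec3 b);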
In the C and C++ programming languages, #pragma once is a non-standard but widely supported preprocessor directive designed to cause the current source file to be included only once in a single compilation. Thus, #pragma once serves the same purpose as #include guards, but with several advantages, including: less code, avoidance of name clashes, and improved compile speed.
See the Wikipedia article for further details.
Generally, the #pragma directives are intended for implementing compiler-specific preprocessor instructions. They are not standardized, so you shouldn't rely on them too heavily.
In this case, #pragma once's purpose is to replace the include guards that you use in header files to avoid multiple inclusion. It works a little faster on the compilers that support it, so it may reduce the compilation time on large projects with a lot of header files that are #include'ed frequently.
#pragma is a directive to the preprocessor. It is usually used to provide some additional control during compilation, for example to avoid including the same header file code twice. There are a lot of different directives; the answer depends on what follows the pragma keyword.
I'm working on an application using both GLib and CUDA in C. It seems that there's a conflict when importing both glib.h and cuda_runtime.h for a .cu file.
7 months ago GLib made a change to avoid a conflict with pixman's macro. They added __ before and after the token noinline in gmacros.h: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2059
That should have worked, given that gcc claims:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
However, CUDA does use __ in its macros, and __noinline__ is one of them. They acknowledge the possible conflict and add some compiler checks to ensure it won't conflict in regular C files, but it seems that in .cu files it still applies:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
/* gcc allows users to define attributes with underscores,
   e.g., __attribute__((__noinline__)).
   Consider a non-CUDA source file (e.g. .cpp) that has the
   above attribute specification, and includes this header file. In that case,
   defining __noinline__ as below would cause a gcc compilation error.
   Hence, only define __noinline__ when the code is being processed
   by a CUDA compiler component.
*/
#define __noinline__ \
        __attribute__((noinline))
I'm pretty new to CUDA development, and this is clearly a possible issue that they and gcc are aware of, so am I just missing a compiler flag or something? Or is this a genuine conflict that GLib would be left to solve?
Environment: glib 2.70.2, cuda 10.2.89, gcc 9.4.0
Edit: I've raised a GLib issue here
It might not be GLib's fault, but given the difference of opinion in the answers so far, I'll leave it to the devs there to decide whether to raise it with NVidia or not.
I've used nemequ's workaround for now and it compiles without complaint.
GCC's documentation states:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
Now, that only holds assuming you avoid double-underscored names that the compiler and its libraries use; and they may use such names. So, if you're using NVCC, NVIDIA could declare "we use __noinline__ and you can't use it".
... and indeed, this is basically the case: The macro is protected as follows:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
#define __noinline__ __attribute__((noinline))
#endif /* __CUDACC__ || __CUDA_ARCH__ || __CUDA_LIBDEVICE__ */
__CUDA_ARCH__ - only defined for device-side code, where NVCC is the compiler (ignoring clang's CUDA support here).
__CUDA_LIBDEVICE__ - I don't know where this is used, but you're certainly not building it, so you don't care about that.
__CUDACC__ - defined when NVCC is compiling the code.
So in regular host-side code, including this header will not conflict with GLib's definitions.
Bottom line: NVIDIA is (basically) doing the right thing here and it shouldn't be a real problem.
GLib is clearly in the right here. They check for __GNUC__ (which is what compilers use to indicate compatibility with GNU C, AKA the GNU extensions to C and C++) prior to using __noinline__ exactly as the GNU documentation indicates it should be used: __attribute__((__noinline__)).
GNU C is clearly doing the right thing here, too. Compilers offering the GNU extensions (including GCC, clang, and many many others) are, well, compilers, so they are allowed to use the double-underscore prefixed identifiers. In fact, that's the whole idea behind them; it's a way for compilers to provide extensions without having to worry about conflicts to user code (which is not allowed to declare double-underscore prefixed identifiers).
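A concrete sketch of why the reserved spelling matters; this is essentially the pixman conflict mentioned in the question (slow_path is a made-up function name):
/* Some other header (pixman, in the real-world case) defines: */
#define noinline __attribute__((noinline))

/* The reserved spelling is immune to that macro: */
__attribute__((__noinline__)) void slow_path(void);   /* fine */

/* The plain spelling would be mangled by the macro above:
   __attribute__((noinline)) expands to
   __attribute__((__attribute__((noinline)))), which does not compile. */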
At first glance, NVidia seems to be doing the right thing, too, but they're not. Assuming you consider them to be the compiler (which I think is correct), they are allowed to define double-underscore prefixed macros such as __noinline__. However, the problem is that NVidia also defines __GNUC__ (quite intentionally since they want to advertise support for GNU extensions), then proceeds to define __noinline__ in an incompatible way, breaking an API provided by GNU C.
Bottom line: NVidia is in the wrong here.
As for what to do about it, well that's a less interesting question but there are a few options. You could (and should) file an issue with NVidia to fix their compiler. In my experience they're pretty good about responding quickly but unlikely to get around to fixing the problem in a reasonable amount of time.
You could also send a patch to GLib to work around the problem by doing something like
#if defined(__CUDACC__)
__attribute__((noinline))
#elif defined(__GNUC__)
__attribute__((__noinline__))
#else
...
#endif
If you're in control of the code which includes glib, another option would be to do something like
#undef __noinline__
#include <glib.h>   /* or whichever file includes glib */
#define __noinline__ __attribute__((noinline))
My advice would be to do all three, but especially the first one (file an issue with NVidia) and find a way to work around it in your code until NVidia fixes the problem.
I've read up a bit on preprocessor directives, and I've seen #import being used a few times in C programs. I'm not sure what the difference between them is; some sites have said that #include is only used for header files, and that #import is used more in Java and is deprecated in C.
If that's the case, why do some programs still use #import, and how exactly is it different from #include? Also, I've used #import in a few of my C programs, and it seems to work fine and do the same thing as #include.
This is well explained in the GNU CPP (C preprocessor) manual, although the behaviour is the same in clang (and possibly other C compilers, but not MSVC):
The problem. Summary: You don't usually want to include the same header twice into a single translation unit, because that can lead to duplicate declarations, which is an error. However, since included files may themselves want to include other files, it is hard to avoid.
Some non-standard solutions (including #import). Summary: #import in the including file and #pragma once in the included file both prevent duplicate inclusion. But #pragma once is a much better solution, because the includer shouldn't need to know whether duplicate inclusion is acceptable.
The linked document calls #import a "deprecated extension", which is a slightly odd way of describing a feature which was never part of any standard C version. But it's not totally meaningless: many preprocessor implementations do allow #import (which is a feature of Objective-C), so it is a common extension. Calling it deprecated is a way of saying that the extension will never be part of any C standard, regardless of how widespread implementations are.
If you want to use an extension, use #pragma once; that also might not be present in a future standard, but changing it for a given header file will only require a change in one place instead of in every file which includes the header. C++ and even C are likely at some point to develop some kind of module feature which will allow inclusion guards to finally be replaced.
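To make the contrast concrete, here is a small sketch (the file name config.h is hypothetical): with #import every includer must remember to opt in, whereas a #pragma once header protects itself, and plain #include becomes safe to repeat.
/* Scenario 1: config.h has no guard; every includer must use #import. */
#import "config.h"
#import "config.h"    /* skipped: #import includes a file at most once */

/* Scenario 2: config.h begins with #pragma once; includers use plain
   #include, and the header itself suppresses duplicates.             */
#include "config.h"
#include "config.h"   /* duplicate inclusion is silently skipped */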
As mentioned in comments, #import is not standard and can mean different things for different compilers.
With Microsoft's compiler, for example, #import can automatically generate and include a header file at compilation time.
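For instance, under Microsoft's compiler (the type-library name here is hypothetical):
// MSVC-specific: reads the COM type library, generates .tlh/.tli wrapper
// headers at compile time, and includes them into this translation unit.
#import "MyComServer.tlb" no_namespace named_guids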
Simple: it can be the same, but some compilers handle #import differently. The Microsoft compiler, for example, will automatically generate and include a header file at compilation time.
I like to keep my files clean, so I prefer to take out includes I don't need. Lately I've been just commenting the includes out and seeing if it compiles without warnings (-Wall -Wextra -pedantic, minus a couple very specific ones). I figure if it compiles without warnings I didn't need it.
Is this actually a safe way to check if an include is needed or can it introduce UB or other problems? Are there any specific warnings I need to be sure are enabled to catch potential problems?
n.b. I'm actually using Objective C and clang, so anything specific to those is appreciated, but given the flexibility of Objective C I think if there's any trouble it will be a general C thing. Certainly any problems in C will affect Objective C.
In principle, yes.
The exception would be if two headers interact in some hidden way. Say, if you:
include two different headers which define the same symbol differently,
both definitions are syntactically valid and well-typed,
but one definition is good, the other breaks your program at run-time.
Hopefully, your header files are not structured like that. It's somewhat unlikely, though not inconceivable.
I'd be more comfortable doing this if I had good (unit) tests.
Usually just commenting out the inclusion of the header is safe, meaning: if the header is needed then there will be compiler errors when you remove it, and (usually) if the header is not needed, the code will still compile fine.
This should not be done without inspecting the header to see what it adds though, as there is the (not exactly typical) possibility that a header only provides optional #define's (or #undef's) which will alter, but not break, the way a program is compiled.
The only way to be sure is to build your code without the header (if it's able to build in the first place) and run a proper regimen of testing to ensure its behavior has not changed.
No. Apart from the reasons already mentioned in other answers, it's possible that the header is needed and another header includes it indirectly. If you remove the #include, you won't see an error but there may be errors on other platforms.
In general, no. It is easy to introduce silent changes.
Suppose header.h defines some macros like
#define WITH_FEATURE_FOO
The C file including header.h tests the macro
#ifdef WITH_FEATURE_FOO
do_this();
#else
do_that();
#endif
Your files compile cleanly and with all warnings enabled with or without the inclusion of header.h, but the result behaves differently. The only way to get a definitive answer is to analyze which identifiers a header defines/declares and see if at least one of them appears in the preprocessed C file.
One tool that does this is FlexeLint from Gimpel. I don't get paid for saying this, even though they should :-) If you want to avoid shelling out big bucks, an approach I have been taking is to compile a C file to an object file with and without the header; if both succeed, check whether the object files are identical. If they are the same, you don't need the header
(but watch out for include directives wrapped in #ifdefs that are enabled by a -DWITH_FEATURE_FOO option).
Is it remotely sane to apply the C preprocessor to the same codebase multiple times (specifically, twice in sequence?)
For instance, having declarations such as the following:
##define DECLARE(FILE) # define DECLARATIONS \
                       # include FILE \
                       # undef DECLARATIONS
Have you ever seen such an idiom before? If so, what codebase? Can you link it? What sort of patterns would be followed to compile a project doing something like this? Can the CPP as it stands be made to do this, or do I need to write a meta-preprocessor to “hide” the single-hash declarations while processing the double-hash declarations, and so on?
I think when you need multiple CPP passes, you might want to consider m4 or some other sophisticated macro system/code generator. I think it will be hard to do what you want, and since you are going to be changing your build process for this anyway, look at other templating or macro systems.
Oh wow, why would you want to do this? I am sure GCC could be coerced into doing something like this with some clever make tricks (use the -E flag for GCC) but I can't imagine anyone being able to maintain it later.
Google threw this up, so here's a four-years-late use case for multiple (pre)compilation passes.
The largest benefit to multiple-pass compilation that I can see comes from optionally preprocessing the file. Specifically, when one would like to see the preprocessed source without including the very large standard headers at the top. E.g.,
#ifdef PRECOMPILATION
#ifdef TMPINCLUDE
#error "This stunt assumes TMPINCLUDE isn't already defined"
#endif
#define TMPINCLUDE #include <stdlib.h>
TMPINCLUDE
#undef TMPINCLUDE
#else
#include <stdlib.h>
#endif
This will compile as normal in the absence of PRECOMPILATION, but if compiled as gcc -E -P -DPRECOMPILATION or similar, will translate into a source file containing all your code, post expansion, and the #include statement at the top. So it's still valid code and can also be compiled from the already-preprocessed file.
Macros are unpopular in the C and C++ world. I would like to release a plausibly useful library to the wider world, but it's very heavily based on macros to reduce code duplication. Using an either-one-or-two pass compilation model means I can use the library directly, macros and all, in my own work, but can also release a sanitised version which only uses the preprocessor to include standard libraries.
Whether that is remotely sane or not is rather subjective.
A programmer I respect said that in C code, #if and #ifdef should be avoided at all costs, except possibly in header files. Why would it be considered bad programming practice to use #ifdef in a .c file?
Hard to maintain. It's better to use interfaces to abstract platform-specific code than to abuse conditional compilation by scattering #ifdefs all over your implementation.
E.g.
void foo() {
#ifdef WIN32
// do Windows stuff
#else
// do Posix stuff
#endif
// do general stuff
}
This is not nice. Instead, have files foo_w32.c and foo_psx.c with:
foo_w32.c:
void foo() {
// windows implementation
}
foo_psx.c:
void foo() {
// posix implementation
}
foo.h:
void foo(); // common interface
Then have two makefiles[1]: Makefile.win and Makefile.psx, each compiling the appropriate .c file and linking against the right object.
Minor amendment:
If foo()'s implementation depends on some code that appears on all platforms, e.g. common_stuff()[2], simply call that in your foo() implementations.
E.g.
common.h:
void common_stuff(); // May be implemented in common.c, or may have multiple
                     // implementations in common_{A, B, ...} for platforms
                     // {A, B, ...}. Irrelevant here.
foo_{w32, psx}.c:
void foo() { // Win32/Posix implementation
// Stuff
...
if (bar) {
common_stuff();
}
}
While you may be repeating a function call to common_stuff(), you can't parameterize your definition of foo() per platform unless it follows a very specific pattern. Generally, platform differences require completely different implementations and don't follow such patterns.
[1] Makefiles are used here illustratively. Your build system may not use make at all, e.g. if you use Visual Studio, CMake, SCons, etc.
[2] Even if common_stuff() actually has multiple implementations, varying per platform.
(Somewhat off the asked question)
I saw a tip once suggesting the use of #if(n)def/#endif blocks for use in debugging/isolating code instead of commenting.
It was suggested to help avoid situations in which the section to be commented already had documentation comments and a solution like the following would have to be implemented:
/* <-- begin debug cmnt
if (condition) /* comment */
/* <-- restart debug cmnt
{
    ....
}
*/ <-- end debug cmnt
Instead, this would be:
#ifdef IS_DEBUGGED_SECTION_X
if (condition) /* comment */
{
....
}
#endif
Seemed like a neat idea to me. Wish I could remember the source so I could link it :(
Because then, when you look through search results, you don't know whether the code is compiled in or out without reading it.
Because they should be used for OS/Platform dependencies, and therefore that kind of code should be in files like io_win.c or io_macos.c
My interpretation of this rule:
Your (algorithmic) program logic should not be influenced by preprocessor defines. The behavior of your code should always be consistent. Any other form of logic (platform, debug) should be abstracted away in header files.
This is more a guideline than a strict rule, IMHO.
But I agree that C-syntax-based solutions are preferable to preprocessor magic.
Conditional compilation is hard to debug. One has to know all the settings in order to figure out which block of code the program will execute.
I once spent a week debugging a multi-threaded application that used conditional compilation. The problem was that the identifier was not spelled the same. One module used #if FEATURE_1 while the problem area used #if FEATURE1 (Notice the underscore).
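The failure mode is silent because an undefined identifier in an #if expression simply evaluates to 0 (GCC's -Wundef warning catches this). A sketch, with made-up feature and function names:
#define FEATURE_1 1               /* the macro the build actually defines */

void start(void)
{
#if FEATURE1                      /* typo: FEATURE1 is undefined, so this is 0 */
    spawn_worker_threads();       /* silently compiled out, no diagnostic */
#endif
}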
I'm a big proponent of letting the makefile handle the configuration by including the correct libraries or objects. It makes the code more readable. Also, the majority of the code becomes configuration-independent, and only a few files remain configuration-dependent.
A reasonable goal but not so great as a strict rule
The advice to try and keep preprocessor conditionals in header files is good, as it allows you to select interfaces conditionally but not litter the code with confusing and ugly preprocessor logic.
However, there is lots and lots and lots of code that looks like the made-up example below, and I don't think there is a clearly better alternative. I think you have cited a reasonable guideline but not a great gold-tablet-commandment.
#if defined(SOME_IOCTL)
case SOME_IOCTL:
...
#endif
#if defined(SOME_OTHER_IOCTL)
case SOME_OTHER_IOCTL:
...
#endif
#if defined(YET_ANOTHER_IOCTL)
case YET_ANOTHER_IOCTL:
...
#endif
CPP is a separate (non-Turing-complete) macro language on top of (usually) C or C++. As such, it's easy to get mixed up between it and the base language if you're not careful. That's the usual argument against macros as opposed to, e.g., C++ templates, anyway. But #ifdef? Just go try to read someone else's code that you've never seen before and that has a bunch of ifdefs.
e.g. try reading these Reed-Solomon multiply-a-block-by-a-constant-Galois-value functions:
http://parchive.cvs.sourceforge.net/viewvc/parchive/par2-cmdline/reedsolomon.cpp?revision=1.3&view=markup
If you didn't have the following hint, it would take you a while to figure out what's going on: there are two versions, one simple, and one with a pre-computed lookup table (LONGMULTIPLY). Even so, have fun tracing the #if BYTE_ORDER == __LITTLE_ENDIAN. I found it a lot easier to read when I rewrote that bit to use a le16_to_cpu function (whose definition was inside #if clauses), inspired by Linux's byteorder.h stuff.
If you need different low-level behaviour depending on the build, try to encapsulate that in low-level functions that provide consistent behaviour everywhere, instead of putting #if stuff right inside your larger functions.
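For illustration, a minimal sketch of that style, assuming a glibc-style <endian.h>: the byte-order conditional lives in one tiny helper, and the larger functions just call le16_to_cpu everywhere.
#include <stdint.h>
#include <endian.h>   /* glibc-style; defines __BYTE_ORDER and __LITTLE_ENDIAN */

/* All byte-order #if logic is confined to this one helper. */
static inline uint16_t le16_to_cpu(uint16_t v)
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
    return v;                                /* host is little-endian: no-op */
#else
    return (uint16_t)((v << 8) | (v >> 8));  /* byte-swap on big-endian hosts */
#endif
}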
By all means, favor abstraction over conditional compilation. As anyone who has written portable software can tell you, however, the number of environmental permutations is staggering. Some design discipline can help, but sometimes the choice is between elegance and meeting a schedule. In such cases, a compromise might be necessary.
Consider the situation where you are required to provide fully tested code, with 100% branch coverage etc. Now add in conditional compilation.
Each unique symbol used to control conditional compilation doubles the number of code variants you need to test. So, one symbol - you have two variants. Two symbols, you now have four different ways to compile your code. And so on.
And this only applies for boolean tests such as #ifdef. You can easily imagine the problem if a test is of the form #if VARIABLE == SCALAR_VALUE_FROM_A_RANGE.
If your code will be compiled with different C compilers, and you use compiler-specific features, then you may need to determine which predefined macros are available.
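For example, a common detection ladder using well-known predefined macros (a sketch; consult each compiler's documentation for what it actually defines):
#if defined(_MSC_VER)
    /* Microsoft Visual C++ */
#elif defined(__clang__)
    /* Clang: test before __GNUC__, since clang defines that one too */
#elif defined(__GNUC__)
    /* GCC, or another compiler claiming GNU C compatibility */
#else
    /* Unknown compiler: fall back to standard C only */
#endif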
It's true that #if/#endif does complicate reading the code. However, I have seen a lot of real-world code that uses it without issues and is still going strong. So there may be better ways than #if/#endif, but using it is not that bad if proper care is taken.