I want to write a portable way to get the free disk space. On Windows, I use GetDiskFreeSpaceEx, and on Linux, the header <sys/statvfs.h> contains the function statvfs64() I can use.
My question is: on which systems can I assume that this header exists? Is there a macro I can check? Something like
#ifdef _MSC_VER
#include <windows.h>
#else
#ifdef STATVFS_IS_AVAILABLE
#include <sys/statvfs.h>
#endif
#endif
Generally, you would use autotools for stuff like that. autoconf generates a config.h header that defines a macro such as HAVE_SYS_STATVFS_H if you add a suitable configure test.
However, given how widely available <sys/statvfs.h> is otherwise, you can also, less portably, simply test for _MSC_VER, as you already do.
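For illustration, a minimal sketch of how that might look with autoconf (AC_CHECK_HEADERS defines HAVE_SYS_STATVFS_H in config.h; the free_bytes wrapper and its name are my own assumption, not part of the question, and I test _WIN32 rather than _MSC_VER so that MinGW also takes the Windows branch):
configure.ac:
AC_CHECK_HEADERS([sys/statvfs.h])
diskfree.c:
#include "config.h"
#ifdef _WIN32
#include <windows.h>
#elif defined(HAVE_SYS_STATVFS_H)
#include <sys/statvfs.h>
#endif

/* Free bytes available to the caller on the filesystem containing `path`, or 0 on error. */
unsigned long long free_bytes(const char *path)
{
#ifdef _WIN32
    ULARGE_INTEGER avail;
    return GetDiskFreeSpaceExA(path, &avail, NULL, NULL) ? avail.QuadPart : 0;
#elif defined(HAVE_SYS_STATVFS_H)
    struct statvfs vfs;
    return statvfs(path, &vfs) == 0 ? (unsigned long long)vfs.f_bavail * vfs.f_frsize : 0;
#else
    return 0; /* no known way to query free space on this platform */
#endif
}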
Related
I'm writing the header of a kernel module. The header is known to the module, but also used by callers in user space. This is a problem, because some types used should be included from different files depending on whether the header is currently in user or kernel space (or so this question makes me think).
I don't want to maintain two separate header files, so I've been thinking of a solution like this:
#ifndef IN_KERNEL
#include <stdint.h>
#else
#include <linux/types.h>
#endif
With IN_KERNEL being defined somewhere in my kernel code. Is there a preprocessor constant that already does this?
From reading this, it seems that an existing constant used for this purpose is __KERNEL__.
#ifndef __KERNEL__
#include <stdint.h>
#else
#include <linux/types.h>
#endif
I have a header file foo.h that declares a function prototype
void foo(FILE *f);
/* ... Other things that don't depend on FILE ... */
among other things.
Now obviously, to use this header, I need to do the following
#include <stdio.h>
#include "foo.h"
I would like to surround this particular prototype with something like the following:
#ifdef _STDIO_H
void foo(FILE *f);
#endif
/* ... Other things that don't depend on FILE ... */
so that I can #include "foo.h" without worrying about #include <stdio.h> in cases where I don't need that particular function.
Is the #ifdef _STDIO_H the way to go if I want my code to be portable and standards compliant?
I could find no mention of _STDIO_H in the standards document, but I see it is used in a variety of C libraries. Should I rather use something that I know to be defined in stdio.h, like EOF?
A related question: What do you do for other standard C headers, like stdlib.h?
<stdio.h> and <stdlib.h> are part of the C99 (and C11) standards. So every (hosted) standard conforming C implementation has them.
On most practical implementations, they are header files with some include guards.
A standard conforming implementation might process #include <stdio.h> in some special way, e.g. by consulting a database, but I know of no such implementation.
So simply add
#include <stdio.h>
near the top of your header file, something like
// file foo.h
#ifndef FOO_INCLUDED
#define FOO_INCLUDED
#include <stdio.h>
// other includes ...
// ...
// other stuff
#endif /* FOO_INCLUDED */
Alternatively, you could simply document that #include "foo.h" requires a prior #include <stdio.h>; any sensible developer using a good-enough C implementation will be able to take care of that.
Actually, I was wrong in my comment on Alter Mann's deleted answer. It turns out stdin is required to be a macro, so you could use #ifdef stdin ... #endif as Alter Mann correctly answered. I believe it is not very readable, though, and you really just want <stdio.h> included, either by including it yourself in your foo.h or by requiring it in your documentation.
Unlike C++ standard headers, C standard headers are in practice quite quick to compile, so I don't think it is worth optimizing for the unusual case in which <stdio.h> has not been included.
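For reference, a minimal sketch of that stdin-based guard (shown only for illustration; I would still prefer the plain #include):
/* foo.h */
#ifdef stdin /* <stdio.h> has already been included, so FILE is visible */
void foo(FILE *f);
#endif
/* ... other things that don't depend on FILE ... */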
Open your stdio.h file (for your compiler) and see whether it has _STDIO_H or a similar definition.
Is there a common practice for userspace programs to include the ioctl codes used by a kernel module?
mydev.h:
#ifndef MYDEV_H
#define MYDEV_H
#include <linux/ioctl.h>
#define MYDEV_IOC_MAGIC 'C'
#define MYDEV_IOC_FOO _IO(MYDEV_IOC_MAGIC, 0)
#define MYDEV_IOC_BAR _IOW(MYDEV_IOC_MAGIC, 1, int)
#endif
I typically put my ioctl codes in a header which I include in my kernel module code. I considered just including this header in my userspace applications, but I realized that the linux/ioctl.h file path may not exist on most systems (e.g. systems with no exported kernel headers).
The solution seems to be to change the include line to: #include <sys/ioctl.h>; but then I couldn't use this header for my kernel module.
Is there a better solution to this problem, or is it common to have two separate but nearly identical header files?
You could leverage the __KERNEL__ macro.
#ifdef __KERNEL__
#include <linux/ioctl.h>
#else
#include <sys/ioctl.h>
#endif
You may have to abstract the actual ioctl values too, but you get the idea.
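For what it's worth, here is a hedged sketch of how a userspace caller could then use the shared header (the /dev/mydev node name and the int argument are assumptions for illustration; <sys/ioctl.h> is pulled in via the #else branch above):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include "mydev.h"

int main(void)
{
    int fd = open("/dev/mydev", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    int arg = 42;
    if (ioctl(fd, MYDEV_IOC_BAR, &arg) < 0)
        perror("ioctl MYDEV_IOC_BAR");
    close(fd);
    return 0;
}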
Let's assume I define BAR in foo.h. But foo.h might not exist. How do I include it, without the compiler complaining at me?
#include "foo.h"
#ifndef BAR
#define BAR 1
#endif
int main()
{
return BAR;
}
Therefore, if BAR was defined as 2 in foo.h, then the program would return 2 if foo.h exists and 1 if foo.h does not exist.
In general, you'll need to do something external to do this - e.g. by doing something like playing around with the search path (as suggested in the comments) and providing an empty foo.h as a fallback, or wrapping the #include inside a #ifdef HAS_FOO_H...#endif and setting HAS_FOO_H by a compiler switch (-DHAS_FOO_H for gcc/clang etc.).
If you know that you are using a particular compiler, and portability is not an issue, note that some compilers do support including a file which may or may not exist, as an extension. For example, see clang's __has_include feature.
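For example, a sketch using __has_include (an extension in clang and recent GCC, only standardized later in C23, so guard the check itself):
#if defined(__has_include)
#  if __has_include("foo.h")
#    include "foo.h"
#  endif
#endif

#ifndef BAR
#define BAR 1 /* fallback when foo.h is absent or did not define BAR */
#endif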
Use a tool like GNU Autoconf; that's what it's designed for. (On Windows, you may prefer CMake.)
So in your configure.ac, you'd have a line like:
AC_CHECK_HEADERS([foo.h])
After running configure, this defines HAVE_FOO_H, which you can test like this:
#ifdef HAVE_FOO_H
#include "foo.h"
#else
#define BAR 1
#endif
If you intend to go down the autotools route (that is autoconf and automake, because they work well together), I suggest you start with this excellent tutorial.
Should header files have #includes?
I'm generally of the opinion that this kind of hierarchical include is bad. Say you have this:
foo.h:
#include <stdio.h> // we use something from this library here
struct foo { ... } foo;
main.c
#include "foo.h"
/* use foo for something */
printf(...)
The day main.c's implementation changes and you no longer use foo.h, the compilation will break and you will have to add <stdio.h> by hand.
Versus having this:
foo.h
// Warning! we depend on stdio.h
struct foo {...
main.c
#include <stdio.h> //required for foo.h, also for other stuff
#include "foo.h"
And when you stop using foo, removing it breaks nothing, but removing stdio.h will break foo.h.
Should #includes be banned from .h files?
You've outlined the two main philosophies on this subject.
My own opinion (and I think that's all one can really have on this) is that headers should be as self-contained as possible. I don't want to have to know all the dependencies of foo.h just to be able to use that header. I also despise having to include headers in a particular order.
However, the developer of foo.h should also take responsibility for making it as dependency-free as possible. For example, the foo.h header should be written to be free of a dependency on stdio.h if that's at all possible (using forward declarations can help with that).
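As an illustration of the forward-declaration idea (struct bar, bar.h and the function name are hypothetical):
/* foo.h - no #include of bar.h needed: bar is only used through a pointer */
struct bar;                       /* forward declaration instead of #include "bar.h" */
void foo_process(struct bar *b);  /* callers that actually pass a bar* include bar.h themselves */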
Note that the C standard forbids a standard header from including another standard header, but the C++ standard doesn't. So you can see the problem you describe when moving from one C++ compiler version to another. For example, in MSVC, including <vector> used to bring in <iterator>, but that no longer occurs in MSVC 2010, so code that compiled before might not compile any more because you may need to include <iterator> explicitly.
However, even though the C standard might seem to advocate the second philosophy, note that it also mandates that no header depend on another and that you can include headers in any order. So you get the best of both worlds, but at a cost of complexity for the implementers of the C library. They have to jump through some hoops to do this (particularly to support definitions that can be brought in through any of several headers, like NULL or size_t). I guess the people who drafted the C++ standard decided that imposing that complexity on implementers was no longer reasonable (I don't know to what degree C++ library implementers take advantage of the 'loophole' - it looks like MS might be tightening this up, even if it's not technically required).
My general recommendations are:
A file should #include what it needs.
It should not expect something else to #include something it needs.
It should not #include something it doesn't need because something else might want it.
The real test is this: you should be able to compile a source file consisting of any single #include and get no errors or warnings beyond "There is no main()". If you pass this test, then you can expect anything else to be able to #include your file with no problems. I've written a short script called "hcheck" which I use to test this:
#!/usr/bin/env bash
# hcheck: Check header file syntax (works on source files, too...)
if [ $# -eq 0 ]; then
echo "Usage: $0 <filename>"
exit 1
fi
for f in "$@" ; do
case $f in
*.c | *.cpp | *.cc | *.h | *.hh | *.hpp )
echo "#include \"$f\"" > hcheck.cc
printf "\n\033[4mChecking $f\033[0m\n"
make -s hcheck.o
rm -f hcheck.o hcheck.cc
;;
esac
done
I'm sure there are several things that this script could do better, but it should be a good starting point.
If this is too much, and if your header files almost always have corresponding source files, then another technique is to require that the associated header be the first #include in the source file. For example:
Foo.h:
#ifndef Foo_h
#define Foo_h
/* #includes that Foo.h needs go here. */
/* Other header declarations here */
#endif
Foo.c:
#include "Foo.h"
/* other #includes that Foo.c needs go here. */
/* source code here */
This also shows the "include guards" in Foo.h that others mentioned.
By putting #include "Foo.h" first, Foo.h must #include its dependencies, otherwise you'll get a compile error.
Well, main shouldn't rely on "foo.h" in the first place for stdio. There's no harm in including something twice.
Also, perhaps foo.h doesn't really need stdio. What's more likely is that foo.c (the implementation) needs stdio.
Long story short, I think everyone should just include whatever they need and rely on include guards.
Once you get into projects with hundreds or thousands of header files, this gets untenable. Say I have a header file called "MyCoolFunction.h" that contains the prototype for MyCoolFunction(), and that function takes pointers to structs as parameters. I should be able to assume that including MyCoolFunction.h will include everything that's necessary and allow me to use that function without looking in the .h file to see what else I need to include.
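To make that concrete, here is a sketch of what a self-contained MyCoolFunction.h could look like (widget.h and struct widget are hypothetical names standing in for whatever the prototype needs):
/* MyCoolFunction.h */
#ifndef MYCOOLFUNCTION_H
#define MYCOOLFUNCTION_H

#include "widget.h"   /* defines struct widget used in the prototype below */

void MyCoolFunction(struct widget *in, struct widget *out);

#endif /* MYCOOLFUNCTION_H */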
If the header file needs a specific header, add it to the header file
#ifndef HEADER_GUARD_YOUR_STYLE
#define HEADER_GUARD_YOUR_STYLE
#include <stdio.h> /* FILE */
int foo(FILE *);
#endif /* HEADER GUARD */
if the code file doesn't need a header, don't add it
/* #include <stdio.h> */ /* removed because unneeded */
#include <stddef.h> /* NULL */
#include "header.h"
int main(void) {
foo(NULL);
return 0;
}
Why don't you #include stuff in the *.c file corresponding to the header?