Compile different code depending on whether a function is available or not - C

Up to Windows Vista, Windows provides only GetTickCount; starting with Vista it also provides GetTickCount64. How can I make a C program compile with calls to different functions depending on which is available?
How can I make a C compiler check whether a function is declared in the included header files and compile different portions of code depending on whether that particular function is available or not?
#if ??????????????????????????????
unsigned long long get_tick_count(void) { return GetTickCount64(); }
#else
unsigned long long get_tick_count(void) { return GetTickCount(); }
#endif
I'm looking for a working sample file, not just hints.
Edit: I tried the following using gcc 3.4.5 from MinGW on a (64-bit) Windows 7 RC but it didn't help. If this is a MinGW problem, how can I work around this issue?
#include <windows.h>
#if (WINVER >= 0x0600)
unsigned long long get_tick_count(void) { return 600/*GetTickCount64()*/; }
#else
unsigned long long get_tick_count(void) { return 0/*GetTickCount()*/; }
#endif

Compile-time selection of an API based on the target Windows version locks the built executable to that version and newer. This is a common technique for open-source, *nix-targeted projects, where it is assumed that the user will configure the source kit for their platform and do a clean build from source to install.
On Windows, this is not the usual technique because it isn't generally safe to assume that an end user will have a compiler at all, let alone want to deal with the intricacies of getting a project to build.
Often, just using the older API that is present in all versions of Windows is a sufficient answer. This is also simple: you just ignore the existence of a new API.
When that isn't sufficient, you use LoadLibrary() and GetProcAddress() to attempt to resolve the new symbol at run time. If it can't be resolved, then you fall back to the older API.
Here's a possible implementation. On the first call, it attempts to load the library and resolve the name "GetTickCount64". On every call, if the pointer to the resolved symbol is non-null, it calls it and returns the result; otherwise it falls back to the older API, casting its return value to match the wrapper's type.
unsigned long long get_tick_count(void) {
    static int first = 1;
    static ULONGLONG (WINAPI *pGetTickCount64)(void);
    if (first) {
        /* Resolve GetTickCount64 at run time; the pointer stays NULL on pre-Vista systems. */
        HMODULE hlib = LoadLibraryA("KERNEL32.DLL");
        pGetTickCount64 = (ULONGLONG (WINAPI *)(void))
            GetProcAddress(hlib, "GetTickCount64");
        first = 0;
    }
    if (pGetTickCount64)
        return pGetTickCount64();
    return (unsigned long long)GetTickCount();
}
Note that I used the ...A flavor of LoadLibrary since the library name is known to be ASCII (GetProcAddress itself always takes an ANSI symbol name)... if using this technique to load symbols from an installed DLL that might live in a folder named with non-ASCII characters, then you will need to worry about using the Unicode (...W) variant instead.
This is untested, your mileage will vary, etc...

You can achieve this using the preprocessor definitions from the Windows headers.
unsigned long long
get_tick_count(void)
{
#if WINVER >= 0x0600
return GetTickCount64();
#else
return GetTickCount();
#endif
}
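Note that WINVER and _WIN32_WINNT reflect the Windows version you are targeting, and MinGW's headers default to something older than Vista, which is probably why the asker's edit above had no effect. A sketch of forcing the target version yourself, assuming your headers actually declare GetTickCount64 for that target (old MinGW w32api headers may not, in which case the GetProcAddress approach above is the safer route):
/* Raise the target version before <windows.h> so the Vista-era declarations are visible.
   The resulting binary requires Vista or later whenever the first branch is compiled in. */
#define _WIN32_WINNT 0x0600
#define WINVER 0x0600
#include <windows.h>

unsigned long long get_tick_count(void)
{
#if WINVER >= 0x0600
    return GetTickCount64();
#else
    return (unsigned long long)GetTickCount();
#endif
}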

The right way to deal with this kind of problem is to check whether the function is available, but that cannot be done reliably while the project itself is compiling. You should add a configuration stage; its details depend on your build tool, but both cmake and scons, two cross-platform build tools, provide the necessary facilities. Basically, it goes like this:
/* config.h */
#define HAVE_GETTICKSCOUNT64_FUNC
And then in your project, you do:
#include "config.h"
#ifdef HAVE_GETTICKSCOUNT64_FUNC
....
#else
...
#endif
Although it looks similar to the obvious way, it is much more maintainable in the long term. In particular, you should avoid depending on versions as much as possible, and check for capabilities instead. Checking for versions quickly leads to complicated, interleaved conditionals, whereas with the technique above everything is controlled from one config.h, hopefully generated automatically.
Both scons and cmake have tests which are run automatically to check whether the function is available, and they define (or leave out) the corresponding variable in config.h depending on the result. The fundamental idea is to decouple capability detection from your code.
Note that this can also handle cases where you need to build binaries which run on a platform other than the build machine (say, run on XP even if built on Vista). If done properly, that is just a matter of swapping in a different config.h (you could have a script which generates config.h on any platform, and then gather the config.h files for Windows XP, Vista, etc.). I don't think it is specific to Unix at all.

Previous answers have pointed out checking for the particular #define that would be present for your particular case. This answer is for a more general case of compiling different code whether a function is available or not.
Rather than trying to do everything in the C file itself, this is the sort of thing where configure scripts really shine. If you were running on Linux, I would point you to the GNU Autotools without hesitation. I know there are ports available for Windows, at least if you're using Cygwin or MSYS, but I have no idea how effective they are.
A simple (and very very ugly) script that could work if you have sh handy (I don't have a Windows setup handy to test this on) would look something like this:
#!/bin/sh
# First, create a .c file that tests for the existence of GetTickCount64()
cat >conftest.c <<_CONFEOF
#include <windows.h>
int main() {
GetTickCount64();
return 0;
}
_CONFEOF
# Then, try to actually compile the above .c file
gcc conftest.c -o conftest.out
# Check gcc's return value to determine if it worked.
# If it returns 0, compilation worked so set CONF_HASGETTICKCOUNT64
# If it doesn't return 0, there was an error, so probably no GetTickCount64()
if [ $? -eq 0 ]
then
confdefs='-D CONF_HASGETTICKCOUNT64=1'
fi
# Now get rid of the temporary files we made.
rm conftest.c
rm -f conftest.out   # -f: the file won't exist if the test compile failed
# And compile your real program, passing CONF_HASGETTICKCOUNT64 if it exists.
gcc $confdefs yourfile.c
This should be easy enough to translate into your scripting language of choice. If your program requires extra include paths, compiler flags, or whatever, make sure to add the necessary flags to both the test compile and the real compile.
'yourfile.c' would look something like this:
#include <windows.h>
unsigned long long get_tick_count(void) {
#ifdef CONF_HASGETTICKCOUNT64
return GetTickCount64();
#else
return GetTickCount();
#endif
}

You're asking about C but the question is tagged C++ as well ...
In C++ you would use the SFINAE technique; see this similar question:
Is it possible to write a template to check for a function's existence?
But prefer the preprocessor definitions Windows provides when they are available.

If your code is going to run on OSes before Vista, you can't just compile your calls down to GetTickCount64(), because GetTickCount64() doesn't exist on an XP machine.
You need to determine at runtime which operating system you are running and then call the correct function. In general both calls need to be in the code.
Now, this may not apply in your case if you don't really need to call GetTickCount64() on Vista+ machines and GetTickCount() on pre-Vista machines. You may be able to just call GetTickCount() no matter what OS you're running on. There is no indication in the docs that I have seen that GetTickCount() is being removed from the API.
I would also point out that maybe GetTickCount() isn't the right thing to use at all. The docs say it returns a number of milliseconds, but in reality the precision of the function isn't even close to 1 millisecond. Depending on the machine (and there's no way to know at runtime AFAIK) the precision could be 40 milliseconds or even more. If you need 1 millisecond precision you should be using QueryPerformanceCounter(). In fact, there's really no practical reason to not use QPC in all cases where you'd use GetTickCount() anyway.

G'day,
Isn't NTDDI_VERSION what you need to look for?
Update: You want to check whether WINVER is 0x0600 or higher. If it is, then you're on Vista (or later).
Edit: For the semantic nitpickers, I meant running the compiler in a Vista environment. The question only refers to compiling, and to header files, which are only used at compile time. Most people understood the intent to be that you're compiling in a Vista environment. The question made no reference to runtime behaviour.
Unless someone is running Vista and compiling for Windows XP, maybe?
Sheesh!
HTH
cheers,

The Microsoft compiler will define _WIN64 when compiling for 64 bit machines.
http://msdn.microsoft.com/en-us/library/b0084kay%28VS.80%29.aspx
#if defined(_WIN64)
unsigned long long get_tick_count(void) { return GetTickCount64(); }
#else
unsigned long long get_tick_count(void) { return GetTickCount(); }
#endif

If you have to support pre-Vista, I would stick with only using GetTickCount(). Otherwise you have to implement runtime code that checks the Windows version and calls GetTickCount() on pre-Vista versions of Windows and GetTickCount64() on Vista and later. Since they return different-sized values (ULONGLONG vs. DWORD), you'll also need separate handling of what they return. Using only GetTickCount() (and checking for overflow) works in both situations, whereas using GetTickCount64() when it's available increases your code's complexity and doubles the amount of code you have to write.
Stick with using only GetTickCount() until you can be sure your app no longer has to run on pre-Vista machines.
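To illustrate the "checking for overflow" part, here is a minimal sketch (not from the original answer, not thread-safe, and it only works if it is called at least once per 32-bit wrap-around, i.e. roughly every 49 days):
#include <windows.h>

/* Extend GetTickCount() to 64 bits by counting wrap-arounds of the 32-bit value. */
unsigned long long get_tick_count_compat(void)
{
    static DWORD last = 0;                  /* previous raw 32-bit reading */
    static unsigned long long wraps = 0;    /* accumulated wrap-around offset */
    DWORD now = GetTickCount();
    if (now < last)                         /* the 32-bit counter wrapped past 0xFFFFFFFF */
        wraps += 0x100000000ULL;
    last = now;
    return wraps + now;
}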

Maybe this is a good replacement for GetTickCount():
#include <windows.h>

/* Call with value == 0 to start the timer, value == 1 to get the elapsed
   time in seconds since the last start. */
double __stdcall
thetimer (int value)
{
    static double freq = 0;
    static LARGE_INTEGER first;
    static LARGE_INTEGER second;
    if (0 == value)
    {
        if (freq == 0)
        {
            /* Query the counter frequency once; it is fixed at boot. */
            QueryPerformanceFrequency (&first);
            freq = (double) first.QuadPart;
        }
        QueryPerformanceCounter (&first);
        return 0;
    }
    if (1 == value)
    {
        QueryPerformanceCounter (&second);
        second.QuadPart = second.QuadPart - first.QuadPart;
        return (double) second.QuadPart / freq;
    }
    return 0;
}
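A possible usage of the function above (assuming it is compiled into the same program): value 0 starts the stopwatch, value 1 returns the elapsed seconds.
#include <stdio.h>

double __stdcall thetimer (int value);   /* defined above */

int main (void)
{
    thetimer (0);                   /* start */
    /* ... do some work ... */
    double elapsed = thetimer (1);  /* seconds since the start call */
    printf ("took %.6f s\n", elapsed);
    return 0;
}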

Related

Conditional compilation based on functionality in Linux kernel headers

Consider the case where I'm using some functionality from the Linux headers exported to user space, such as perf_event_open from <linux/perf_event.h>.
The functionality offered by this API has changed over time, as members have been added to the perf_event_attr, such as perf_event_attr.cap_user_time.
How can I write source that compiles and uses these new functionalities if they are available locally, but falls back gracefully if they aren't and doesn't use them?
In particular, how can I detect in the pre-processor whether this stuff is available?
I've used this perf_event_attr as an example, but my question is a general one because structure members, new structures, definitions and functions are added all the time.
Note that here I'm only considering the case where a process is compiled on the same system that it will run on: if you want to compile on one host and run on another you need a different set of tricks.
Use the macros from /usr/include/linux/version.h:
#include <linux/version.h>
int main() {
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,16)
// ^^^^^^ change for the proper version when `perf_event_attr.cap_user_time` was introduced
// use old interface
#else
// use new interface
// use perf_event_attr.cap_user_time
#endif
}
You might go into this with the following assumptions:
The features available in the header files correspond to those documented for the specific Linux version.
The kernel running during execution corresponds to the <linux/version.h> seen during compilation.
Ideally, I suggest not relying on either of these assumptions at all.
The first assumption fails primarily due to backports, e.g. in enterprise Linux versions based on ancient kernels. If you care about supporting different versions, you probably care about those backported kernels too.
Instead, I recommend using your build system's facilities for checking for struct members and include files, e.g. for CMake:
CHECK_STRUCT_HAS_MEMBER("struct perf_event_attr" cap_user_time linux/perf_event.h HAVE_PERF_CAP_USER_TIME)
CHECK_INCLUDE_FILES can also be useful.
The second assumption can fail for many reasons, even if the binary is not moved between systems, e.g. updating the kernel but not recompiling the binary, or simply booting another kernel. Specifically, perf_event_open fails with EINVAL if a reserved bit is set in the attribute structure. This allows you to retry with an alternative implementation that doesn't use the requested feature.
In short: statically, check for the feature instead of the version; dynamically, try the new interface and fall back to the legacy implementation if it fails.
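A rough sketch of that try-then-fall-back pattern (HAVE_PERF_USE_CLOCKID is a hypothetical macro that a build-system check like the one above would define; the use_clockid/clockid members, added to perf_event_attr in Linux 4.1, merely stand in for whatever newer feature you actually need):
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <time.h>

static int open_cpu_clock_counter(pid_t pid)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = PERF_TYPE_SOFTWARE;
    attr.config = PERF_COUNT_SW_CPU_CLOCK;

#ifdef HAVE_PERF_USE_CLOCKID
    attr.use_clockid = 1;                /* newer feature: choose the sampling clock */
    attr.clockid = CLOCK_MONOTONIC_RAW;
#endif

    int fd = (int) syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);

#ifdef HAVE_PERF_USE_CLOCKID
    if (fd == -1 && errno == EINVAL) {
        /* The running kernel rejected the newer bits: retry the legacy way. */
        attr.use_clockid = 0;
        attr.clockid = 0;
        fd = (int) syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);
    }
#endif
    return fd;   /* -1 on failure, check errno */
}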
Just an addition to the other answers.
If you're aiming to support both cross-version and cross-distro code, you should also keep in mind that there are distros (CentOS/RHEL) which backport some recent changes from new kernels into old ones. So you may encounter a situation where LINUX_VERSION_CODE equals some old kernel version, but there are changes (new fields in data structures, new functions, etc.) from a recent kernel. In such cases this macro alone is insufficient.
You can add something like the following (to avoid preprocessor errors in case it is not a CentOS distro):
#ifndef RHEL_RELEASE_CODE
#define RHEL_RELEASE_CODE 0
#endif
#ifndef RHEL_RELEASE_VERSION
#define RHEL_RELEASE_VERSION(x,y) 1
#endif
And use it with > or >= where you need:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4,3,0) || RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)
...
to support CentOS/RHEL custom kernels.
P.S. Of course it's necessary to examine the appropriate versions of CentOS/RHEL, and to understand when and what exactly changed in the code sections that affect you.

C/pre-processor: detect if a __builtin function is available

Is it possible to somehow determine whether an intrinsic function, such as __builtin_bswap16 is provided by the compiler? Preferably, I would like to be able to determine whether this function exists using just preprocessor.
In my particular case, I was using __builtin_bswap16 / 32 / 64 functions in my code which worked fine with GCC 4.x when compiling for 32-bit. Later I switched to a 64-bit Linux and noticed that __builtin_bswap16 suddenly disappeared - I received a linker error:
"undefined reference to `__builtin_bswap16'".
I guess this has something to do with the availability of certain ASM operations in 64-bit mode.
On a later occasion I was trying to compile this code on a different machine where unfortunately only an older version of GCC was installed, which does not support these functions at all.
I would like to make this code compilable everywhere, using __builtin_bswap functions if provided, and fall back to hand-coded byteswap routine if not. Is it possible to achieve this somehow with just preprocessor?
My obvious attempt, e.g.:
...
#define MYBSWAP16(v) ((((v) >> 8) | ((v) << 8)) & 0xFFFF)
#ifdef __builtin_bswap16
printf("bswap16 is defined : %04x\n", __builtin_bswap16(0x1234));
#else
printf("bswap16 is not defined : %04x\n", MYBSWAP16(0x1234) );
#endif
...
was not successful, as __builtin_bswap16/32/64 always evaluate as undefined. Is there any way to make this work automatically within the C source, or is the only way to manually define constants such as HAVE_BSWAP in the Makefile and pass them via the -D option?
Please note that my question is not necessarily specific to __builtin_bswap, I'm looking for a general way to detect if the certain functions are available.
Unavailability of __builtin_bswap16 is a gcc bug which was fixed in gcc 4.8.
Since it is missing from some versions of gcc, you can always add it to your code yourself:
static inline unsigned short __builtin_bswap16(unsigned short a)
{
return (a<<8)|(a>>8);
}
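If you'd rather not reuse the reserved __builtin_ name yourself, a sketch of a wrapper guarded by the GCC version macros (assuming the 4.8 cutoff mentioned above; the my_bswap16 name is just an example) could look like this:
#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8))
/* compiler is new enough: use the real builtin */
#define my_bswap16(x) __builtin_bswap16(x)
#else
/* fallback for older compilers */
static inline unsigned short my_bswap16(unsigned short a)
{
    return (unsigned short)((a << 8) | (a >> 8));
}
#endif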

Detecting users OS in terminal application, in C

How do I determine a user's OS in a terminal application, in C?
For example, in the code below, what should I replace windows and linux with?
/* pseudo code */
if(windows)
{system(cls)}
else if(linux)
{system(clear)}
else{...}
I should mention that I am a beginner at C, and need something like this so my code can work on Windows and/or Linux without maintaining separate source files for each.
Typically, this is done with macros in the build system (since you have to BUILD the code for each system anyway).
e.g. gcc -DLINUX myfile.c
and then in myfile.c
#ifdef LINUX
... do stuff for linux ...
#elif defined(WINDOWS)
... do something for windows ...
#elif ... and so on.
...
#endif
(Most of the time, you can find some way that doesn't actually require the addition of a -D<something> on the command line, by using predefined macros for the tools you are using to compile for that architecture).
Alternatively, you can do the same thing, but much quicker and better (though not 100% portable), by printing the ANSI escape sequence for "clear screen" (ESC, '[', "2J"):
fputs("\033[" "2J", stdout);
That's two adjacent string literals; the compiler concatenates them, and splitting the sequence this way makes it obvious where the \033 escape ends and the rest of the sequence begins.
I believe you can avoid a runtime check by specializing your 'functions' during compilation. So, how about this:
#ifdef _WIN32
#define CLEAR "cls"
#elif defined(__linux__)
#define CLEAR "clear"
#endif
Predefs vary from compiler to compiler, so here's a good list to have: http://sourceforge.net/p/predef/wiki/OperatingSystems/
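For completeness, a tiny usage sketch, assuming the CLEAR macro from the snippet above expands to the right command string for the platform:
#include <stdlib.h>

/* Clears the terminal by running the platform's command named by CLEAR. */
void clear_screen(void)
{
    system(CLEAR);
}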
It is probably better to detect the environment at compile time rather than at runtime. With compiled languages like C you aren't going to have the same compiler output running on different platforms as you would with a language such as Java, so you don't need to do this kind of check at runtime.
This is the header I use to work out what platform my code is being compiled on. It will define different macros depending on the OS (as well as other things).
Something like this in use:
#if defined(UTIL_PLATFORM_WINDOWS)
printf("windows\n");
#elif defined(UTIL_PLATFORM_UNIXLIKE)
printf("Unix\n");
#endif

Most standard way to select a function name depending on platform?

I am currently using the popen function in code that is compiled by two compilers: MS Visual Studio and gcc (on Linux). I might want to add gcc (on MinGW) later.
The function is called popen for gcc, but _popen for MSVS, so I added the following to my source code:
#ifdef _MSC_VER
#define popen _popen
#define pclose _pclose
#endif
This works, but I would like to understand whether there exists a standard solution for such problems (I recall a similar case with stricmp/strcasecmp). Specifically, I would like to understand the following:
1. Is _MSC_VER the right flag to depend on? I chose it because I have the impression that the Linux environment is "more standard".
2. If I put these #define's in some header file, is it important whether I #include it before or after stdio.h (for the case of popen)?
3. If _popen is defined as a macro itself, is there a chance my #define will fail? Should I use a "new" token like my_popen instead, for that reason or another?
4. Did someone already do this job for me and make a good "portability header" file that I can use?
5. Anything else I should be aware of?
1. Better to check for a Windows-specific define (_WIN32 perhaps), because MinGW won't define _MSC_VER either. popen() is standardised (it's a part of the Single UNIX® Specification v2).
2. No; as long as the macro is defined before its first use, it does not matter if _popen() is not declared until later.
3. No; what you have is fine even if _popen is a macro.
4. It's been done many times, but I don't know of a freely-licensed version you can use.
The way you are doing it is fine (with the #ifdef etc.), but the macro you test isn't the right one: popen depends on your operating system, not your compiler.
I'd go for something like
#if defined(_POSIX_C_SOURCE) && (_POSIX_C_SOURCE >= 2)
/* system has popen as expected */
#elif defined(YOUR_MACRO_TO_DETECT_YOUR_OS)
# define popen _popen
# define pclose _pclose
#elif defined(YOUR_MACRO_TO_DETECT_ANOTHER_ONE)
# define popen _pOpenOrSo
# define pclose _pclos
#else
# error "no popen, we don't know what to do"
#endif
1. _MSC_VER is the correct macro for detecting the MSVC compiler. You can use __GNUC__ for GCC.
2. If you are going to use popen as your macro ID, I suggest you #include it after, because of 3.
3. If you #include it after stdio.h, it should work AFAIK, but better safe than sorry, no? Call it portable_popen or something.
4. Many projects (including some of mine) have a portability header, but it's usually better to roll your own. I'm a fan of doing things yourself if you have the time. Thus you know the details of your code (easier to debug if things go wrong), and you get code that is tailored to your needs.
5. Not that I know of. I do stuff like this all the time, without problems.
Instead of ending up with cluttered files containing #ifdef..#else..#endif blocks, I'd prefer a version using different files for different platforms:
put the OS dependent definitions in one file per platform and #define a macro my_popen
#include this file in your platform-agnostic code
never call the OS functions directly, but the #define that you created (i.e. my_popen)
depending on your OS, use different headers for compilation (e.g. config/windows/mydefines.h on Windows and config/linux/mydefines.h on Linux); set the include path appropriately and always #include "mydefines.h", as sketched below
That's a much cleaner approach than having the OS decision in the source itself.
If the functions you're calling behave differently on Windows and Linux, decide which behavior you want (i.e. either always the Windows behavior or always the Linux behavior) and then create wrapper functions to achieve it. For that, you'll need not only the two mydefines.h files but also two myfunctions.c files that reside in the config/OSTYPE directories.
Doing it that way also has advantages when it comes to diffing the Linux and Windows versions: you can simply diff two files, whereas diffing the Linux and Windows blocks within the same file would be harder.
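A minimal sketch of that layout (the file names come from the answer itself; the my_pclose wrapper is added by analogy and is an assumption):
/* config/windows/mydefines.h (sketch) */
#define my_popen  _popen
#define my_pclose _pclose

/* config/linux/mydefines.h (sketch) */
#define my_popen  popen
#define my_pclose pclose

/* platform-agnostic code: the include path decides which mydefines.h gets picked up */
#include <stdio.h>
#include "mydefines.h"

FILE *run_command(const char *cmd)
{
    return my_popen(cmd, "r");   /* always go through the wrapper name */
}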

How to solve this compatibility-problem regarding large file support?

A library uses off_t as a parameter for one function (a seek). Library and application are compiled differently, one with large file support switched off, the other with it switched on. This situation results in strange runtime errors, because the two interpret off_t differently. How can the library check at runtime the size of off_t as seen by the app? Or is there another solution, so that at least the user gets a meaningful error?
EDIT: The library (written in C, with autoconf) already exists and some third-party applications use it. The library can be compiled with large file support (by default via AC_SYS_LARGEFILE). It is multiplatform, not only Linux. How can it be detected/prevented that installed applications will be broken by the change in LFS?
You could add an API to the library to return the sizeof(off_t) and then check it from the client. Alternatively the library could require every app to provide the API in order to successfully link:
library.c:
size_t lib_get_off_t_size (void)
{
return (sizeof(off_t));
}
client.c (init_function):
if (lib_get_off_t_size() != sizeof(off_t)) {
    printf("Oh no!\n");
    exit(1);
}
If the library has an init function then you could put the check there, but then the client would have to supply the API to get the size of its off_t, which generally isn't how libraries work.
On Linux, when the library is compiled with large file support switched on, off_t is defined to be the same as off64_t. So, if the library is the one compiled with large file support, you could change its interface to always use off64_t instead of off_t (this might need _LARGEFILE64_SOURCE) and completely avoid the problem.
You can also check whether the application is being compiled with large file support or not (by seeing if _FILE_OFFSET_BITS is not defined or 32) and refuse compiling (with #error) if it's being compiled the wrong way; see /usr/include/features.h and Feature Test Macros.
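A sketch of that compile-time guard, to be placed in the library's installed header; it follows the answer's suggestion literally, so it assumes a platform where off_t is 32-bit unless _FILE_OFFSET_BITS=64 is defined (i.e. a 32-bit Linux userland):
/* Refuse to compile clients whose off_t would not match the library's ABI. */
#if !defined(_FILE_OFFSET_BITS) || _FILE_OFFSET_BITS != 64
#error "This library was built with large file support; compile with -D_FILE_OFFSET_BITS=64"
#endif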
As said before, the library cannot know how the application (its client) is compiled, but the other way round has to work. Besides, I think you are talking about dynamic linking, since with static linking both would certainly be built with the same switches at the same time.
Similar to the answer already given by "Andrew Johnson", the library could provide a method for finding out whether it was compiled with large file support or not. Since such build-time switches are usually done with defines in C, this could look like this:
//in library:
BOOL lib_isLargeFileSupport (void)
{
#ifdef LARGE_FILE_SUPPORT
return TRUE;
#else
return FALSE;
#endif
}
The application then knows how to handle file sizes reported by that lib, or can refuse to work when incompatible:
//in application
BOOL bLibLFS = lib_isLargeFileSupport();
BOOL bAppLFS = FALSE;
#ifdef LARGE_FILE_SUPPORT
bAppLFS = TRUE;
#endif
if (bLibLFS != bAppLFS)
{
    /* incompatible versions, bail out */
    exit(1);
}
