Portability of dlfunc?

I'm reading through the manpage for dlopen and friends on FreeBSD. I'm working on a cross-platform application that uses shared libraries for loadable plugins. I've never done this before, but I think I have a decent grasp of how it works. The manpage mentions dlsym(), which appears to be the common means of getting a function pointer from a shared library, and dlfunc(), which supposedly avoids compiler complaints about casting a void * to a function pointer. Is there a reason dlsym() is more common (portability?)? Should I use dlfunc() to avoid the compiler complaints, or stick with dlsym()? Is dlfunc() portable?

You can't expect to have a dlfunc provided on other UNIXes, but its implementation is straightforward and portable. You can do something like
# configure.ac
AC_USE_SYSTEM_EXTENSIONS
AC_CHECK_FUNCS([dlfunc])

// some common header
#include "config.h"

#ifndef HAVE_DLFUNC
/* copied from FreeBSD, include/dlfcn.h */
struct __dlfunc_arg {
    int __dlfunc_dummy;
};
typedef void (*dlfunc_t)(struct __dlfunc_arg);
dlfunc_t dlfunc(void *restrict handle, const char *restrict symbol);
#endif

// some source file
#include "config.h"

#ifndef HAVE_DLFUNC
#include <dlfcn.h>
/* copied from FreeBSD, lib/libc/gen/dlfunc.c */
dlfunc_t dlfunc(void *restrict handle, const char *restrict symbol) {
    /* a union converts an object pointer to a function pointer
       without the explicit cast the compiler warns about */
    union {
        void *d;
        dlfunc_t f;
    } rv;

    rv.d = dlsym(handle, symbol);
    return rv.f;
}
#endif
if you are using Autoconf; other build and configuration systems probably have similar abilities. (dlsym itself is much more widely available.)
That being said, I think the compiler warning is silly – the C standard does not guarantee it, but POSIX does: a void * pointer can safely represent any function pointer…
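For completeness, here is what a call site might look like. This is a minimal sketch, where plugin.so and plugin_init are made-up names and error handling is abbreviated:

#include <dlfcn.h>
#include <stdio.h>

typedef int (*init_fn)(void);

int main(void) {
    void *h = dlopen("./plugin.so", RTLD_NOW);   /* hypothetical plugin */
    if (h == NULL) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    /* dlfunc() already returns a function pointer type, so converting
       to another function pointer type needs no object-to-function cast */
    init_fn init = (init_fn)dlfunc(h, "plugin_init");
    if (init != NULL)
        printf("plugin_init() returned %d\n", init());
    dlclose(h);
    return 0;
}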

When you say cross-platform, do you mean cross-POSIX platforms, or do you need Windows support too?
If you're working in C++ you could have a look at the Boost.Extension proposal code. This takes care of Windows vs. UNIX portability.
If you're looking for UNIX-only advice, have a look at the Single UNIX Specification.
As far as I know, dlsym is the standard UNIX way to do things. Windows has an equivalent but completely different way of doing things.
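To make the difference concrete, here is a rough sketch of the two flavors side by side (the plugin file names and the plugin_entry symbol are made up; error handling is trimmed):

#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
#else
#include <dlfcn.h>
#endif

typedef int (*plugin_fn)(void);

int main(void) {
#ifdef _WIN32
    /* Windows flavor: LoadLibrary + GetProcAddress */
    HMODULE h = LoadLibraryA("plugin.dll");        /* hypothetical name */
    plugin_fn fn = h ? (plugin_fn)GetProcAddress(h, "plugin_entry") : NULL;
#else
    /* POSIX flavor: dlopen + dlsym; note the object-to-function
       cast here is exactly what dlfunc() exists to avoid */
    void *h = dlopen("./plugin.so", RTLD_NOW);     /* hypothetical name */
    plugin_fn fn = h ? (plugin_fn)dlsym(h, "plugin_entry") : NULL;
#endif
    if (fn != NULL)
        printf("plugin_entry() returned %d\n", fn());
    return 0;
}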

Related

Handling OS function name mismatch with macros

My current code is:
int my_func(int a)
{
#ifdef _WIN32
    return _func(a);
#else
    return func(a);
#endif
}
But I think it would be better if it was something like:
#ifdef _WIN32
#define common_func(a) _func(a)
#else
#define common_func(a) func(a)
#endif
int my_func(int a)
{
    return common_func(a);
}
Now, I have never done this and I don't know what I am doing. The K&R cat() example is confusing. I basically just want to get the #ifdefs out of the function, because it gets messy when I have to call the function several times.
func() and _func() are the same function; Windows just thought it would be a great idea to prepend it with an underscore. So it's not even a macro function but more like a function alias, maybe. Or a wrapper?
Does this impact performance? Does the generated code differ from version 1?
Is there some trick, since the difference is just the underscore?
I want to do it correctly and properly.
Please help.
Windows thought it would be a great idea to prepend it with an underscore. So it's not even a macro function but more like a function alias maybe. Or a wrapper?
Most likely it's simply a differently-named function that has the same specification. It's probably not a wrapper or an alias. Functions such as POSIX open(), read(), and write() are not defined by the C language specification, unlike the corresponding fopen(), fread(), and fwrite(). Generally speaking, C implementations must provide the latter group, but they have no obligation to provide the former.
Does this impact performance?
Conditional compilation directives such as #ifdef are evaluated at compile time. They themselves have no runtime impact at all, and they are pretty cheap at compile time.
Does the generated code differ from version 1?
No. The two versions of your code are 100% equivalent.
Is there some trick, since the difference is just the underscore?
If there are several functions you want to use where the Windows versions differ in name from (I suppose) the POSIX version by a leading underscore, then you might think it worth your while to define a common macro that you can reuse for all of them, instead of rolling a separate thing for each individual function. Maybe something like this:
#ifdef _WIN32
#define POSIX_MANGLE(f) _ ## f
#else
#define POSIX_MANGLE(f) f
#endif
You would use that something like so:
void do_something(int a) {
    POSIX_MANGLE(func1)(a);
    POSIX_MANGLE(func2)(a);
}
On Windows (technically, wherever _WIN32 is defined), that is then equivalent to ...
void do_something(int a) {
    _func1(a);
    _func2(a);
}
Anywhere else, it is equivalent to ...
void do_something(int a) {
    func1(a);
    func2(a);
}

Why do we need feature test macros?

By reading What does -D_XOPEN_SOURCE do/mean?, I understand how to use feature test macros.
But I still don't understand why we need them. I mean, can't we just enable all available features? Then the docs would simply say: this function is only available on Mac/BSD, that function is only available on Linux; if you use it, your program will only run on that system.
So why do we need a feature test macro in the first place?
why do we need it, I mean, can we just enable all features available?
Imagine some company has written perfectly fine super portable code roughly like the following:
#include <stdlib.h>

struct someone_s { char name[20]; };

/// @brief grants a plant to someone
int grantpt(int plant_no, struct someone_s someone) {
    // some super plant-granting algorithm here
    return 0;
}

int main() {
    // some program here
    struct someone_s kamil = { "Kamil" };
    return grantpt(20, kamil);
}
That program is completely fine, works correctly, and is very C compatible, so it should be portable anywhere. Now imagine for a moment that _XOPEN_SOURCE did not exist! A customer receives the sources of that program and tries to compile and run it on his bleeding-edge Unix computer with a certified C compiler on a certified POSIX system, and he receives an error that the company has to fix, and in turn pay for:
/tmp/1.c:7:5: error: conflicting types for ‘grantpt’; have ‘int(int, struct someone_s)’
    7 | int grantpt(int plant_no, struct someone_s someone) {
      |     ^~~~~~~
In file included from /tmp/1.c:1:
/usr/include/stdlib.h:977:12: note: previous declaration of ‘grantpt’ with type ‘int(int)’
  977 | extern int grantpt (int __fd) __THROW;
      |            ^~~~~~~
Looks like a completely random name picked for a function is already taken in POSIX - grantpt().
When introducing new symbols that are not in reserved space, standards like POSIX can't just "add them" and expect the world not to protest – conflicting definitions can and do break valid programs. To battle this issue, feature test macros were introduced. When a program does #define _XOPEN_SOURCE 500, it means that it is prepared for that version of the POSIX standard and that there are no conflicts between the code and the symbols POSIX introduces.
Feature test macros do not just mean "my program wants to use these functions"; most importantly, they mean "my program has no conflicts with these functions", which matters far more, because it is what lets existing programs continue to run.
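Concretely, the opt-in has to happen before any header is included (or, equivalently, on the command line, e.g. cc -D_XOPEN_SOURCE=500). A minimal sketch:

/* The feature test macro must precede the first #include;
   headers decide what to declare at the moment they are processed. */
#define _XOPEN_SOURCE 500

#include <stdlib.h>   /* now also declares the POSIX grantpt(int) */

int main(void) {
    /* by defining _XOPEN_SOURCE, this program promises not to
       define its own conflicting grantpt() */
    return 0;
}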
The theoretical reason why we have feature selection macros in C is to get the C library out of your way. Suppose, hypothetically, that you want to use the name getline for a function in your program. The C standard says you can do that. But some operating systems provide a C library function called getline, as an extension, and its declaration would probably clash with your definition. With feature selection macros, you can, in principle, tell those OSes' stdio.h not to declare their getline so you can use yours.
In practice, these macros are too coarse-grained to be useful. The only ones that get used are the ones that mean "give me everything you've got", and people do exactly what you speculate they could do in the documentation.
Newer programming languages (Ada, C++, Modula-2, etc.) have a concept of "modules" (sometimes also called "namespaces") which allow the programmer to give an exact list of what they want from the runtime library; this works much better.
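To illustrate the getline case, here is a sketch, assuming glibc and a strict compile mode such as gcc -std=c11 (which keeps the headers from declaring the POSIX getline):

#include <stdio.h>

/* our own getline(), unrelated to POSIX's; this compiles only
   because strict mode keeps <stdio.h> from declaring its getline */
int getline(char *buf, int size) {
    return (fgets(buf, size, stdin) != NULL) ? 0 : -1;
}

int main(void) {
    char buf[128];
    if (getline(buf, (int)sizeof buf) == 0)
        printf("got: %s", buf);
    return 0;
}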
Why do we need feature test macros?
You use feature test macros to determine if the implementation supports certain features or if you need to select an alternative way to implement whatever it is you're implementing.
One example is the set of *_s functions, like strcpy_s:
errno_t strcpy_s(char *restrict dest, rsize_t destsz, const char *restrict src);

// put this first to signal that you actually want the LIB_EXT1 functions
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
Then in your code:
#ifdef __STDC_LIB_EXT1__
    errno_t err = strcpy_s(...); // the implementation supports it, so you can use it
#else
    // not supported; use an alternative, like your own implementation,
    // or let it fail to compile
#endif
can we just enable all features available?
When it comes to why you need to tell the implementation that you actually want a certain set of features (instead of it just enabling them all automatically), I have no better answer than that it could make programs slower to compile and could produce bigger executables than necessary.
Similarly, the implementation does not link with every library it has available, but only the most basic ones. You have to tell it what you need.
In theory, you could create a header file which defines all the possible macros that you've found will enable a certain set of features.
#define _XOPEN_SOURCE 700
#define __STDC_WANT_LIB_EXT1__ 1
...
But as you see with _XOPEN_SOURCE, there are different releases, and you can't enable them all at the same time; you need to select one.

C99: Custom implementations of non-standard functions

In my project, I want to use a non-standard library function which may not be defined on certain systems. In my case, it is strlcpy.
From man strcpy:
Some systems (the BSDs, Solaris, and others) provide the following function:
size_t strlcpy(char *dest, const char *src, size_t size);
...
My system does not implement strlcpy, so I rolled my own. All is well, until compiling on a system that already has strlcpy defined: error: conflicting types for strlcpy.
My question: how can I implement a function that may cause naming conflicts down the road? Can I use some directive like #ifdef some_macro(strlcpy), or am I simply left with renaming strlcpy to my_strlcpy?
Check whether you need to include it at all. Example (the list of systems will be much longer, I think):
#if !defined(__FreeBSD__)
#include "mystring.h"
#endif
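A more robust variant, in the spirit of the dlfunc answer earlier on this page, is to probe for the function at configure time instead of hard-coding system names. A sketch, assuming the build system defines HAVE_STRLCPY (e.g. via Autoconf's AC_CHECK_FUNCS([strlcpy])):

#include "config.h"
#include <string.h>

#ifndef HAVE_STRLCPY
/* fallback with the BSD semantics: copy at most size-1 bytes,
   always NUL-terminate (if size > 0), and return strlen(src) */
size_t strlcpy(char *dst, const char *src, size_t size) {
    size_t len = strlen(src);
    if (size > 0) {
        size_t n = (len < size - 1) ? len : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}
#endif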

Makefile with unimplemented functions in library

First of all, I've been searching for an answer here and I haven't been able to find one. If this question really is a duplicate, please redirect me to the right answer and I'll delete it right away. My problem is that I'm making a C library that has a few unimplemented functions in the .h file, which will need to be implemented in the main.c that uses the library. However, an implemented function in the library calls them. The makefile for the library gives me "undefined reference to" errors for every function that's not implemented, so when I try to link the .o files against the main.c file that does have those implementations, I can't, because the library failed to build in the first place.
My question is, are there any flags that I could put in the makefile so that it will ignore the unimplemented headers or look for them once the library is linked?
This is a very old-fashioned way of writing a library (but I've worked on code written like that). It does not work well with shared libraries, as you are now discovering.
If you can change the library design
Your best bet is to rearrange the code so that the 'missing functions' are specified as callbacks in some initialization function. For example, you might currently have a header a bit like:
#ifndef HEADER_H_INCLUDED
#define HEADER_H_INCLUDED
extern int implemented_function(int);
extern int missing_function(int);
#endif
I'm assuming that your library contains implemented_function() but one of the functions in the library makes a call to missing_function(), which the user's application should provide.
You should consider restructuring your library along the lines of:
#ifndef HEADER_H_INCLUDED
#define HEADER_H_INCLUDED
typedef int (*IntegerFunction)(int);
extern int implemented_function(int);
extern IntegerFunction set_callback(IntegerFunction);
#endif
Your library code would have:
#include "header.h"
static IntegerFunction callback = 0;
IntegerFunction set_callback(IntegerFunction new_callback)
{
IntegerFunction old_callback = callback;
callback = new_callback;
return old_callback;
}
static int internal_function(int x)
{
if (callback == 0)
...major error...callback not set yet...
return (*callback)(x);
}
(or you can use return callback(x); instead; I use the old school notation for clarity.) Your application would then contain:
#include "header.h"
static int missing_function(int x);
int some_function(int y)
{
set_callback(missing_function);
return implemented_function(y);
}
An alternative to using a function like set_callback() is to pass the missing_function as a pointer to any function that ends up calling it. Whether that's reasonable depends on how widely used the missing function is.
If you can't change the library design
If that is simply not feasible, then you are going to have to find the platform-specific options for the tools that build shared libraries, so that the missing references do not cause build errors. The details vary widely between platforms; what works on Linux won't work on AIX, and vice versa. So you will need to clarify your question to specify where you need the solution to work.
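For example, with GCC on Linux, a shared library can be linked with undefined symbols by default, while on macOS you have to opt in explicitly. A hypothetical Makefile fragment (library and object names are made up):

UNAME := $(shell uname -s)

ifeq ($(UNAME),Darwin)
    # macOS: let symbols be resolved when the final executable is linked
    SHLIB_FLAGS = -dynamiclib -undefined dynamic_lookup
else
    # Linux: 'cc -shared' tolerates undefined symbols unless
    # -Wl,--no-undefined is passed
    SHLIB_FLAGS = -shared
endif

libfoo.so: foo.o
	$(CC) $(SHLIB_FLAGS) -o $@ $^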

Can we change the size of size_t in C?

Can we change the size of size_t in C?
No. But why would you even want to do it?
size_t is not a macro. It is a typedef for a suitable unsigned integer type.
size_t is defined in <stddef.h> (and other headers).
It is probably typedef unsigned long long size_t; (or unsigned long), and you really should not even think about changing it. The Standard Library uses the definition it was built with. If you change it – and you cannot rebuild the Standard Library – you'll get all kinds of errors, because your program uses a different size for size_t than the Standard Library does. You could no longer call malloc(), strncpy(), snprintf(), ...
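If the goal is merely to find out how wide size_t is on a given platform, inspect it instead of redefining it; a quick check:

#include <stdio.h>
#include <stdint.h>   /* SIZE_MAX */
#include <stddef.h>   /* size_t */

int main(void) {
    /* size_t's width is fixed by the implementation: inspect, don't redefine */
    printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
    printf("SIZE_MAX       = %zu\n", (size_t)SIZE_MAX);
    return 0;
}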
If you want to fork Linux or NetBSD, then "Yes"
Although you can redefine macros, this one is probably a typedef.
If you are defining an environment then it's perfectly reasonable to specify size_t as you like. You will then be responsible for all the C99 standard functions for which conforming code expects size_t.
So, it depends on your situation. If you are developing an application for an existing platform, then the answer is no.
But if you are defining an original environment with one or more compilers, then the answer is yes, but you have your work cut out for you. You will need an implementation of all the library routines with an API element of size_t which can be compiled with the rest of your code with the new size_t typedef. So, if you fork NetBSD or Linux, perhaps for an embedded system, then go for it. Otherwise, you may well find it "not worth the effort".
