I have an interface with which I want to be able to statically link modules. For example, I want to be able to call all functions (albeit in separate files) called FOO or that match a certain prototype, ultimately making a call into a function in the file without a header in the other files. Don't say that it is impossible, since I found a hack that can do it, but I want a non-hacked method. (The hack is to use nm to get the functions and their prototypes; then I can dynamically call the function.) Also, I know you can do this with dynamic linking; however, I want to statically link the files. Any ideas?
Put a table of all functions into each translation unit:
struct functions MOD1FUNCS[]={
{"FOO", foo},
{"BAR", bar},
{0, 0}
};
Then put a table into the main program listing all these tables:
struct functions* ALLFUNCS[]={
MOD1FUNCS,
MOD2FUNCS,
0
};
Then, at run time, search through the tables, and lookup the corresponding function pointer.
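For illustration, the runtime lookup might look like this (a minimal sketch; the exact layout of struct functions and the helper name find_function are assumptions, not part of the answer above):

#include <string.h>

/* assumed layout of the per-module tables shown above */
struct functions {
    const char *name;
    void (*fn)(void);
};

extern struct functions *ALLFUNCS[];   /* the table of tables, 0-terminated */

/* hypothetical helper: returns the function pointer for name, or 0 if absent */
static void (*find_function(const char *name))(void)
{
    for (int i = 0; ALLFUNCS[i] != 0; i++)
        for (struct functions *f = ALLFUNCS[i]; f->name != 0; f++)
            if (strcmp(f->name, name) == 0)
                return f->fn;
    return 0;
}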
This is somewhat common in writing test code. For example, you want to call all functions that start with test_. So you have a shell script that greps through all your .c files and pulls out the function names that match test_.*. Then that script generates a test.c file that contains a function that calls all the test functions.
The generated program would look something like:
int main() {
initTestCode();
testA();
testB();
testC();
}
Another way to do it would be to use some linker tricks. This is what the Linux kernel does for its initialization. Functions that are init code are marked with the qualifier __init. This is defined in linux/init.h as follows:
#define __init __section(.init.text) __cold notrace
This causes the linker to put that function in the section .init.text. The kernel will reclaim memory from that section after the system boots.
For calling the functions, each module will declare an initcall function with some other macros core_initcall(func), arch_initcall(func), et cetera (also defined in linux/init.h). These macros put a pointer to the function into a linker section called .initcall.
At boot-time, the kernel will "walk" through the .initcall section calling all of the pointers there. The code that walks through looks like this:
extern initcall_t __initcall_start[], __initcall_end[], __early_initcall_end[];
static void __init do_initcalls(void)
{
initcall_t *fn;
for (fn = __early_initcall_end; fn < __initcall_end; fn++)
do_one_initcall(*fn);
/* Make sure there is no pending stuff from the initcall sequence */
flush_scheduled_work();
}
The symbols __initcall_start, __initcall_end, etc. get defined in the linker script.
In general, the Linux kernel does some of the cleverest tricks with the GCC pre-processor, compiler and linker that are possible. It's always been a great reference for C tricks.
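Outside the kernel you can get a similar effect with GCC and the GNU linker, which automatically provide __start_SECTION and __stop_SECTION symbols for any section whose name is a valid C identifier. A minimal sketch, assuming GCC and GNU ld (the section name initfns and the macro REGISTER_INIT are invented here):

#include <stdio.h>

typedef void (*initcall_t)(void);

/* Drop a pointer to fn into the custom section "initfns".
   "used" keeps the compiler from discarding the seemingly unreferenced pointer. */
#define REGISTER_INIT(fn) \
    static initcall_t __initcall_##fn \
    __attribute__((used, section("initfns"))) = fn

static void moduleA_init(void) { puts("module A up"); }
REGISTER_INIT(moduleA_init);

/* provided automatically by the GNU linker for the "initfns" section */
extern initcall_t __start_initfns[], __stop_initfns[];

int main(void)
{
    for (initcall_t *fn = __start_initfns; fn < __stop_initfns; fn++)
        (*fn)();
    return 0;
}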
You really need static linking and, at the same time, to select all matching functions at runtime, right? Because the latter is a typical case for dynamic linking, I'd say.
You obviously need some mechanism to register the available functions. Dynamic linking would provide just that.
I really don't think you can do it. C isn't exactly capable of late-binding or the sort of introspection you seem to be requiring.
I don't really understand your question, though. Do you want the features of dynamically linked libraries while statically linking? Because that doesn't make sense to me: to link statically, you need to already have the binary in hand, which would make dynamic loading of functions a waste of time, even if you could easily do it.
Related
I have heard that, when you have just 1 (main.c) file (or use a "unity build"), there are benefits to be had if you make all your functions static.
I am kind of confused why this (allegedly) isn't optimized by default, since it's not probable that you will include main.c into another file where you will use one of its functions.
I would like to know the benefits and dangers of doing this before implementing it.
Example:
main.c
static int my_func(void){ /*stuff*/ }
int main(void) {
my_func();
return 0;
}
You have various chunks of wisdom in the comments, assembled here into a Community Wiki answer.
Jonathan Leffler noted:
The primary benefit of static functions is that the compiler can (and will) aggressively inline them when it knows there is no other code that can call the function. I've had error messages from four levels of inlined function calls (three qualifying “inlined from” lines) on occasion. It's staggering what a compiler will do!
and:
FWIW: my rule of thumb is that every function should be static until it is known that it will be called from code in another file. When it is known that it will be used elsewhere, it should be declared in a header file that is included both where the function is defined and where it is used. (Similar rules apply to file scope variables — aka 'global variables'; they should be static until there's a proven need for them elsewhere, and then they should be declared in a header too.)
The main() function is always called from the startup code, so it is never static. Any function defined in the same file as an unconditionally compiled main() function cannot be reused by other programs. (Library code might contain a conditionally compiled test program for the library function(s) defined in the source file — most of my library code has #ifdef TEST / …test program… / #endif at the end.)
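A minimal illustration of that #ifdef TEST convention (the file and function names are invented here):

/* mylib.c - a library function plus the optional test driver at the end */
int clamp(int v, int lo, int hi)    /* declared in mylib.h for use elsewhere */
{
    return v < lo ? lo : v > hi ? hi : v;
}

#ifdef TEST
#include <stdio.h>
int main(void)   /* build the test program with: gcc -DTEST mylib.c -o test_mylib */
{
    printf("%d %d %d\n", clamp(-5, 0, 10), clamp(5, 0, 10), clamp(50, 0, 10));
    return 0;
}
#endif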
Eric Postpischil generalized on that:
General rule: Anytime you can write code that says the use of something is limited, do it. Value will not be modified? Make it const. Name only needs to be used in a certain section? Declare it in the innermost enclosing scope. Name does not need to be linked externally? Make it static. Every limitation both shrinks the window for a bug to be created and may remove complications that interfere with optimization.
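A small sketch of that rule applied in C (the names are invented):

static const double TAX_RATE = 0.21;    /* file scope, const, not linked externally */

static double add_tax(double net)       /* static: only this file calls it */
{
    double gross = net * (1.0 + TAX_RATE);  /* declared in the innermost scope that needs it */
    return gross;
}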
In C, we can force the linker to put a specific function in a specific section from the source code, using something like the following example.
Here, the function my_function is tagged with the preprocessor macro PUT_IN_USER_SECTION in order to tell the linker to put it in the section .user_section.
#define PUT_IN_USER_SECTION __attribute__((__section__(".user_section"))) __attribute__ ((noinline))
double PUT_IN_USER_SECTION my_function(double a, double b)
{
// Function content
}
Now, what I'd like to know is: when we use standard functions (for example, the log functions from math.h) from GLIBC, MUSL, ... and we perform static linking, is it possible to put those functions in specific sections, and how would one do that?
There is a technique called overlaying that comes from a time when memory resources were limited. It is still used in embedded systems with restricted resources:
https://en.wikipedia.org/wiki/Overlay_(programming)
Diagram: https://de.wikipedia.org/wiki/Overlay_(Programmierung)#/media/Datei:Overlay_Programming.svg
An implementation of overlaying is shown in https://developer.arm.com/documentation/100066/0611/mapping-code-and-data-to-target-memory/manual-overlay-support/writing-an-overlay-manager-for-manually-placed-overlays; the load addresses are given in the scatter file.
https://sourceware.org/gdb/onlinedocs/gdb/How-Overlays-Work.html
For the addresses of functions in memory see https://ftp.gnu.org/pub/old-gnu/Manuals/ld-2.9.1/html_node/ld_22.html
https://www.keil.com/support/man/docs/armlink/armlink_pge1362066004071.htm
In my case I am writing a simple plugin system in C using dlfcn.h (linux). The plugins are compiled separately from the main program and result in a bunch of .so files.
There are certain functions that must be defined in the plugin in order for the plugin to be called properly by the main program. Ideally, I would like each plugin to include a .h file or something that somehow states what functions a valid plugin must have; if these functions are not defined in the plugin, I would like the plugin to fail compilation.
I don't think you can enforce that a function be defined at compile time. However, if you use the GCC toolchain, you can use the --undefined flag when linking to enforce that a symbol be defined.
ld --undefined foo
will treat foo as though it is an undefined symbol that must be defined for the linker to succeed.
You cannot do that.
It's common practice to define only two exported functions in a library opened by dlopen(): one to import functions into your plugin and one to export functions from your plugin.
A few lines of code are better than any explanation:
#include <stddef.h>   /* for NULL */

struct plugin_import {
    void (*draw)(float);
    void (*update)(float);
};

struct plugin_export {
    int (*get_version)(void);
    void (*set_version)(int);
};

/* the host's own implementations, handed over to the plugin */
extern void draw(float);
extern void update(float);

/* the plugin's two exported functions (resolved with dlsym() in a real dlopen() setup) */
extern void import(struct plugin_import *);
extern void export(struct plugin_export *);

int setup(void)
{
    struct plugin_export out = {0};
    struct plugin_import in;

    /* give the plugin our function pointers */
    in.draw = &draw, in.update = &update;
    import(&in);

    /* get our functions out of the plugin */
    export(&out);

    /* verify that all functions are defined */
    if (out.get_version == NULL || out.set_version == NULL)
        return 1;
    return 0;
}
This is very similar to the system Quake 2 used. You can look at the source here.
The only difference is that Quake 2 exported just a single function, which imports and exports the functions defined by the dynamic library at once.
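For illustration, that single-entry-point variant might look roughly like this inside the plugin, reusing the two structs defined above (the names plugin_entry, host, get_version and set_version are invented here, not Quake 2's actual API):

/* inside the plugin: one exported function handles both directions */
static struct plugin_import host;        /* the host's callbacks, kept for later use */

static int  get_version(void)  { return 2; }
static void set_version(int v) { (void)v; }

struct plugin_export plugin_entry(struct plugin_import in)
{
    struct plugin_export out;
    host = in;                        /* remember the host's draw/update */
    out.get_version = get_version;    /* hand our own functions back */
    out.set_version = set_version;
    return out;
}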
Well, after doing some research and asking a few people I know on IRC, I have found the following solution:
Since I am using GCC, I am able to use a linker script.
linker.script:
ASSERT(DEFINED(funcA), "must define funcA");
ASSERT(DEFINED(funcB), "must define funcB");
If either of those functions is not defined, a custom error message will be output when the program is linked.
(more info on linker script syntax can be found here: http://www.math.utah.edu/docs/info/ld_3.html)
When compiling simply add the linker script file after the source file:
gcc -o test main.c linker.script
Another possibility:
Something that I didn't think of (it seems a bit obvious now) that was brought to my attention: you can create a small program that loads your plugin and checks that you have valid function pointers to all of the functions that you want your plugin to have. Then incorporate this into your build system, be it a makefile or a script or whatever. This has the benefit that you are no longer limited to a particular compiler to make this work, and you can do more sophisticated checks for other things as well. The only downside is that you have a little more work to do to get it set up.
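A rough sketch of such a checker, assuming the required functions are looked up by name with dlsym() (the symbol names here are just examples):

#include <dlfcn.h>
#include <stdio.h>

/* usage: ./check_plugin plugin.so  (link the checker with -ldl) */
int main(int argc, char **argv)
{
    const char *required[] = { "draw", "update", "get_version", 0 };
    int missing = 0;
    void *handle;

    if (argc < 2) {
        fprintf(stderr, "usage: %s plugin.so\n", argv[0]);
        return 1;
    }
    handle = dlopen(argv[1], RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    for (int i = 0; required[i]; i++)
        if (!dlsym(handle, required[i])) {
            fprintf(stderr, "missing symbol: %s\n", required[i]);
            missing = 1;
        }
    dlclose(handle);
    return missing;
}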
I am writing a C (shared) library. It started out as a single translation unit, in which I could define a couple of static global variables, to be hidden from external modules.
Now that the library has grown, I want to break the module into a couple of smaller source files. The problem is that now I have two options for the mentioned globals:
Have private copies at each source file and somehow sync their values via function calls - this will get very ugly very fast.
Remove the static definition, so the variables are shared across all translation units using extern - but now application code that is linked against the library can access these globals, if the required declaration is made there.
So, is there a neat way of making a private global variable shared across multiple, specific translation units?
You want the visibility attribute extension of GCC.
Practically, something like:
#define MODULE_VISIBILITY __attribute__ ((visibility ("hidden")))
#define PUBLIC_VISIBILITY __attribute__ ((visibility ("default")))
(You probably want to #ifdef the above macros, using some configuration tricks à la autoconf and other autotools; on other systems you would just have empty definitions like #define PUBLIC_VISIBILITY /*empty*/ etc...)
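For instance, the guarded version might look like this (HAVE_GCC_VISIBILITY is just an example configuration macro that your build system would define):

#ifdef HAVE_GCC_VISIBILITY
#define MODULE_VISIBILITY __attribute__ ((visibility ("hidden")))
#define PUBLIC_VISIBILITY __attribute__ ((visibility ("default")))
#else
#define MODULE_VISIBILITY /* empty */
#define PUBLIC_VISIBILITY /* empty */
#endif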
Then, declare a variable:
int module_var MODULE_VISIBILITY;
or a function
void module_function (int) MODULE_VISIBILITY;
Then you can use module_var or call module_function inside your shared library, but not outside.
See also the -fvisibility code generation option of GCC.
BTW, you could also compile your whole library with -Dsomeglobal=alongname3419a6 and use someglobal as usual; to really find it, your user would need to pass the same preprocessor definition to the compiler, and you can make the name alongname3419a6 random and obscure enough that a collision is unlikely.
PS. This visibility is specific to GCC (and probably to ELF shared libraries such as those on Linux). It won't work without GCC or outside of shared libraries, so it is quite Linux-specific (even if a few other systems, perhaps Solaris with GCC, have it). Some other compilers (e.g. Clang from LLVM) may also support it on Linux for shared libraries (not static ones). Actually, the real hiding (from code outside the several compilation units of a single shared library) is done mostly by the linker, because ELF shared libraries permit that.
The easiest ("old-school") solution is to simply not declare the variable in the intended public header.
Split your libraries header into "header.h" and "header-internal.h", and declare internal stuff in the latter one.
Of course, you should also take care to protect your library-global variable's name so that it doesn't collide with user code; presumably you already have a prefix that you use for the functions for this purpose.
You can also wrap the variable(s) in a struct, to make it cleaner since then only one actual symbol is globally visible.
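For example, the split might look like this (the file names and the mylib_ prefix are illustrative):

/* header.h - the public API shipped to users */
int mylib_process(int x);

/* header-internal.h - included only by the library's own .c files */
struct mylib_state {
    int counter;
    int flags;
};
extern struct mylib_state mylib_state_;   /* defined exactly once, in one library .c file */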
You can obfuscate things with disguised structs, if you really want to hide the information as well as possible. E.g., in a header file:
struct data_s {
void *v;
};
And somewhere in your source:
struct data_s data;
struct gbs {
// declare all your globals here
} gbss;
and then:
data.v = &gbss;
You can then access all of the globals through that pointer, e.g. ((struct gbs *)data.v)->some_global.
I know that this will not be what you literally intended, but you can leave the global variables static and divide them among multiple source files.
Move the functions that write to a given static variable into the same source file, and declare them static as well.
Declare functions that read the static variable so that other source files of the same module can read its value.
In a way, this makes it less global. If possible, the best logic for breaking big files into smaller ones is to base that decision on the data.
If that is not possible, then you can put all the global variables as statics into one source file and access them from the other source files of the module through functions, making it official, so if someone is manipulating your global variables, at least you know how.
But then it is probably better to use @unwind's method.
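A small sketch of that one-source-file approach (the names are invented):

/* globals.c - the only file that touches the variable directly */
static int current_mode;

int  get_mode(void)      { return current_mode; }
void set_mode(int mode)  { current_mode = mode; }

/* globals.h - included by the module's other source files */
int  get_mode(void);
void set_mode(int mode);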
I'm a beginner to C, but I've had a bit of experience with some other programming languages like Ruby and Python. I would very much like to create some of my own functions in C that I could use in any of my programs that just make life easier; however, I'm a little bit confused about how to do this.
From what I understand, the first part of this process is to create a header file that contains all of your prototypes, and I understand that. However, from what I understand it is frowned upon to include anything other than declarations in your header files, so would you also need to create a .c file that contained the actual code and then #include that in all your programs along with the header file? But if so, why would you need a header file in the first place, since defining a function also declares it?
Finally, what should you put in the main() function of your header file? Do you just leave it blank, or do you not include it?
Thanks!
The declaration of a function lets the compiler know that at link time such a function will be available. The definition of the function provides that implementation, and additionally it also serves as the declaration. There is no harm in having multiple declarations, but only one implementation can be provided. Also, at least one declaration (or the only implementation) must come before any use of the function - this alone makes forward declarations necessary in cases where two functions call one another (both cannot be before the other).
So, if you have the implementation:
int foo(int a, int b) {
return a * b;
}
The corresponding declaration is simply:
int foo(int a, int b);
(The argument names do not matter in the declaration, i.e., they can be omitted or different than in the implementation. In fact you could declare only int foo(); and it would work for the above function, but this is mainly a legacy thing and not recommended. Note that to declare a function that takes no arguments, put void in the argument list, e.g., int bar(void);)
There are a number of reasons why you would want to have separate headers with only the declaration:
The implementation may be in a separate file, which allows for organisation of code into manageable pieces, and may be compiled by itself and need not be recompiled unless that file has changed - in large projects where the total compilation time can be an hour it would be absurd to re-compile everything for a small change.
The implementation source may not be available, e.g., in case of a closed-source proprietary library.
The implementation may be in a different language with a compatible calling convention.
For practical details on how to write code in multiple files and how to use libraries, please consult a book or tutorial on C programming. As for main, you need not declare it in a header unless you are specifically calling main from another function - the convention of C programs is to call main as int main(int, char**) at the start of execution.
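As a minimal illustration, such a split might look like this (the file and function names are invented):

/* foo.h - the declaration, included wherever foo() is used */
int foo(int a, int b);

/* foo.c - the implementation, compiled once */
#include "foo.h"
int foo(int a, int b) { return a * b; }

/* main.c - a user of foo() */
#include <stdio.h>
#include "foo.h"
int main(void)
{
    printf("%d\n", foo(6, 7));
    return 0;
}

/* build: gcc -c foo.c && gcc -c main.c && gcc foo.o main.o -o app */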
When compiling, each .c file (or .cpp file) is first compiled to its own object file.
If one object file uses functions from another,
at that point it just knows "there is something outside named xyz".
Then the linker puts them together into one file and rewrites the parts of each file
that use functions from other files,
so that they actually know where to find the functions they use.
What happens if you put code in a .h file:
At compilation time, each header included in a .c file is integrated into that .c file.
If you have the code for xyz in a header and you include it in more than one .c file,
each of those compiled .c files will contain its own xyz. Then the linker will be confused...
So function code has to be in its own .c file.
Why use a header file at all?
Because if you call xyz in your code, how should the compiler know
whether this is a function from another .c file (and which parameters it takes...)
or an error because xyz does not exist?
The reason for header files in C is for when you need the same code in multiple source files. So if you are just repeating the same code within one file, then yes, it would be easier to just use a function. Also, for header files, yes, you would need to include a .c file with the actual computations.