Program organization with device drivers in C

Say I have two device drivers and I want them to share the same interface so a caller doesn't know which driver it is talking to exactly. How would I organize this in C? I have thought of a couple of ways:
First: Create a pair of .c/.h files for each driver with the same interface and create a switch in the caller:
//main.c:
#ifdef USING_DRIVER_1
#include "driver_1.h"
#else
#include "driver_2.h"
#endif // USING_DRIVER_1
Second: Use a single header and create a file-long switch in the drivers' source file like so:
//driver_1.c:
#ifdef USING_DRIVER_1
#include "driver.h"
bool func(uint32_t var)
{
foo(var);
}
#endif // USING_DRIVER_1
//driver_2.c:
#ifndef USING_DRIVER_1
#include "driver.h"
bool func(uint32_t var)
{
bar(var);
}
#endif // !USING_DRIVER_1
Third: This one is a lot like the second one but instead of using switch statements in files themselves, a specific driver is chosen in the makefile or IDE equivalent:
#makefile:
SRC = main.c
#SRC += driver_1.c
SRC += driver_2.c
I'm sure one of these is superior to the others, and there are probably some I haven't thought of. How is it done in practice?
EDIT:
Details about my particular system: my target is an ARM microcontroller and my dev. environment is an IDE. Device drivers are for two different revisions and will never be used at the same time so each build should contain only one version. Devices themselves are modems operating via AT commands.

All three variants are actually useful. Which to choose depends on what you actually need:
Selecting the driver from the caller would add both drivers to the code. That only makes sense if you switch drivers at run-time. Then it would be the (only) way to go. Use e.g. function pointers or two identical const structs which provide the interface (function pointer and possibly other data).
A global switch is plain ugly, and a C switch statement is not possible across functions and declarations anyway. Better would be conditional compilation using #if .. #elif .. #endif. That makes sense if the two drivers have only minor differences, e.g. different SPI interfaces (SPI1 vs. SPI2 ...). Then this is the way to go. With some effort in the build tool you can even use this for case 1 (one file for two different drivers, but that is not my recommendation).
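For that minor-difference case, conditional compilation inside a single driver file might look like this minimal sketch (the USE_SPI2 switch and the base addresses are assumptions for illustration only, not from the question):
// driver.c - one source file, small compile-time differences (sketch)
#include <stdint.h>
#include <stdbool.h>

#if defined(USE_SPI2)
#  define DRIVER_SPI_BASE 0x40003800u   /* hypothetical SPI2 base address */
#else
#  define DRIVER_SPI_BASE 0x40013000u   /* hypothetical SPI1 base address */
#endif

bool func(uint32_t var)
{
    /* ... talk to the SPI peripheral at DRIVER_SPI_BASE ... */
    (void)var;
    return true;
}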
If both drivers are substantially different in their implementation but provide the same interface, take the third approach, and use a single header for both drivers (see below).
Note that for all but the first approach, both drivers have to provide an identical interface to the application. The first approach actually allows for differences, but that would require the user code to treat them differently, and that's likely not what you want.
Using a single header file for both drivers (e.g.: "spi_memory.h" and "spi_flash.c" vs. "spi_eeprom.c") does ensure the application does not see an actual difference - as long as the drivers also behave identically, of course. Minor differences can be caught by variables in the interface (e.g. extern size_t memory_size;) or functions (the better approach).
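A sketch of that single-header layout in the modem context from the question (all file and function names here are made up for illustration):
// modem.h - the one interface both driver revisions implement
#include <stdbool.h>

bool modem_init(void);
bool modem_send_at(const char *cmd);

// modem_rev_a.c - only this file OR modem_rev_b.c goes into a given build
#include "modem.h"
bool modem_init(void)               { /* revision-A specific setup */ return true; }
bool modem_send_at(const char *cmd) { (void)cmd; /* revision-A AT handling */ return true; }

// modem_rev_b.c
#include "modem.h"
bool modem_init(void)               { /* revision-B specific setup */ return true; }
bool modem_send_at(const char *cmd) { (void)cmd; /* revision-B AT handling */ return true; }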

I recommend using pointers to functions. For example:
#include <stdint.h>
#include <stdbool.h>

struct driver_api {
    bool (*pFunc)(uint32_t);
} DriverApi;

void initializeApi(struct driver_api *pApi);

// driver1.c:
void initializeApi(struct driver_api *pApi)
{
    pApi->pFunc = foo;
}

// driver2.c:
void initializeApi(struct driver_api *pApi)
{
    pApi->pFunc = bar;
}
Another thing you might consider is removing the #ifndef USING_DRIVER_1 checks from your source files. Use a build system (e.g. make) and specify which source files should be included in the project. Then, based on some compile time option (such as a command line argument) include driver1.c or driver2.c, but never both.
The "advantage" of the pointers is that you can compile both APIs and then decide at runtime (even changing it mid run, for whatever reason).

Related

header file to binary file or intel HEX file

I have a header file. It contains the values of various constants (as preprocessor macro definitions, i.e. #defines) that I use in my embedded systems project (the IDE is Keil MicroVision, the microcontroller is an STM32 and the programming language is C).
I want to convert that header file to a hex file, so that I can separately write it to flash memory of the microcontroller.
Hence, for a single general code base, I wish to change only the constants and run it on specific hardware (around 10-15 different types).
My approach: I am trying to find a way to convert the header file to a binary file first, and then use some utility(srecorder from sourceforge) to convert the .bin to .hex
Is this approach right?
Or can anyone please suggest an alternate and effective way to achieve my goal?
Thanks
Preprocessor defines aren't stored in data flash but in program flash (.text), together with the code using them, baked into the machine code. They aren't stored at any specific or fixed address, so your request doesn't make sense - you simply cannot program them to flash separately.
In order to achieve that you need const uint32_t or similar variables stored at fixed memory locations. These need to be declared in a .c file, though it is sometimes acceptable to expose read-only variables "globally" with extern references from a corresponding .h file.
To allocate a variable at a specific address you typically need to change the linker script and use non-standard compiler extensions.
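As a minimal sketch of that idea, assuming GCC for ARM (the section name .app_config and the symbol names are made up; the matching placement at a fixed address still has to be done in the linker script):
// config.c - constants kept in their own flash section (sketch)
#include <stdint.h>

/* The linker script must place the ".app_config" section at the desired address. */
__attribute__((section(".app_config")))
const uint32_t app_config[4] = { 111u, 222u, 333u, 444u };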
Understanding where you're coming from, I would not recommend going down the road you suggested. Let me show you two ways to solve your problem which I believe are much better.
I'll assume your constants are macros, but it should not make a difference if they are enums or something else.
First option is to use the preprocessor to select your platform. Something like:
#if defined(PLATFORM_A)
#define CONSTANT_X 111
#define CONSTANT_Y 222
#elif defined(PLATFORM_B)
#define CONSTANT_X 112
#define CONSTANT_Y 221
#else
#error Need to define a platform
#endif
Of course, if you have a lot of constants, maybe keeping each platform in its own header would be better (easier to read).
#if defined(PLATFORM_A)
#include "platform_a_constants.h"
#elif defined(PLATFORM_B)
#include "platform_b_constants.h"
#else
#error Need to define a platform
#endif
This option is the only way if you use these constants to, say, dimension an array.
I don't have experience with Keil IDEs, so I'm not sure how to configure them to build different binaries with different macro definitions, but it's a common C thing; there should be a way to do it.
Another option would be to make these globals. In that case, you would have a single header like:
extern int const constant_x;
extern int const constant_y;
And then a separate source file for each platform:
// platform_a.c
int const constant_x = 111;
int const constant_y = 222;
// platform_b.c
int const constant_x = 112;
int const constant_y = 221;
Again, how to link in only the source file that you need for a particular target depends on your build system and I'm not familiar with Keil IDE, but it should not be hard to do.
In general, this option is a little easier to maintain, as it's just a matter of setting up the linker; you don't touch the source code in any way (whereas defining a preprocessor constant means messing with the source code in potentially ugly ways).
Also, with this option it's simple to turn a constant into a runtime parameter, which can be changed during the execution of the program, if need be - just drop the const from the declaration and definitions.

Why do we need feature test macros?

By reading What does -D_XOPEN_SOURCE do/mean?, I understand how to use feature test macros.
But I still don't understand why we need them. I mean, can't we just enable all available features? Then the documentation would simply say: this function is only available on Mac/BSD, that function is only available on Linux; if you use it, your program will only run on that system.
So why do we need feature test macros in the first place?
why do we need it, I mean, can we just enable all features available?
Imagine some company has written perfectly fine super portable code roughly like the following:
#include <stdlib.h>

struct someone_s { char name[20]; };

/// @brief grants Plant To someone
int grantpt(int plant_no, struct someone_s someone) {
    // some super plant granting algorithm here
    return 0;
}

int main() {
    // some program here
    struct someone_s kamil = { "Kamil" };
    return grantpt(20, kamil);
}
That program is completely fine, everything works, and the program is very much standard C, thus it should be portable anywhere. Now imagine for a moment that _XOPEN_SOURCE did not exist! A customer receives the sources of that program and tries to compile and run it on his bleeding-edge Unix computer with a certified C compiler on a certified POSIX system, and he receives an error that the company has to fix, and in turn has to pay for:
/tmp/1.c:7:5: error: conflicting types for ‘grantpt’; have ‘int(int, struct someone_s)’
    7 | int grantpt(int plant_no, struct someone_s someone) {
      |     ^~~~~~~
In file included from /tmp/1.c:2:
/usr/include/stdlib.h:977:12: note: previous declaration of ‘grantpt’ with type ‘int(int)’
  977 | extern int grantpt (int __fd) __THROW;
      |            ^~~~~~~
Looks like a completely random name picked for a function is already taken in POSIX - grantpt().
When introducing new symbols that are not in reserved space, standards like POSIX can't just "add them" and expect the world not to protest - conflicting definitions can and will and do break valid programs. To battle the issue, feature test macros were introduced. When a program does #define _XOPEN_SOURCE 500 it means that it is prepared for that version of the POSIX standard and there are no conflicts between the code and the symbols introduced by POSIX in that version.
Feature test macros are not just "my program wants to use these functions", it is most importantly "my program has no conflicts with these functions", which is way more important, so that existing programs continue to run.
The theoretical reason why we have feature selection macros in C is to get the C library out of your way. Suppose, hypothetically, you want to use the name getline for a function in your program. The C standard says you can do that. But some operating systems provide a C library function called getline, as an extension. Its declaration will probably clash with your definition. With feature selection macros, you can, in principle, tell those OSes' stdio.h not to declare their getline so you can use yours.
In practice these macros are too coarse-grained to be useful, the only ones that get used are the ones that mean "give me everything you got", and documentation writers do exactly what you speculate they could do.
Newer programming languages (Ada, C++, Modula-2, etc.) have a concept of "modules" (sometimes also called "namespaces") which allow the programmer to give an exact list of what they want from the runtime library; this works much better.
Why do we need feature test macros?
You use feature test macros to determine if the implementation supports certain features or if you need to select an alternative way to implement whatever it is you're implementing.
One example is the set of *_s functions, like strcpy_s:
errno_t strcpy_s(char *restrict dest, rsize_t destsz, const char *restrict src);
// put this first to signal that you actually want the LIB_EXT1 functions
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
Then in your code:
#ifdef __STDC_LIB_EXT1__
    errno_t err = strcpy_s(...); // The implementation supports it so you can use it
#else
    // Not supported, use an alternative, like your own implementation
    // or let it fail to compile.
#endif
can we just enable all features available?
When it comes to why you need to tell the implementation that you actually want a certain set of features (instead of it just including them all automatically) I have no better answer than that it could possibly make the programs slower to compile and could possibly also make it produce bigger executables than necessary.
Similarly, the implementation does not link with every library it has available, but only the most basic ones. You have to tell it what you need.
In theory, you could create a header file which defines all the possible macros that you've found that will enable a certain set of features.
#define _XOPEN_SOURCE 700
#define __STDC_WANT_LIB_EXT1__ 1
...
But as you see with _XOPEN_SOURCE, there are different releases, and you can't enable them all at the same time, you need to select one.

'Reverse' a collection of C preprocessor macros easily

I have a lot of preprocessor macro definitions, like this:
#define FOO 1
#define BAR 2
#define BAZ 3
In the real application, each definition corresponds to an instruction in an interpreter virtual machine. The macros are also not sequential in numbering to leave space for future instructions; there may be a #define FOO 41, then the next one is #define BAR 64.
I'm now working on a debugger for this virtual machine, and need to effectively 'reverse' these preprocessor macros. In other words, I need a function which takes the number and returns the macro name, e.g. an input of 2 returns "BAR".
Of course, I could create a function using a switch myself:
const char* instruction_by_id(int id) {
    switch (id) {
        case FOO:
            return "FOO";
        case BAR:
            return "BAR";
        case BAZ:
            return "BAZ";
        default:
            return "???";
    }
}
However, this will be a nightmare to maintain, since renaming, removing or adding instructions will require this function to be modified too.
Is there another macro which I can use to create a function like this for me, or is there some other approach? If not, is it possible to create a macro to perform this task?
I'm using gcc 6.3 on Windows 10.
You have the wrong approach. Read SICP if you have not read it.
I have a lot of preprocessor macro definitions, like this:
#define FOO 1
#define BAR 2
#define BAZ 3
Remember that C or C++ code can be generated, and it is quite easy to instruct your build automation tool to generate some particular C file (with GNU make or ninja you just add some rule or recipe).
For example, you could use some different preprocessor (like GPP or m4), or some script - e.g. in awk or Python or Guile, etc. - or write your own program (in C, C++, OCaml, etc.) to generate the header file containing these #define-s. And another script or program (or the same one, invoked differently) could generate the C code of instruction_by_id.
Such basic metaprogramming techniques (of generating some or several C files from something higher level but specific) have been used since at least the 1980s (e.g. with yacc or RPCGEN). The C preprocessor facilitates that with its #include directive (since you can even include lines inside some function body, etc...). Actually, the idea that code is data (and proof) and data is code is even older (Church-Turing thesis, Curry-Howard correspondence, Halting problem). The Gödel, Escher, Bach book is very entertaining....
For example, you could decide to have a textual file opcodes.txt (or even some sqlite database containing stuff....) like
# ignore lines starting with a hash sign
FOO 1
BAR 2
and have two small awk or Python scripts (or two tiny C specialized programs), one generating the #define-s (into opcode-defines.h) and another generating the body of instruction_by_id (into opcode-instr.inc). Then you need to adapt your Makefile to generate these, and put #include "opcode-defines.h" inside some global header, and have
const char* instruction_by_id(int id) {
    switch (id) {
#include "opcode-instr.inc"
        default: return "???";
    }
}
this will a nightmare to maintain,
Not so with such a metaprogramming approach. You'll just maintain opcodes.txt and the scripts using it, and you express a given "knowledge element" (the relation of FOO to 1) only once (in a single line of opcodes.txt). Of course you need to document that (at the very least, with comments in your Makefile).
Metaprogramming from some higher-level, declarative formalization is a very powerful paradigm. In France, J. Pitrat pioneered it from the 1960s onward (and, while retired, he writes an interesting blog today). In the US, J. McCarthy and the Lisp community did likewise.
For an entertaining talk, see Liam Proven FOSDEM 2018 talk on The circuit less traveled
Large software projects use that metaprogramming approach quite often. For example, the GCC compiler has about a dozen C++ code generators (in total, they emit more than a million lines of C++).
Another way of looking at such an approach is the idea of domain-specific languages that could be compiled to C. If you use an operating system providing dynamic loading, you can even write a program emitting C code, forking a process to compile it into some plugin, then loading that plugin (on POSIX or Linux, with dlopen). Interestingly, computers are now fast enough to enable such an approach in an interactive application (in some sort of REPL): you can emit a C file of a few thousand lines, compile it into some .so shared object file, and dlopen that, in a fraction of second. You could also use JIT-compiling libraries like GCCJIT or LLVM to generate code at runtime. You could embed an interpreter (like Lua or Guile) into your program.
BTW, such metaprogramming approaches are one of the reasons why basic compilation techniques should be known by most developers (and not only people in the compiler business); another reason is that parsing problems are very common. So read the Dragon Book.
Be aware of Greenspun's tenth rule. It is much more than a joke, actually a profound truth about large software.
In a similar case I've resorted to defining a text file format that defines the instructions, and writing a program to read this file and write out the C source of the actual instruction definitions and the C source of functions like your instruction_by_id(). This way you only need to maintain the text file.
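As an illustration of that, the generator could itself be a small C program; the opcodes.txt format and the output file names below follow the examples above, but everything here is an assumption, not code from the original posts:
// gen_opcodes.c - reads opcodes.txt, emits opcode-defines.h and opcode-instr.inc (sketch)
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in    = fopen("opcodes.txt", "r");
    FILE *defs  = fopen("opcode-defines.h", "w");
    FILE *instr = fopen("opcode-instr.inc", "w");
    if (!in || !defs || !instr) { perror("fopen"); return EXIT_FAILURE; }

    char line[256], name[64];
    int value;
    while (fgets(line, sizeof line, in)) {
        if (line[0] == '#' || line[0] == '\n')          /* skip comments and blank lines */
            continue;
        if (sscanf(line, "%63s %d", name, &value) == 2) {
            fprintf(defs, "#define %s %d\n", name, value);
            fprintf(instr, "case %s: return \"%s\";\n", name, name);
        }
    }
    fclose(in); fclose(defs); fclose(instr);
    return EXIT_SUCCESS;
}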
As awesome as general code generation is, I’m surprised that nobody mentioned that (if you relax your problem definition just a bit) the C preprocessor is perfectly capable of generating the necessary code, using a technique called X macros. In fact every simple bytecode VM in C that I’ve seen uses this approach.
The technique works as follows. First, there is a file (call it insns.h) containing the authoritative list of instructions,
INSN(FOO, 1)
INSN(BAR, 2)
INSN(BAZ, 3)
or alternatively a macro in some other header containing the same,
#define INSNS \
    INSN(FOO, 1) \
    INSN(BAR, 2) \
    INSN(BAZ, 3)
whichever is more convenient for you. (I’ll use the first option in the following.) Note that INSN is not defined anywhere. (Traditionally it would be called X, thus the name of the technique.) Wherever you want to loop over your instructions, define INSN to generate the code you want, include insns.h, then undefine INSN again.
In your disassembler, write
const char *instruction_by_id(int id) {
    switch (id) {
#define INSN(NAME, VALUE) \
        case NAME: return #NAME;
#include "insns.h" /* or just INSNS if you use a macro */
#undef INSN
        default: return "???";
    }
}
using the prefix stringification operator # to turn names-as-identifiers into names-as-string-literals.
You obviously can’t define the constants this way, because macros cannot define other macros in the C preprocessor. However, if you don’t insist that the instruction constants be preprocessor constants, there’s a different perfectly serviceable constant facility in the C language: enumerations. Whether or not you use an enumerated type, the enumerators defined inside it are regular integer constants from the point of view of the compiler (though not the preprocessor—you cannot use #ifdef with them, for example). So, using an anonymous enumeration type, define your constants like this:
enum {
#define INSN(NAME, VALUE) \
    NAME = VALUE,
#include "insns.h" /* or just INSNS if you use a macro */
#undef INSN
    NINSNS /* C89 doesn’t allow trailing commas in enumerations (but C99+ does), and you may find this constant useful in any case */
};
If you want to statically initialize an array indexed by your bytecodes, you’ll have to use C99 designated initializers {[FOO] = foovalue, [BAR] = barvalue, /* ... */} whether or not you use X macros. However, if you don’t insist on assigning custom codes to your instructions, you can eliminate VALUE from the above and have the enumeration assign consecutive codes automatically, and then the array can be simply initialized in order, {foovalue, barvalue, /* ... */}. As a bonus, NINSNS above then becomes equal to the number of the instructions and the size of any such array, which is why I called it that.
There are more tricks you can use here. For example, if some instructions have variants for several data types, the instruction list X macro can call the type list X macro to generate the variants automatically. (The somewhat ugly second option of storing the X macro list in a large macro and not an include file may be more handy here.) The INSN macro may take additional arguments such as the mode name, which would be ignored in the code list but used to call the appropriate decoding routine in the disassembler. You can use the token pasting operator ## to add prefixes to the names of the constants, as in INSN_ ## NAME to generate INSN_FOO, INSN_BAR, etc. And so on.
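For instance, a statically initialized name table driven by the same insns.h list and C99 designated initializers might look like the following sketch (built on the examples above, not code from the answer):
/* Name lookup table indexed by opcode, reusing the insns.h X macro list (sketch). */
static const char *insn_names[] = {
#define INSN(NAME, VALUE) \
    [NAME] = #NAME,
#include "insns.h"
#undef INSN
};

/* Unlisted opcodes stay NULL, so check before returning. */
const char *instruction_by_id(int id) {
    if (id >= 0 && id < (int)(sizeof insn_names / sizeof insn_names[0]) && insn_names[id])
        return insn_names[id];
    return "???";
}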

Removing functions included from a header from scope of the next files

In my project we heavily use a C header which provides an API to communicate with an external piece of software. Long story short, in our project bugs show up most often in the calls to the functions defined in those headers (it is old and ugly legacy code).
I would like to implement an indirection on the calling of those functions, so I could include some profiling before calling the actual implementation.
Because I'm not the only person working on this project, I would like to make those wrappers in a such way that if someone uses the original implementations directly it should cause a compile error.
If those headers were C++ sources, I would be able to simply make a namespace, wrap the included files in it, and implement my functions using it (the other developers would still be able to use the original implementation via the :: operator, but just not being able to call it directly is enough encapsulation for me). However, the headers are C sources (which I have to wrap in an extern "C" directive to include), so namespaces won't help me AFAIK.
I tried to play around with defines, but with no luck, like this:
#define my_func api_func
#define api_func NULL
What I wanted with the above code is to make my_func to be translated to api_func during the preprocessing, while making a direct call to api_func give a compile error, but that won't work because it will actually make my_func to be translated to NULL too.
So, basically, I would like to make a wrapper, and make sure the only way to access the API is through this wrapper (unless the other developers make some workaround, but this is inevitable).
Please note that I need to wrap hundreds of functions, which show up spread in the whole code several times.
My wrapper will necessarily have to include those C headers, but I would like their declarations to go out of scope outside my wrapper's file, and be unavailable to every other file that includes my wrapper - but I guess this is not possible in C/C++.
You have several options, none of them wonderful.
if you have the sources of the legacy software, so that you can recompile it, you can just change the names of the API functions to make room for the wrapper functions. If you additionally make the original functions static and put the wrappers in the same source files, then you can ensure that the originals are called only via the wrappers. Example:
static int api_func_real(int arg);

int api_func(int arg) {
    // ... instrumentation ...
    int result = api_func_real(arg);
    // ... instrumentation ...
    return result;
}

static int api_func_real(int arg) {
    // ...
}
The preprocessor can help you with that, but I hesitate to recommend specifics without any details to work with.
if you do not have sources for the legacy software, or if you are otherwise unwilling to modify it, then you need to make all the callers call your wrappers instead of the original functions. In this case you can modify the headers, or include an additional header before them that uses #define to change each of the original function names. That header must not be included in the source files containing the API function implementations, nor in those providing the wrapper function implementations. Each define would be of the form:
#define api_func api_func_wrapper
You would then implement the various api_func_wrapper() functions.
Among the ways those cases differ is that if you change the legacy function names, then internal calls among those functions will go through the wrappers bearing the original names (unless you change the calls, too), but if you implement wrappers with new names then they will be used only when called explicitly, which will not happen for internal calls within the legacy code (unless, again, you modify those calls).
You can do something like
[your wrapper's include file]
int origFunc1 (int x);
int origFunc2 (int x, int y);

#ifndef WRAPPER_IMPL
int wrappedFunc1(int x);
int wrappedFunc2(int x, int y);
#define origFunc1 wrappedFunc1
#define origFunc2 wrappedFunc2
#endif

[your wrapper implementation]
#include <stdio.h>
#define WRAPPER_IMPL
#include "wrapper.h"

int wrappedFunc1 (int x) {
    printf("Wrapper1 called\n");
    return origFunc1(x);
}
Your wrapper's C file obviously needs to #define WRAPPER_IMPL before including the header.
That is neither nice nor clean (and if someone wants to cheat, they could simply define WRAPPER_IMPL themselves), but at least it is some way to go.
There are two ways to wrap or override C functions in Linux:
Using LD_PRELOAD:
There is a shell environment variable in Linux called LD_PRELOAD,
which can be set to a path of a shared library,
and that library will be loaded before any other library (including glibc).
Using ‘ld --wrap=symbol‘:
This can be used to use a wrapper function for symbol.
Any further reference to symbol will be resolved to the wrapper function.
a complete writeup can be found at:
http://samanbarghi.com/blog/2014/09/05/how-to-wrap-a-system-call-libc-function-in-linux/
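As a sketch of the second mechanism (GNU ld's --wrap), using the placeholder name api_func from this thread; you would link with something like gcc ... -Wl,--wrap=api_func:
/* wrap_api.c - wrapper via GNU ld's --wrap=api_func (sketch) */
#include <stdio.h>

int __real_api_func(int arg);   /* the linker resolves this to the original api_func */

int __wrap_api_func(int arg)    /* every ordinary call to api_func lands here */
{
    printf("api_func(%d) called\n", arg);   /* profiling / logging goes here */
    return __real_api_func(arg);
}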

How can I share an array between the C files of a library, without the array being visible to the outside? [duplicate]

I am writing a C (shared) library. It started out as a single translation unit, in which I could define a couple of static global variables, to be hidden from external modules.
Now that the library has grown, I want to break the module into a couple of smaller source files. The problem is that now I have two options for the mentioned globals:
Have private copies at each source file and somehow sync their values via function calls - this will get very ugly very fast.
Remove the static definition, so the variables are shared across all translation units using extern - but now application code that is linked against the library can access these globals, if the required declaration is made there.
So, is there a neat way for making private global variable shared across multiple, specific translation units?
You want the visibility attribute extension of GCC.
Practically, something like:
#define MODULE_VISIBILITY __attribute__ ((visibility ("hidden")))
#define PUBLIC_VISIBILITY __attribute__ ((visibility ("default")))
(You probably want to #ifdef the above macros, using some configuration tricks à la autoconf and other autotools; on other systems you would just have empty definitions like #define PUBLIC_VISIBILITY /*empty*/ etc...)
Then, declare a variable:
int module_var MODULE_VISIBILITY;
or a function
void module_function (int) MODULE_VISIBILITY;
Then you can use module_var or call module_function inside your shared library, but not outside.
See also the -fvisibility code generation option of GCC.
BTW, you could also compile your whole library with -Dsomeglobal=alongname3419a6 and use someglobal as usual; to really find it, your user would need to pass the same preprocessor definition to the compiler, and you can make the name alongname3419a6 random enough that a collision is improbable.
PS. This visibility attribute is specific to GCC (and to ELF shared libraries such as those on Linux). It won't work without GCC or outside of shared libraries, so it is quite Linux-specific (even if a few other systems, perhaps Solaris with GCC, have it). Some other compilers (e.g. clang from LLVM) probably also support it on Linux for shared libraries (not static ones). Actually, the real hiding (with respect to the several compilation units of a single shared library) is done mostly by the linker (because ELF shared libraries permit that).
The easiest ("old-school") solution is to simply not declare the variable in the intended public header.
Split your libraries header into "header.h" and "header-internal.h", and declare internal stuff in the latter one.
Of course, you should also take care to protect your library-global variable's name so that it doesn't collide with user code; presumably you already have a prefix that you use for the functions for this purpose.
You can also wrap the variable(s) in a struct, to make it cleaner since then only one actual symbol is globally visible.
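A minimal sketch of that split, with made-up names (the mylib_ prefix and header-internal.h are placeholders for illustration):
/* header.h - public API, shipped to users */
int mylib_do_something(int x);

/* header-internal.h - only ever included by the library's own .c files */
extern int mylib_shared_state;      /* shared across the library's translation units */

/* state.c */
#include "header-internal.h"
int mylib_shared_state = 0;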
You can obfuscate things with disguised structs, if you really want to hide the information as best as possible. e.g. in a header file,
struct data_s {
    void *v;
};
And somewhere in your source:
struct data_s data;
struct gbs {
    // declare all your globals here
} gbss;
and then:
data.v = &gbss;
You can then access all the globals via: ((struct gbs *)data.v)->
I know that this will not be what you literally intended, but you can leave the global variables static and divide them into multiple source files.
Put the functions that write to the corresponding static variable into the same source file, also declared static.
Declare functions that read the static variable, so that external source files of the same module can read its value.
In a way, this makes the variables less global. If possible, the best logic for breaking big files into smaller ones is to make that decision based on the data.
If that is not possible, then you can lump all the global variables into one source file as static and access them from the other source files of the module through functions, making it official - so if someone is manipulating your global variables, at least you know how.
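A minimal sketch of those accessor functions (all names are made up for illustration):
/* module_state.c - module-private variable behind accessor functions (sketch) */
static int s_counter;                          /* visible only in this translation unit */

void module_counter_increment(void) { s_counter++; }
int  module_counter_get(void)       { return s_counter; }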
But then it probably is better to use #unwind's method.
