I'm a C++ developer, and when it comes to testing, it's easy to test a C++ class by injecting dependencies, overriding member functions, and so on, so that edge cases are simple to exercise. In C, however, you can't use those wonderful features. I'm finding it hard to add unit tests to code because of some of the 'standard' ways that C code is written. What are the best ways to tackle the following?
Passing around a large 'context' struct pointer:
void some_func( global_context_t *ctx, .... )
{
    /* lots of code, depending on the state of context */
}
No easy way to test failure on dependent functions:
void some_func( .... )
{
    if (!get_network_state() && !some_other_func()) {
        do_something_func();
        ....
    }
    ...
}
Functions with lots of parameters:
void some_func( global_context_t *, int i, int j, other_struct_t *t, out_param_t **out, ...)
{
    /* hundreds and hundreds of lines of code */
}
Static or hidden functions:
static void foo( ... )
{
    /* some code */
}

void some_public_func( ... )
{
    /* call static functions */
    foo( ... );
}
In general, I agree with Wes's answer - it is going to be much harder to add tests to code that isn't written with tests in mind. There's nothing inherent in C that makes it impossible to test - but, because C doesn't force you to write in a particular style, it's also very easy to write C code that is difficult to test.
In my opinion, writing code with tests in mind will encourage shorter functions, with few arguments, which helps alleviate some of the pain in your examples.
First, you'll need to pick a unit testing framework. There are plenty of suggestions in this question (though, sadly, many of the answers are C++ frameworks; I would advise against using C++ to test C).
I personally use TestDept, because it is simple to use, lightweight, and allows stubbing. However, I don't think it is very widely used yet. If you're looking for a more popular framework, many people recommend Check - which is great if you use automake.
Here are some specific answers for your use cases:
Passing around a large 'context' struct pointer
For this case, you can build an instance of the struct with the preconditions manually set, then check the state of the struct after the function has run. With short functions, each test will be fairly straightforward.
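For example, a test might look something like this sketch. The context fields, the STATE_* values, and the extra arguments are hypothetical, and the asserts would be whatever your framework provides:

#include <assert.h>

void test_some_func_flags_error_on_bad_state(void)
{
    global_context_t ctx = {0};          /* start from a known, zeroed state */
    ctx.state = STATE_IDLE;              /* arrange the precondition         */

    some_func(&ctx /*, ... other args */);

    assert(ctx.state == STATE_ERROR);    /* check the expected postconditions */
    assert(ctx.error_count == 1);
}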
No easy way to test failure on dependent functions
I think this is one of the biggest hurdles with unit testing C.
I've had success using TestDept, which allows run time stubbing of dependent functions. This is great for breaking up tightly coupled code. Here's an example from their documentation:
void test_stringify_cannot_malloc_returns_sane_result() {
    replace_function(&malloc, &always_failing_malloc);
    char *h = stringify('h');
    assert_string_equals("cannot_stringify", h);
}
Depending on your target environment, this may or may not work for you. See their documentation for more details.
Functions with lots of parameters
This probably isn't the answer you're looking for, but I would just break these up into smaller functions with fewer parameters. They are much, much easier to test.
Static or hidden functions
It's not super clean, but I have tested static functions by including the source file directly, which makes them callable from the test file. Combined with TestDept for stubbing out anything not under test, this works fairly well.
#include "implementation.c"
/* Now I can call foo(), defined static in implementation.c */
A lot of C code is legacy code with few tests - and in those cases, it is generally easier to add integration tests that test large parts of the code first, rather than finely grained unit tests. This allows you to start refactoring the code underneath the integration test to a unit-testable state - though it may or may not be worth the investment, depending on your situation. Of course, you'll want to be able to add unit tests to any new code written during this period, so having a solid framework up and running early is a good idea.
If you are working with legacy code, Working Effectively with Legacy Code by Michael Feathers is great further reading.
That was a very good question designed to lure people into believing that C++ is better than C because it's more testable. However, it's hardly that simple.
Having written lots of testable C++ and C code, and an equally impressive amount of untestable C++ and C code, I can confidently say you can write crappy, untestable code in both languages. In fact, most of the issues you present above are just as problematic in C++. For example, lots of people write functions in C++ that aren't encapsulated in objects and use them inside classes (see the extensive use of static functions within classes, such as MyAscii::fromUtf8()-type functions).
And I'm quite sure you've seen a gazillion C++ class functions with too many parameters. And if you think a function is better just because it only has one parameter, consider that internally it is frequently hiding the rest of its inputs behind a bunch of member variables. Let alone "static or hidden" functions (hint: remember that "private:" keyword) being just as big a problem.
So the real answer to your question isn't "C is worse for exactly the reasons you state" but rather "you need to architect it properly in C, just as you would in C++". For example, if you have dependent functions, put them in a different file and, when testing the calling function, link in a bogus version of that file that returns whatever answers you need. And that's only the barely-getting-by change. Don't make functions static or hidden if you want to test them.
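For example, a bogus version for the test build might look something like this sketch (the function names echo the snippets in the question; the test build links this file instead of the real implementation):

/* network_stub.c - linked into the test executable instead of network.c */
static int fake_network_state;

void stub_set_network_state(int up)   /* the test calls this to choose the answer */
{
    fake_network_state = up;
}

int get_network_state(void)           /* same signature as the real function */
{
    return fake_network_state;
}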
The real problem is that you seem to be writing tests for someone else's library, one you didn't write or architect for proper testability. However, there are a ton of C++ libraries that exhibit exactly the same symptoms, and if you were handed one of them to test, you'd be just as annoyed.
The solution to all problems like this is always the same: write the code properly and don't use someone else's improperly written code.
When unit testing C you normally include the .c file in the test so you can first test the static functions before you test the public ones.
If you have complex functions and you want to test the code calling them, it is possible to work with mock objects. Take a look at the cmocka unit testing framework, which offers support for them.
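As a minimal sketch of what that can look like, assuming the dependency is redirected at link time with the GNU linker's --wrap option (the function names are borrowed from the question and simplified):

#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

void some_func(void);                     /* code under test (simplified) */

/* Built with -Wl,--wrap=get_network_state so calls land here. */
int __wrap_get_network_state(void)
{
    return mock_type(int);                /* value queued by will_return() */
}

static void test_some_func_when_network_down(void **state)
{
    (void)state;
    will_return(__wrap_get_network_state, 0);   /* simulate "network down" */
    some_func();
    /* assert on whatever effect some_func() should have had */
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_some_func_when_network_down),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}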
I'm creating a project for a microcontroller, programming it in C. Because of the chip's specifics (a microcontroller with a built-in BLE core), I have to use the vendor's SDK and a specific project template. How can I test my modules when they have numerous references to other files (modules) in the SDK? (The references are needed to call functions that, for example, send data via BLE.) Do I have to somehow mock each of the SDK functions? I am using the Unity test framework.
Module example:
my_module.c
#include "sdk_module_1.h"
#include "my_module.h"

void init_hardware(void)
{
    //function code
}

bool send_data(int data)
{
    //prepare data, e.g.
    data++;
    //send data using the SDK function from sdk_module_1.h
    return sdk_send_data(data);
}
my_module.h
void init_hardware(void);
bool send_data(int data);
my_module_test.c
#include "my_module.h"
#include "unity_fixture.h"

TEST_GROUP(Test);

TEST_SETUP(Test)
{
}

TEST_TEAR_DOWN(Test)
{
}

TEST(Test, First_test)
{
    TEST_ASSERT_EQUAL(true, send_data(5));
}
When I try to test my module, I have a problem with referencing SDK modules and their functions. How can I create tests for such software? Should I change the way my modules are written?
The resource you want is James Grenning's Test Driven Development for Embedded C.
(Note: what follows below is a translation of my ideas into C. If you find a conflict with Grenning's approach, try his first - he has a lot more laps in the embedded space than I do.)
How can I test my modules when they have numerous references to other files (modules) in the SDK?
Sometimes the answer is that you have to change your design. In other words, treating testability as a design constraint, rather than an afterthought.
The way I normally describe it is this: we want to design our code such that (a) all the complicated code is easy to test and (b) anything that's hard to test is "so simple there are obviously no deficiencies".
This often means designing our code so that collaborations between complicated code and hard to test code are configurable, allowing you to provide a substitute implementation (stub/mock/test double) when using the real thing isn't cost effective.
So instead of having A (complicated) directly invoke B (hard to test), we might instead have A invoke B via a function pointer that, during testing, can be replaced with a pointer to a simpler function.
(In some styles, this gets reversed: the complicated logic points to an inert/stub implementation by default, and you opt in to using the complicated implementation instead.)
In other words, we replace
void A(void) {
    B(); // B is the function that makes things hard to test.
}

with

void A(void) {
    C(&B);
}

// It's been a long time, please forgive (or correct) the spelling here
void C(void (*fn)(void)) {
    fn();
}
We test A by looking at it, agreeing that it is "so simple there are obviously no deficiencies", and signing off on it. We test C by passing it pointers to substitute implementations, writing as many substitutes as we need for B to ensure that all of C's edge cases are covered.
Note that "hard to test" can cover a lot of bases - the real function is slow, the real function is unstable, the real function costs money... if it makes testing less pleasant or less effective, then it counts.
Firstly, I would highly recommend using Ceedling to manage and run your unit tests in C. It wraps Unity and CMock very nicely and makes unit testing, particularly for embedded systems, a lot easier than it otherwise would be.
With regards to unit testing when an SDK is involved, you first need to remember that the point of a unit test is to test a unit, so anything outside of that needs to be mocked or stubbed, otherwise it’s not a unit test and cannot be run as one.
So, for something like an I2C hardware abstraction module, any calls to an underlying SDK (that would ordinarily do the actual I2C transaction) need to be mocked so that you can place expectations on what should happen when the call is made instead of the real thing. In this way, the unit test of the I2C HAL module is exercising only its behaviour and how it handles its calls, and nothing else, as it should.
Therefore, all that you generally need to do to unit test with an SDK is to ensure that any module you use has its public functions mocked. You then simply include the mock in the test, rather than the real module.
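As a sketch, using the module from the question and assuming Ceedling/CMock has generated mock_sdk_module_1.h (the SDK function name sdk_send_data is a stand-in):

/* test_my_module.c */
#include "unity.h"
#include "my_module.h"
#include "mock_sdk_module_1.h"   /* generated mock of the SDK header */

void setUp(void)    {}
void tearDown(void) {}

void test_send_data_passes_prepared_value_to_sdk(void)
{
    /* Expect the SDK call with the incremented value and choose its return. */
    sdk_send_data_ExpectAndReturn(6, true);

    TEST_ASSERT_TRUE(send_data(5));
}

The mock both verifies the interaction and lets you choose the SDK's return value, so failure paths are just as easy to exercise as the happy path.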
Now, you don't really want to mess with the original SDK and its structure for the purpose of unit testing, and you don't want to change how the real code includes the real modules either. The problem for the test then comes when the SDK function you're calling sits behind layers of other modules in the SDK, as is often the case. Here, you don't want to mock everything you're not using, but you do need to match the include structure. I've found that a good way to do this is simply to create a support folder and copy into it any top-level headers that the code uses from the SDK. I then configure the test to include from support before it includes from the SDK.
Once you have this kind of structure in place, unit testing with an SDK is easy. Ceedling will help with all of this.
In our environment we're encountering a problem regarding mocking functions for our library unit tests.
The thing is that instead of mocking whole modules (.c files) we would like to mock single functions.
The library is compiled to an archive file and linked statically to the unit test. Without mocking there isn't any issue.
Now, when trying to mock single functions of the library, we would obviously get multiple-definition errors.
My approach now is to use the weak function attribute when compiling/linking the library so that the linker takes the mocked (non-weak) function when linking against the unit test. I already tested it and it seems to work as expected.
The downside of this is that we need many attribute declarations in the code.
My preferred approach would be to pass some compile or link flag so that every function is automatically declared as a weak symbol.
The question now is: Is there anything to do this in a nice way?
btw: We use clang 8 as a compiler.
James Grenning describes several options to solve this problem (http://blog.wingman-sw.com/linker-substitution-in-c-limitations-and-workarounds). The option "function pointer substitution" gives a high degree of freedom. It works as follows: Replace functions by pointers to functions. The function pointers are initialized to point to the original function, but each pointer can be redirected individually to a test double.
This approach lets you have a single test executable in which you can still decide, for each test case individually, which functions to replace with a test double and which to keep as the originals.
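A sketch of the pattern, with illustrative names:

/* network.h */
extern int (*get_network_state)(void);        /* callers go through the pointer */

/* network.c */
static int real_get_network_state(void)
{
    /* ... talk to the real hardware/stack ... */
    return 1;
}
int (*get_network_state)(void) = real_get_network_state;

/* in a test */
static int failing_network_state(void) { return 0; }

void test_behaviour_when_network_is_down(void)
{
    int (*saved)(void) = get_network_state;
    get_network_state  = failing_network_state;   /* redirect to the test double */

    /* ... call the code under test and assert ... */

    get_network_state = saved;                    /* restore for the next test */
}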
It certainly also comes at a price:
One indirection for each call. But, if you use link-time-optimization the optimizer will most likely eliminate that indirection again, so this may not be an issue.
It also becomes possible to redirect function calls in production code. That would certainly be a misuse of the concept, however.
I would suggest using VectorCAST
https://www.vector.com/us/en/products/products-a-z/software/vectorcast/
I've used Unity/CMock and others for unit testing C in the past, but after a while it's very tedious to create these by hand for a language that isn't really built around that concept and is very much a "here's a hammer and a chisel, the world is yours" approach.
VectorCAST abstracts away the majority of the manual work required by tools like Unity/CMock; we can get results across a project or module sooner than we did in the past with the other tools.
Is VectorCAST expensive and very much an enterprise-level tool? Yes... but it's definitely worth its weight in gold. And that's coming from someone who takes a very old-school, manual approach to software development... just text editors, terminals, and command-line debuggers.
VectorCAST handles function pointers and pointers extremely well; stubbing a function is as easy as two clicks. It saved our team a lot of time... allowing us to focus on results and shortening the development feedback loop.
For a project I'm working on, there are a number of states where calculations can be relied upon to return the same results (and have no side effects). The obvious solution would be to use memoization for all the costly functions.
I would need to have memoization that handles more than one state (so that I could invalidate one cache set without invalidating another). Does anybody know a good C library for this sort of thing? (Note that it can't be C++, we're talking C.)
I've worked with some good implementations in Python that use decorators to flexibly memoize a bunch of different functions. I'm wondering if there's a generic library that could do similar things in C (though probably with explicit function wrapping rather than convenient syntax). I just think it would be silly to have to add caching to each function individually when it's a common enough issue that there must be some off-the-shelf solutions for it.
The characteristics I would look for are the following:
Can cache functions with various types of input and output
Manages multiple different caches (so you can have short-term and long term caching)
Has good functions for invalidating caches
Intended to be used by wrapping functions, rather than altering existing functions
Anybody know a C implementation that can handle all or most of these requisites?
Okay, seeing as there were no memoization libraries for C and I was looking for a drop-in solution for memoizing existing C functions in a code base, I made my own little memoization library that I'm releasing under the APL 2.0. Hopefully people will find this useful and it won't crash and burn on other compilers. If it does have issues, message me here and I'll look into it whenever I have the time (which would probably be measured in increments of months).
This library is not built for speed, but it works and has been tested to make sure it is fairly straightforward to use and doesn't display any memory leaks in my testing. Fundamentally, this lets me add memoization to functions similar to the decorator pattern that I'm used to in Python.
The library is currently on SourceForge as the C-Memo Library. It comes with a little user manual and a couple of 3rd party permissively licensed libraries for generic hashing. If the location changes, I'll try to update this link. I found this helpful in working on my project, hopefully others will find it useful for their projects.
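For anyone who just wants the flavour of the approach, here is a very small sketch of the wrapping idea for an int -> int function. This is not the C-Memo API, just an illustration; separate memo_t instances give you independently invalidatable caches:

#include <stdbool.h>
#include <stddef.h>

typedef int (*int_fn)(int);

typedef struct {
    int  key;
    int  value;
    bool valid;
} memo_entry_t;

#define MEMO_SLOTS 64

typedef struct {
    int_fn       fn;                  /* the wrapped function        */
    memo_entry_t slots[MEMO_SLOTS];   /* trivial direct-mapped cache */
} memo_t;

void memo_init(memo_t *m, int_fn fn)
{
    *m = (memo_t){ .fn = fn };
}

void memo_invalidate(memo_t *m)
{
    for (size_t i = 0; i < MEMO_SLOTS; ++i)
        m->slots[i].valid = false;
}

int memo_call(memo_t *m, int key)
{
    memo_entry_t *e = &m->slots[(unsigned)key % MEMO_SLOTS];
    if (!e->valid || e->key != key) {     /* miss: compute and store */
        e->key   = key;
        e->value = m->fn(key);
        e->valid = true;
    }
    return e->value;
}

Usage would be memo_t m; memo_init(&m, expensive_fn); and then memo_call(&m, x) wherever expensive_fn(x) used to appear.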
Memoization is all but built into the Haskell language. You can call this functionality from C.
Update:
I'm still learning about functional programming, but I do know that memoization is fairly common in functional programming because the language features make it easy. I'm learning F#. I don't know Haskell, but it is the only functional language I know of that will interoperate with C. You might be able to find another functional language that interfaces with C in a more suitable fashion than Haskell does.
Why can't it just be C++?
Just as a starting point, look at this memoization function.
Declaration:
template<typename T, typename F>
auto Memoize(T key, F function) {
    // Remembers only the most recent key/value pair (per instantiation).
    static T memory_key = key;
    static auto memory = function(memory_key);
    if (memory_key != key) {
        memory_key = key;
        memory = function(memory_key);
    }
    return memory;
}
Usage example:
auto index = Memoize(value, IndexByLetter);
One aspect where C shows its age is the encapsulation of code. Many modern languages have classes, namespaces, packages... much more convenient ways to organize code than a simple "include".
Since C is still the main language for many huge projects, how do you overcome its limitations?
I suppose that one main factor is lots of discipline. I would like to know what you do to handle large quantities of C code, and which authors or books you can recommend.
Separate the code into functional units.
Build those units of code into individual libraries.
Use hidden symbols within libraries to reduce namespace conflicts.
Think of open source code. There is a huge amount of C code in the Linux kernel, the GNU C library, the X Window system and the Gnome desktop project. Yet, it all works together. This is because most of the code does not see any of the other code. It only communicates by well-defined interfaces. Do the same in any large project.
Some people don't like it, but I am an advocate of organizing my structs and associated functions together as if they were a class whose this pointer is passed explicitly, combined with a consistent naming convention to make the namespace explicit. A header would look something like this:
typedef struct foo {
    int x;
    double y;
} FOO_T;

FOO_T * foo_new(void);
int foo_set_x(FOO_T * self, int arg1);
int foo_do_bar(FOO_T * self, int arg1);
FOO_T * foo_delete(FOO_T * self);
In the implementation, all the "private" functions would be static. The downside is that you can't actually enforce that the user not go and muck with the members of the struct. That's just life in C. I find, though, that this style makes for nicely reusable C types.
A good way to achieve some encapsulation is to declare a module's internal functions and variables as static. For example:
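A quick sketch, with illustrative names:

/* widget.c */
static int refcount;             /* state private to this translation unit */

static void sanity_check(void)   /* internal helper, invisible to other files */
{
    /* ... */
}

void widget_acquire(void)        /* only this is declared in widget.h */
{
    sanity_check();
    ++refcount;
}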
As Andres says, static is your friend. But speaking of friends... if you want to be able to split a library across two files, then the symbols from one file that need to be visible in the other cannot be static.
Decide on some naming conventions: all non-static symbols from library foo start with foo_. And make sure they are always followed: it is precisely the symbols where the convention feels constraining ("I have to call it foo_max?! But it's just max!") that will clash.
As Zan says, a typical Linux distribution can be seen as a huge project written mostly in C. It works. There are interfaces, and large subprojects are implemented as separate processes. An implementation in separate processes helps with debugging, testing, and code reuse, and it provides a second hierarchy in addition to the only one that exists at link level. When your project becomes large enough, it may start to make sense to put some of the functionality in separate processes. Even something as specialized as a C compiler is typically implemented as three processes: preprocessor, compiler, assembler.
If you can control the project (e.g. it's in-house or you pay someone else to do it), you can simply set rules and use reviews and tools to enforce them. There is no real need for the language to do this; you can, for instance, demand that all functions usable outside a module (= a set of files, which doesn't even need to be a separate library) must be marked as such. In effect, you force the developers to think about the interfaces and stick with them.
If you really want to make the point, you could define macros to show this as well, e.g.
#define PUBLIC
#define PRIVATE static
or some such.
So you are right, discipline is the key here. It involves setting the rules AND making sure that they are followed.
I have little more than beginner-level C skills and would like to know if there are any de facto "standards" to structure a somewhat complex application in C. Even GUI based ones.
I have always used the OO paradigm in Java and PHP, and now that I want to learn C I'm afraid I might structure my applications the wrong way. I'm at a loss on which guidelines to follow to achieve modularity, decoupling, and DRY code in a procedural language.
Do you have any readings to suggest? I couldn't find any application framework for C; even if I don't use frameworks, I've always found nice ideas by browsing their code.
The key is modularity. This is easier to design, implement, compile and maintain.
Identify modules in your app, like classes in an OO app.
Separate interface and implementation for each module, put in interface only what is needed by other modules. Remember that there is no namespace in C, so you have to make everything in your interfaces unique (e.g., with a prefix).
Hide global variables in the implementation and use accessor functions for read/write access (see the sketch at the end of this answer).
Don't think in terms of inheritance, but in terms of composition. As a general rule, don't try to mimic C++ in C, this would be very difficult to read and maintain.
If you have time for learning, take a look at how an Ada app is structured, with its mandatory package (module interface) and package body (module implementation).
This is for coding.
For maintenance (remember that you code once, but you maintain several times), I suggest documenting your code; Doxygen is a nice choice for me. I also suggest building a strong regression test suite, which allows you to refactor.
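For the point about hiding global variables behind accessors, a minimal sketch with illustrative names:

/* counter.h - the module interface exposes only accessors */
int  counter_get(void);
void counter_set(int value);

/* counter.c - the variable itself never leaves this file */
static int counter;

int  counter_get(void)      { return counter; }
void counter_set(int value) { counter = value; }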
It's a common misconception that OO techniques can't be applied in C. Most can -- it's just that they are slightly more unwieldy than in languages with syntax dedicated to the job.
One of the foundations of robust system design is the encapsulation of an implementation behind an interface. FILE* and the functions that work with it (fopen(), fread() etc.) is a good example of how encapsulation can be applied in C to establish interfaces. (Of course, since C lacks access specifiers you can't enforce that no-one peeks inside a struct FILE, but only a masochist would do so.)
If necessary, polymorphic behaviour can be had in C using tables of function pointers. Yes, the syntax is ugly but the effect is the same as virtual functions:
struct IAnimal {
    int (*eat)(int food);
    int (*sleep)(int secs);
};

/* "Subclass"/"implement" IAnimal, relying on C's guaranteed equivalence
 * of memory layouts */
struct Cat {
    struct IAnimal _base;
    int (*meow)(void);
};

int cat_eat(int food) { ... }
int cat_sleep(int secs) { ... }
int cat_meow(void) { ... }

/* "Constructor" */
struct Cat* CreateACat(void) {
    struct Cat* x = (struct Cat*) malloc(sizeof (struct Cat));
    x->_base.eat = cat_eat;
    x->_base.sleep = cat_sleep;
    x->meow = cat_meow;
    return x;
}

struct IAnimal* pa = CreateACat();
pa->eat(42);                  /* Calls cat_eat() */
((struct Cat*) pa)->meow();   /* "Downcast" */
All good answers.
I would only add "minimize data structure". This might even be easier in C, because if C++ is "C with classes", OOP is trying to encourage you to take every noun / verb in your head and turn it into a class / method. That can be very wasteful.
For example, suppose you have an array of temperature readings at points in time, and you want to display them as a line chart in Windows. Windows sends a WM_PAINT message, and when you receive it, you can loop through the array calling LineTo, scaling the data as you go to convert it to pixel coordinates.
What I have seen entirely too many times is this: since the chart consists of points and lines, people build a data structure of point objects and line objects, each capable of DrawMyself, and then make it persistent, on the theory that this is somehow "more efficient", or that they might, just maybe, need to mouse over parts of the chart and display the data numerically. So they build methods into the objects to deal with that, which of course involves creating and deleting even more objects.
So you end up with a huge amount of code that is oh-so-readable and merely spends 90% of its time managing objects.
All of this gets done in the name of "good programming practice" and "efficiency".
At least in C the simple, efficient way will be more obvious, and the temptation to build pyramids less strong.
The GNU coding standards have evolved over a couple of decades. It'd be a good idea to read them, even if you don't follow them to the letter. Thinking about the points raised in them gives you a firmer basis on how to structure your own code.
If you know how to structure your code in Java or C++, then you can follow the same principles with C code. The only difference is that you don't have the compiler at your side and you need to do everything extra carefully manually.
Since there are no packages and classes, you need to start by carefully designing your modules. The most common approach is to create a separate source folder for each module. You need to rely on naming conventions to differentiate code between modules; for example, prefix all functions with the name of the module.
You can't have classes in C, but you can easily implement "abstract data types". You create a .c and .h file for every abstract data type. If you prefer, you can have two header files, one public and one private. The idea is that all structures, constants, and functions that need to be exported go into the public header file.
Your tools are also very important. A useful tool for C is lint, which can help you find bad smells in your code. Another tool you can use is Doxygen, which can help you generate documentation.
Encapsulation is always key to a successful development, regardless of the development language.
A trick I've used to help encapsulate "private" methods in C is to not include their prototypes in the ".h" file.
I'd suggest you check out the code of any popular open-source C project, like... hmm... the Linux kernel, or Git, and see how they organize it.
The number one rule for a complex application: it should be easy to read.
To make a complex application simpler, I employ divide and conquer.
I would suggest reading a C/C++ textbook as a first step. For example, C Primer Plus is a good reference. Looking through the examples would give you an idea of how to map your Java OO habits onto a more procedural language like C.