I am building a project for a microcontroller, programming it in C. Because of the part's specifics (a microcontroller with a built-in BLE core), I have to use the vendor's SDK and a specific project template. How can I test my modules when they have numerous references to other files (modules) in the SDK? (The references are needed, for example, to call the functions that send data over BLE.) Do I have to mock each of the SDK functions somehow? I am using the Unity test framework.
Module example:
my_module.c

#include "sdk_module_1.h"
#include "my_module.h"

void init_hardware(void)
{
    /* function code */
}

bool send_data(int data)
{
    /* prepare the data, e.g. */
    data++;
    /* send the data using the SDK function (declared in sdk_module_1.h) */
    return sdk_send_data(data);
}
my_module.h

#include <stdbool.h>

void init_hardware(void);
bool send_data(int data);
my_module_test.c

#include "my_module.h"
#include "unity.h"
#include "unity_fixture.h"

TEST_GROUP(Test);

TEST_SETUP(Test)
{
}

TEST_TEAR_DOWN(Test)
{
}

TEST(Test, First_test)
{
    TEST_ASSERT_EQUAL(true, send_data(5));
}
When I try to test my module, I have a problem with referencing SDK modules and their functions. How can I create tests for such software? Should I change the way my modules are written?
The resource you want is James Grenning's Test Driven Development for Embedded C.
(Note: what follows below is a translation of my ideas into C. If you find a conflict with Grenning's approach, try his first - he has a lot more laps in the embedded space than I do.)
How can I test my modules when they have numerous references to other files (modules) in the SDK?
Sometimes the answer is that you have to change your design. In other words, treating testability as a design constraint, rather than an afterthought.
The way I normally describe it is this: we want to design our code such that (a) all the complicated code is easy to test and (b) anything that's hard to test is "so simple there are obviously no deficiencies".
This often means designing our code so that collaborations between complicated code and hard to test code are configurable, allowing you to provide a substitute implementation (stub/mock/test double) when using the real thing isn't cost effective.
So instead of having A (complicated) directly invoke B (hard for testing), we might instead have A invoke B via a function pointer that, during testing, can be replaced with a pointer to a simpler function.
(In some styles, this gets reversed: the complicated logic points to an inert/stub implementation by default, and you opt in to using the complicated implementation instead.)
In other words, we replace
void A(void) {
    B(); // B is the function that makes things hard to test.
}
with
void A(void) {
    C(&B);
}

// It's been a long time, please forgive (or correct) the spelling here
void C(void (*fn)(void)) {
    fn();
}
We test A by looking at it, agreeing that it is "so simple there are obviously no deficiencies", and signing off on it. We test C by passing it pointers to substitute implementations, writing as many substitutes as we need for B to ensure that all of C's edge cases are covered.
Note that "hard to test" can cover a lot of bases - the real function is slow, the real function is unstable, the real function costs money... if it makes testing less pleasant or less effective, then it counts.
Firstly, I would highly recommend using Ceedling to manage and run your unit tests in C. It wraps Unity and CMock very nicely and makes unit testing, particularly for embedded systems, a lot easier than it otherwise would be.
With regards to unit testing when an SDK is involved, you first need to remember that the point of a unit test is to test a unit, so anything outside of that needs to be mocked or stubbed, otherwise it’s not a unit test and cannot be run as one.
So, for something like an I2C hardware abstraction module, any calls to an underlying SDK (that would ordinarily do the actual I2C transaction) need to be mocked so that you can place expectations on what should happen when the call is made instead of the real thing. In this way, the unit test of the I2C HAL module is exercising only its behaviour and how it handles its calls, and nothing else, as it should.
Therefore, all that you generally need to do to unit test with an SDK is to ensure that any module you use has its public functions mocked. You then simply include the mock in the test, rather than the real module.
Now, I find that you don’t really want to mess with the original SDK and its structure for the purpose of unit testing, and moreover you don’t want to change how the real code includes the real modules, so the problem for the test then comes when the SDK function you’re calling in your code sits behind layers of other modules in the SDK, as is often the case. Here, you don’t want to mock everything you’re not using, but you do need to match the include structure. I’ve found that a good way to do this is simply to create a support folder and copy any top-level headers that the code uses from the SDK into it. I then configure the test to include from support before it includes from the SDK.
Once you have this kind of structure in place, unit testing with an SDK is easy. Ceedling will help with all of this.
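With Ceedling in place, a test for the question's module might look roughly like this (a sketch: mock_sdk_module_1.h is the mock CMock would generate from the SDK header, and sdk_send_data() is a hypothetical SDK function):

#include "unity.h"
#include "my_module.h"
#include "mock_sdk_module_1.h"  /* generated by CMock from the support copy of the header */

void setUp(void) {}
void tearDown(void) {}

void test_send_data_forwards_incremented_value(void)
{
    /* expect the SDK call with the incremented value and have it report success */
    sdk_send_data_ExpectAndReturn(6, true);
    TEST_ASSERT_TRUE(send_data(5));
}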
Related
In our environment we're encountering a problem regarding mocking functions for our library unit tests.
The thing is that instead of mocking whole modules (.c files) we would like to mock single functions.
The library is compiled to an archive file and linked statically to the unit test. Without mocking there isn't any issue.
Now, when trying to mock single functions of the library, we would obviously get multiple-definition errors.
My approach now is to use the weak function attribute when compiling/linking the library so that the linker takes the mocked (non-weak) function when linking against the unit test. I already tested it and it seems to work as expected.
The downside of this is that we need many attribute declarations in the code.
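A minimal sketch of the pattern (the function name is hypothetical):

/* in the library, compiled into the archive */
__attribute__((weak)) int lib_compute(int x)
{
    return x * 2;   /* real implementation; overridable at link time */
}

/* in the unit test: this strong definition wins over the weak one */
int lib_compute(int x)
{
    (void)x;
    return 99;      /* canned result for the test */
}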
Ideally, I would like to pass some compile or link flag to the compiler so that every function is automatically emitted as a weak symbol.
The question now is: Is there anything to do this in a nice way?
btw: We use clang 8 as a compiler.
James Grenning describes several options to solve this problem (http://blog.wingman-sw.com/linker-substitution-in-c-limitations-and-workarounds). The option "function pointer substitution" gives a high degree of freedom. It works as follows: Replace functions by pointers to functions. The function pointers are initialized to point to the original function, but each pointer can be redirected individually to a test double.
This approach allows you to have one single test executable in which you can still decide, for each test case individually, which functions use a test double and which use the original function (see the sketch after the caveats below).
It certainly also comes at a price:
One indirection for each call. But, if you use link-time-optimization the optimizer will most likely eliminate that indirection again, so this may not be an issue.
You make it possible to redirect function calls also in production code. This would certainly be a misuse of the concept, however.
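A minimal sketch of the pattern (module and function names are made up for illustration):

/* sensor.h */
int read_sensor_impl(void);          /* the original function */
extern int (*read_sensor)(void);     /* callers go through this pointer */

/* sensor.c */
int read_sensor_impl(void)
{
    /* talk to the real hardware here */
    return 0;
}

int (*read_sensor)(void) = read_sensor_impl;  /* points at the original by default */

/* in a test */
static int fake_read_sensor(void) { return 42; }

void test_with_a_double(void)
{
    read_sensor = fake_read_sensor;   /* redirect just this one function */
    /* ... exercise the code under test ... */
    read_sensor = read_sensor_impl;   /* restore so other tests see the original */
}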
I would suggest using VectorCAST
https://www.vector.com/us/en/products/products-a-z/software/vectorcast/
I've used Unity/CMock and others for unit testing C in the past, but after a while it's very tedious to create these manually for a language that isn't really built around that concept and is very much a "here's a hammer and chisel, the world is yours" approach.
VectorCAST abstracts away the majority of the manual work required with tools like Unity/CMock; we can get results across a project/module sooner than we did in the past with the other tools.
Is VectorCAST expensive and very much an enterprise-level tool? Yes... but it's definitely worth its weight in gold. And that's coming from someone who is very old school in their approach to software development... just text editors, terminals and command-line debuggers.
VectorCAST handles function pointers and pointers extremely well; stubbing a function is as easy as two clicks. It saved our team a lot of time... allowing us to focus on results and tightening the development feedback loop.
I know the basics of test doubles, mocking, etc., but I'm having problems testing the following:
void funA(void) {
    /* do some stuff, using mocked functions from 3rd party library */
}
I've written the unit tests for funA(), checking that the right functions were called (using their mocked implementations).
So far, the mocked functions are library functions. This is not my code. I don't want to test their original implementation.
Now, I want to test this function
void funB(void) {
    /* do some complicated stuff, and call `funA()` in some situations */
}
How can I be sure my funA() function was called from funB()? I can't swap in a fake implementation of funA(), because I need its production code in the build so that funA() itself can be tested.
What I am doing now is making sure the mocks that funA is calling are as I expect them to be. But it's not a good method, because it's like I'm testing funA all over again, when I just want to make sure funB does its job.
After discussing it (see the comments on the original question), and having a brief forum exchange with James Grenning (one of the authors of CppUTest), the main solutions are the following:
Having different test builds for funA() and funB()
Using function pointers to dynamically change the behaviour
I'm not a huge fan of either of the solutions, but it feels like I can't do much more in C. I will eventually go for the multiple binaries solution.
For reference, here is James Grenning's answer:
You may want to mock A() when testing B().
For example: if I have a message_dispatcher() that reads a command from a serial port via getline(), and getline() uses getc(), and getc() uses IORead and IOWrite, I could mock IORead and IOWrite and have a set of horrible tests to test the message_dispatcher().

Or I could test getc() with mock IORead() and IOWrite(), test getline() with some kind of fake_getc(), and test message_dispatcher() with fake_getline(). If you only use linker substitution, you would need three test builds. If you used function pointers you could do one test build. You can mix and match too.

getc() should be tested with link-time mocks of IORead and IOWrite, because your unit tests never want the real IORead and IOWrite for off-target tests (they may be needed for on-target tests, but those would be integration tests).

There are many possibilities. You could also have some other code call getline() and feed the result to the dispatcher, removing its dependencies.
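To make the one-build option concrete, here is a minimal sketch of the fake_getline() idea (the names are taken from the example above; the pointer plumbing is my own illustration):

#include <stdio.h>
#include <string.h>

typedef int (*getline_fn_t)(char *buf, int len);

static int real_getline(char *buf, int len)
{
    /* production path: built on getc(), which sits on IORead/IOWrite */
    return (fgets(buf, len, stdin) != NULL) ? (int)strlen(buf) : 0;
}

getline_fn_t dispatcher_getline = real_getline;   /* tests can redirect this */

int message_dispatcher(void)
{
    char line[64];
    if (dispatcher_getline(line, sizeof line) <= 0)
        return -1;
    return (strcmp(line, "PING\n") == 0) ? 0 : -1; /* dispatch the command */
}

/* test double: feeds a canned command, no serial port involved */
static int fake_getline(char *buf, int len)
{
    strncpy(buf, "PING\n", (size_t)len);
    buf[len - 1] = '\0';
    return (int)strlen(buf);
}

void test_dispatcher_handles_ping(void)
{
    dispatcher_getline = fake_getline;
    /* assert message_dispatcher() == 0 with your framework */
}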
I am trying to implement TDD in my C coding. I am building the program structure in quite a modularised way, keeping the functions as atomic as possible, and I write one test file (containing several suites) per module (module = header file + source file). I am struggling to make the program files "not know that they are being tested"; in other words, I don't want test-only code in the production program. Therefore I almost always need to include the source file in the test file in order to get access to the "private" (static) variables and functions.
That was the intro; now the problem: if in a module I have an aaa() function, which internally uses a bbb() function, which in turn uses some xxx() function from an external module, I can easily test bbb() atomically by mocking xxx(): #define xxx mock_xxx, plus providing a mock xxx module for the #include. However, I am unable to find a way to atomically test the aaa() function, which uses a function from the same module. Is it possible? (Note that apart from mocking bbb() for aaa(), I also have to be able to use the original bbb() in order to test it.)
My closest attempt was to use -Wl,-wrap,xxx, but the problem is that I haven't found a way to automate this (a wildcard or something?). I will have almost 100 test files, each containing several functions to test; I cannot afford to list every function manually in the makefile.
I never test "private" functions in an atomic way. I usually unit-test a c-module using its public functions and checking its calls to other modules (using mocks through dependency injection) and checking its private data members (by exposing its private data members with a GetDataPtr()-function that is compiled only for the unit test project).
For me that the best tradeof between effort and complexity of the unit test framework, although it not possible to reach 100% statement coverage in some "private" functions.
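A sketch of the test-build-only accessor (the state struct and the UNIT_TEST macro are illustrative):

/* my_module.c */
typedef struct {
    int retries;
    int last_error;
} module_state_t;

static module_state_t s_state;

#ifdef UNIT_TEST
/* compiled only into the unit-test build; production code never sees it */
const module_state_t *module_GetDataPtr(void)
{
    return &s_state;
}
#endif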
I'm a C++ developer and when it comes to testing, it's easy to test a class by injecting dependencies, overriding member functions, and so on, so that you can test edge cases easily. However, in C, you can't use those wonderful features. I'm finding it hard to add unit tests to code because of some of the 'standard' ways that C code is written. What are the best ways to tackle the following:
Passing around a large 'context' struct pointer:
void some_func( global_context_t *ctx, .... )
{
    /* lots of code, depending on the state of context */
}
No easy way to test failure on dependent functions:
void some_func( .... )
{
    if (!get_network_state() && !some_other_func()) {
        do_something_func();
        ....
    }
    ...
}
Functions with lots of parameters:
void some_func( global_context_t *, int i, int j, other_struct_t *t, out_param_t **out, ...)
{
    /* hundreds and hundreds of lines of code */
}
Static or hidden functions:
static void foo( ... )
{
    /* some code */
}

void some_public_func( ... )
{
    /* call static functions */
    foo( ... );
}
In general, I agree with Wes's answer - it is going to be much harder to add tests to code that isn't written with tests in mind. There's nothing inherent in C that makes it impossible to test - but, because C doesn't force you to write in a particular style, it's also very easy to write C code that is difficult to test.
In my opinion, writing code with tests in mind will encourage shorter functions, with few arguments, which helps alleviate some of the pain in your examples.
First, you'll need to pick a unit testing framework. There are a lot of examples in this question (though sadly a lot of the answers are C++ frameworks - I would advise against using C++ to test C).
I personally use TestDept, because it is simple to use, lightweight, and allows stubbing. However, I don't think it is very widely used yet. If you're looking for a more popular framework, many people recommend Check - which is great if you use automake.
Here are some specific answers for your use cases:
Passing around a large 'context' struct pointer
For this case, you can build an instance of the struct with the preconditions manually set, then check the state of the struct after the function has run. With short functions, each test will be fairly straightforward.
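Something like this (the struct fields and the function body are invented just to show the shape of such a test):

#include <assert.h>
#include <stdbool.h>

/* stand-in for the question's global_context_t */
typedef struct {
    bool connected;   /* precondition */
    int  tx_count;    /* observable effect */
} global_context_t;

/* imagined unit under test: transmits once when connected */
static void some_func(global_context_t *ctx)
{
    if (ctx->connected)
        ctx->tx_count++;
}

static void test_some_func_transmits_when_connected(void)
{
    global_context_t ctx = { .connected = true, .tx_count = 0 };
    some_func(&ctx);
    assert(ctx.tx_count == 1);   /* check the struct's state afterwards */
}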
No easy way to test failure on dependent functions
I think this is one of the biggest hurdles with unit testing C.
I've had success using TestDept, which allows run time stubbing of dependent functions. This is great for breaking up tightly coupled code. Here's an example from their documentation:
void test_stringify_cannot_malloc_returns_sane_result() {
    replace_function(&malloc, &always_failing_malloc);
    char *h = stringify('h');
    assert_string_equals("cannot_stringify", h);
}
Depending on your target environment, this may or may not work for you. See their documentation for more details.
Functions with lots of parameters
This probably isn't the answer you're looking for, but I would just break these up into smaller functions with fewer parameters. Much much easier to test.
Static or hidden functions
It's not super clean, but I have tested static functions by including the source file directly, which makes the static functions callable from the test. Combined with TestDept for stubbing out anything not under test, this works fairly well.
#include "implementation.c"
/* Now I can call foo(), defined static in implementation.c */
A lot of C code is legacy code with few tests - and in those cases, it is generally easier to add integration tests that test large parts of the code first, rather than finely grained unit tests. This allows you to start refactoring the code underneath the integration test to a unit-testable state - though it may or may not be worth the investment, depending on your situation. Of course, you'll want to be able to add unit tests to any new code written during this period, so having a solid framework up and running early is a good idea.
If you are working with legacy code, Working Effectively with Legacy Code by Michael Feathers is great further reading.
That was a very good question designed to lure people into believing that C++ is better than C because it's more testable. However, it's hardly that simple.
Having written lots of testable C++ and C code, and an equally impressive amount of untestable C++ and C code, I can confidently say you can write crappy untestable code in both languages. In fact, the majority of the issues you present above are just as problematic in C++. E.g., lots of people write non-object-encapsulated functions in C++ and use them inside classes (see the extensive use of C++ static functions within classes, such as MyAscii::fromUtf8() type functions).
And I'm quite sure that you've seen a gazillion C++ class functions with too many parameters. And if you think a function is better just because it only takes one parameter, consider that internally it is frequently supplementing the passed-in parameter with a bunch of member variables. Not to mention "static or hidden" functions (hint: remember the "private:" keyword?) being just as big of a problem.
So the real answer to your question isn't "C is worse for exactly the reasons you state" but rather "you need to architect it properly in C, just as you would in C++". For example, if you have dependent functions, put them in a different file and, when testing the calling function, implement a bogus version that can produce each of the possible answers the real one might return. And that's the barely-getting-by change. Don't make functions static or hidden if you want to test them.
The real problem is that you seem to state in your question that you're writing tests for someone else's library that you didn't write and architect for proper testability. However, there are a ton of C++ libraries that exhibit the exact same symptoms and if you were handed one of them to test, you'd be just as equally annoyed.
The solution to all problems like this is always the same: write the code properly and don't use someone else's improperly written code.
When unit testing C you normally include the .c file in the test so you can first test the static functions before you test the public ones.
If you have complex functions and you want to test code calling them then it is possible to work with mock objects. Take a look at the cmocka unit testing framework which offers support for mock objects.
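For instance, a cmocka-based test might look roughly like this (a sketch: get_network_state() is a stand-in dependency, and the test is assumed to be linked with -Wl,--wrap=get_network_state so the wrapper below intercepts the real call):

#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

/* intercepts get_network_state() when linked with --wrap */
int __wrap_get_network_state(void)
{
    return (int)mock();   /* returns whatever the test queued with will_return() */
}

static void test_offline_path(void **state)
{
    (void)state;
    will_return(__wrap_get_network_state, 0);  /* force the "no network" branch */
    /* call the code under test here and assert on its behaviour */
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_offline_path),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}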
I'm working on an embedded C project that depends on some external hardware. I wish to stub out the code accessing these parts, so I can simulate the system without using any HW. Until now I have used some macros, but this forces me to change my production code a little, which I would like to avoid.
Example:
stub.h

#ifdef _STUB_HW
#define STUB_HW(name) Stub_##name
#else /* _STUB_HW */
#define STUB_HW(name) name
#endif /* _STUB_HW */

my_hw.c

WORD STUB_HW(clear_RX_TX)()
{
    /* clear my rx/tx buffer on target HW */
}

test_my_hw.c

#ifdef _STUB_HW
WORD clear_RX_TX()
{
    /* simulate clearing the rx/tx buffer on target HW */
}
#endif /* _STUB_HW */
With this code I can turn stubbing on and off with the preprocessor symbol _STUB_HW.
Is there a way to accomplish this without having to change my production code, and while avoiding a lot of ifdefs? I also won't mix production and test code in the same file if I can avoid it. I don't care how the test code looks as long as I can keep as much as possible out of the production code.
Edit:
It would be nice if it were possible to select/rename functions without replacing the whole file, e.g. to take all functions whose names start with nRF_## and give them a new name, and then substitute test_nRF_## for nRF_## if that is possible.
I just make two files, ActualDriver.c and StubDriver.c, containing exactly the same function names. By making two builds linking the production code against the different objects, there are no naming conflicts. This way the production code contains no testing or conditional code.
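In outline (reusing the question's WORD type and clear_RX_TX() name; the typedef is an assumption standing in for the SDK's own definition):

/* driver.h -- the common interface both builds compile against */
typedef unsigned short WORD;   /* assumption; use the real definition */
WORD clear_RX_TX(void);

/* ActualDriver.c -- linked into the production build */
WORD clear_RX_TX(void)
{
    /* clear the rx/tx buffers on the target HW */
    return 0;
}

/* StubDriver.c -- linked into the simulation/test build */
WORD clear_RX_TX(void)
{
    /* record the call and pretend the buffers were cleared */
    return 0;
}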
As Gerhard said, use a common header file "driver.h" and separate hardware layer implementation files containing the actual and stubbed functions.
In Eclipse, I have two targets, and I "exclude from build" the driver.c file that is not to be used, making sure the proper one is included in the build. Eclipse then generates the makefile at build time.
Another issue to point out is to ensure you are defining fixed size integers so your code behaves the same from an overflow perspective. (Although from your code sample I can see you are doing that.)
I agree with the above. The standard solution to this is to define an opaque abstracted set of function calls that are the "driver" to the hw, and then call that in the main program. Then provide two different driver implementations, one for hw, one for sw. The sw variant will simulate the IO effect of the hw in some appropriate way.
Note that if the goal is at a lower level, i.e., writing code where each hardware access is to be simulated rather than entire functions, it might be a bit trickier. But here, different "write_to_memory" and "read_from_memory" functions (or macros, if speed on target is essential) could be defined.
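For example (a sketch; the names follow the paragraph above, and the SIMULATION switch is illustrative):

#include <stdint.h>

#ifdef SIMULATION
/* backed by a software model of the peripheral in the simulator build */
uint32_t read_from_memory(uintptr_t addr);
void write_to_memory(uintptr_t addr, uint32_t value);
#else
/* direct register access on target; macros keep it zero-overhead */
#define read_from_memory(addr)       (*(volatile uint32_t *)(addr))
#define write_to_memory(addr, value) (*(volatile uint32_t *)(addr) = (value))
#endif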
There is no need in either case to change the names of functions; just have two different batch files, makefiles, or IDE build targets (depending on what tools you are using).
Finally, in many cases a better technical solution is to go for a full-blown target system simulator, such as Qemu, Simics, SystemC, CoWare, VaST, or similar. This lets you run the same code all the time; instead of different builds, you build a model of the hardware that behaves like the actual hardware from the perspective of the software. It does take a much larger up-front investment, but for many projects it is well worth the effort. It basically gets rid of the nasty issue of having different builds for target and host, and makes sure you always use your cross-compiler with deployment build options. Note that many embedded compiler suites come with some basic simulation capability of this kind built in.