CMockery Mock, Duplicate Symbol Error - c

I'm having a problem with CMockery mocks: I'm getting duplicate symbol errors.
The implementation of the code is quite long, so it's in a Gist here.
The Gist includes the test (.c), implementation (.c) and header file; the project is built with CMake and tested with CTest, using CMockery.
The actual error is:
ld: duplicate symbol _wit_configuration_file_path in ../libwatchedit.a(configuration.c.o) and CMakeFiles/libwatcheditTest.dir/configuration_test.c.o for architecture x86_64
The work-around I was able to come up with was to declare char *wit_configuration_file_path() as static. As its implementation is in the same file as the implementation of int wit_load_configuration(wit_configuration config), I expected that to work, and it does in fact compile and link cleanly. Unfortunately, though, and likely as a side-effect of declaring wit_configuration_file_path() as static, it never uses the mock.
The Google examples for CMockery are too contrived, and do not explain how one should deal with this.
It's also possible that it would be smarter, and easier to test, to declare the function not as:
int wit_load_configuration(wit_configuration config);
but rather as:
int wit_load_configuration(char* filepath, wit_configuration config);
In which case I wouldn't need to mock or stub anything; but I suspect the problem will come back to bite me, as I expect I'll need to mock something in the future (otherwise, how could one write comprehensive unit tests?).
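For what it's worth, with that signature the test could just point the function at a fixture file directly, something like this (sketch; the fixture path and the assumption that 0 means success are mine):
// configuration_test.c (sketch)
#include <assert.h>
#include "configuration.h"

void test_load_configuration_from_fixture(void)
{
    wit_configuration config;   // however the header defines/initialises it
    // no mock or stub needed: the path is injected by the test
    assert(wit_load_configuration("test/fixtures/watchedit.conf", config) == 0);
}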
1: How should I do this properly? Declaring the function static means it never uses my mock; declaring it without static causes duplicate symbol errors.
2: Should I change the design of my API? It would work for this case, but I'd like to know how to mock a function properly.
3: Is it a mistake to link my tests against my whole library? I'm using CMake, with the line target_link_libraries(libwatcheditTest watchedit) in my test's CMakeLists.txt.
Update: I added some more build output here to help with diagnosis.

You are trying to mock out *wit_configuration_file_path*, which is in the same source file as the function under test, *wit_load_configuration*. This is not possible: the linker sees both the original implementation of wit_configuration_file_path and the mocked version.
CMockery has a pretty coarse-grained scope for mocking. In the end, you can only mock out complete source files. If two functions are in the same source file, they are either both mocked out or both part of the "System under Test".
What to do in this case? Either don't mock out *wit_configuration_file_path*, or place it in a different source file. As *wit_load_configuration* and *wit_configuration_file_path* are so closely related, not mocking the function out is probably the best solution.
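If you do split it out, the mock itself could look roughly like this. This is a sketch only: it assumes CMockery's will_return()/mock() mechanism, that the names match the Gist, and it only links cleanly because the real wit_configuration_file_path lives in a source file that is not compiled into the test binary.
// configuration_test.c (sketch; names taken from the question's Gist)
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmockery.h>
#include "configuration.h"

// Test double. This only links if the real wit_configuration_file_path
// is NOT compiled/linked into the test binary.
char *wit_configuration_file_path(void)
{
    return (char *) (uintptr_t) mock();
}

static void test_load_configuration_uses_path(void **state)
{
    (void) state;
    wit_configuration config;   // however the Gist's header defines it
    will_return(wit_configuration_file_path, (uintptr_t) "fixtures/watchedit.conf");
    // assumes 0 means success; adjust to the real return convention
    assert_int_equal(wit_load_configuration(config), 0);
}

int main(void)
{
    const UnitTest tests[] = { unit_test(test_load_configuration_uses_path) };
    return run_tests(tests);
}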

Related

Enable mocking for unit testing a library in C

In our environment we're encountering a problem regarding mocking functions for our library unit tests.
The thing is that instead of mocking whole modules (.c files) we would like to mock single functions.
The library is compiled to an archive file and linked statically to the unit test. Without mocking there isn't any issue.
Now, when trying to mock single functions of the library, we obviously get multiple definitions.
My approach now is to use the weak function attribute when compiling/linking the library so that the linker takes the mocked (non-weak) function when linking against the unit test. I already tested it and it seems to work as expected.
The downside of this is that we need many attribute declarations in the code.
My final approach would be to pass some compile or link arguments to the compiler so that every function is automatically declared as a weak symbol.
The question now is: Is there anything to do this in a nice way?
btw: We use clang 8 as a compiler.
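For reference, here is a minimal sketch of the weak-attribute workaround described above (function names are invented; the attribute works with both clang and GCC):
/* library code, compiled into the archive */
__attribute__((weak)) int read_sensor(int channel)
{
    /* ... real implementation ... */
    return 0;
}

/* unit test: a normal (strong) definition overrides the weak one at link time */
int read_sensor(int channel)
{
    (void) channel;
    return 42;   /* canned value for the test */
}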
James Grenning describes several options to solve this problem (http://blog.wingman-sw.com/linker-substitution-in-c-limitations-and-workarounds). The option "function pointer substitution" gives a high degree of freedom. It works as follows: Replace functions by pointers to functions. The function pointers are initialized to point to the original function, but each pointer can be redirected individually to a test double.
This approach allows you to have a single test executable where you can still decide, for each test case individually, which functions use a test double and which use the original implementation.
It certainly also comes at a price:
One indirection for each call. But if you use link-time optimization, the optimizer will most likely eliminate that indirection again, so this may not be an issue.
You also make it possible to redirect function calls in production code. That would certainly be a misuse of the concept, however.
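A minimal sketch of that function-pointer substitution (all names are made up for illustration):
/* adc.h -- the seam: callers invoke read_adc() through a pointer */
typedef int (*read_adc_fn)(void);
extern read_adc_fn read_adc;
int read_adc_real(void);

/* adc.c -- production default */
int read_adc_real(void)
{
    /* ... talk to the hardware ... */
    return 0;
}
read_adc_fn read_adc = read_adc_real;

/* in a test case: redirect just this one function */
static int read_adc_fake(void)
{
    return 1234;   /* canned value */
}

void test_something(void)
{
    read_adc = read_adc_fake;   /* install the test double */
    /* ... exercise the code under test, which calls read_adc() ... */
    read_adc = read_adc_real;   /* restore for the next test case */
}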
I would suggest using VectorCAST
https://www.vector.com/us/en/products/products-a-z/software/vectorcast/
I've used Unity/CMock and others for unit testing C in the past, but after a while it's very tedious to manually create these for a language that isn't really built around that concept and is very much a "here's a hammer and chisel, the world is yours" approach.
VectorCAST abstracts away the majority of the manual work that is required with tools like Unity/CMock; we can get results across a project/module sooner and quicker than we did in the past with the other tools.
Is VectorCAST expensive and very much an enterprise-level tool? Yes... but it's definitely worth its weight in gold. And that's coming from someone who is very old school, with a manual approach to software development... just text editors, terminals and command-line debuggers.
VectorCAST handles function pointers and pointers extremely well, and stubbing functions is as easy as two clicks. It saved our team a lot of time... allowing us to focus on results and reducing the feedback loop of development.

Test embedded code by replacing static symbols at compile time

Background
I'm building a C application for an embedded Cortex M4 TI-RTOS SYS/BIOS target, however this question should apply to all embedded targets where a single binary is loaded onto some microprocessor.
What I want
I'd like to do some in situ regression tests on the target where I just replace a single function with some test function instead. E.g. a GetAdcMeasurement() function would return predefined values from a read-only array instead of doing the actual measurement and returning that value.
This could of course be done with a mess of #ifndefs, but I'd rather keep the production code as untouched as possible.
My attempt
I figure one way to achieve this would be to have duplicate symbol definitions at the linker stage, and then have the linker prioritise the definitions from the test suite (given some #define).
I've looked into using LD_PRELOAD, but that doesn't really seem to apply here (since I'm using only static objects).
Details
I'm using TI Code Composer, with TI-RTOS & SYS/BIOS on the Sitara AM57xx platform, compiling for the M4 remote processor (denoted IPU1).
Here's the path to the compiler and linker
/opt/ti/ccsv7/tools/compiler/ti-cgt-arm_16.9.6.LTS/bin/armcl
One solution could be to have two .c files for each module, one with the production code and one with the test code, and compile and link with one of the two. The globals and function signatures in both .c files must match (at least: the test version may contain more symbols, but not fewer).
Another solution, building on the previous one, is to have two libraries, one with the production code and one with the test code, and link with one of the two. You could even link with both libraries, with the test version first, as linkers often resolve symbols in the order they are encountered.
And, as you said, you could work with a bunch of #ifdefs, which would have the advantage of having just one .c file, but would make the code less readable.
I would not go for #ifdefs at the function level, i.e. defining just one function of a .c file for test and keeping the others as is; however, if necessary, it could be a way. And if necessary, you could have one (or two) .c files for each function, but that would negate the module concept.
I think the first approach would be the cleanest.
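As an illustration of the first approach, reusing the GetAdcMeasurement example from the question (adcdata_t is assumed to be a plain integer type here):
/* adc.h -- the shared interface; both implementations must match it */
typedef int adcdata_t;
adcdata_t GetAdcMeasurement(void);

/* adc.c -- production implementation, compiled into the release build */
adcdata_t GetAdcMeasurement(void)
{
    /* ... perform the real measurement ... */
    return 0;
}

/* adc_test.c -- test implementation, compiled into the test build instead */
static const adcdata_t canned_values[] = { 100, 200, 300 };

adcdata_t GetAdcMeasurement(void)
{
    static unsigned i;
    return canned_values[i++ % (sizeof canned_values / sizeof canned_values[0])];
}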
One additional approach (apart from Paul Ogilvie's) would be to have your mocking header also create a define which will replace the original function symbol at the pre-processing stage.
I.e. if your mocking header looks like this:
// mock.h
#ifdef MOCKING_ENABLED
adcdata_t GetAdcMeasurement_mocked(void);
stuff_t GetSomeStuff_mocked(void);
#define GetAdcMeasurement GetAdcMeasurement_mocked
#define GetSomeStuff GetSomeStuff_mocked
#endif
Then whenever you include the file, the preprocessor will replace the calls before it even hits the compiler:
#include "mock.h"

void SomeOtherFunc(void)
{
    // the preprocessor will change this symbol into 'GetAdcMeasurement_mocked'
    adcdata_t data = GetAdcMeasurement();
}
The approach might confuse the unsuspecting reader of your code, because they won't necessarily realize that you are calling a different function altogether. Nevertheless, I find this approach to have the least impact on the production code (apart from adding the include, obviously).
(This is a quick sum-up of the discussion in the comments; thanks for the answers.)
A function can be redefined if it has the weak attribute, see
https://en.wikipedia.org/wiki/Weak_symbol
On GCC that would be the weak attribute, e.g.
int __attribute__((weak)) power2(int x);
and on the armcl (as in my question) that would be the pragma directive
#pragma weak power2
int power2(int x);
Making the production code consist partly of weak functions allows a test framework to replace single functions.

How do I replace a function by a mock implementation when unit-testing a shared library?

I am having the opposite problem to the one solved in Can you compile a shared object to prefer local symbols even if it's being loaded by a program compiled with -rdynamic?
Using the naming from the linked question, I have a dynamic library where baz calls bar, and I have a test binary exercising the library, which substitutes its own fake implementation of bar for test purposes. This works fine on Linux, because -rdynamic is used to link.
The source of the test is https://github.com/apache/qpid-dispatch/blob/b172f501028b36d786b4c83bcee1e195cd17fcf2/tests/timer_test.c. The functions being mocked are, among others, qd_server_timeout and qd_timer_now (the latter is the inlined one, see comments).
I am at a loss how to achieve the same on macOS. What are the correct linker options there?
For now, it is resolved by adding __attribute__((noinline)) to bar (commit), and linking with -Wl,-flat_namespace (commit). The -export_dynamic was actually not needed.
I am still looking for alternative solutions.
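For reference, the attribute half of that workaround is just an annotation on the library's definition (a sketch using the bar naming from above; the other half is the -Wl,-flat_namespace link flag mentioned in the commits):
/* in the shared library: keep bar out of line so the test binary's
 * own definition of bar() can interpose the library's copy */
__attribute__((noinline)) int bar(void)
{
    return 1;   /* the library's real implementation */
}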

Override a C function defined in a static library

I have a static library of C files, compiled with g++ on Cygwin. I wish to unit test one function that is defined in the library. That function calls another function defined in that library and I wish to override the dependency to replace it with my own version of that function. I can't modify what's in the static library, so this solution [ Override a function call in C ] doesn't apply.
Usually, I can write a .cpp file and include the .c file containing the function I want to unit test, which essentially extends that file with the code I add. It's a dirty trick I'd never use for production code but it's handy for unit testing C files, because it gives my test code access to static things in that C file. Then, I can write in my fake version of the dependency, and my unit test function that calls the function I'm testing. I compile my.cpp to get my.o, then link it with the static library. In theory, since the linker has found a definition for the dependency already (the one I provide) it won't look in the library and there will be no conflict. Usually this works, but now I'm getting a "multiple definition" error where the linker first finds my fake and then finds the real one. I don't know what might cause this and don't know what to look for. I also can't boil this down to a simple example because my simple examples don't have this problem.
Ideas please?
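For concreteness, here is a sketch of that include-the-.c trick, with invented names (the question does it from a .cpp file, but the shape is the same):
/* test_widget.c */
#include <assert.h>

/* Pull the implementation file under test into this translation unit,
 * which also makes its static functions and variables visible here. */
#include "widget.c"

/* Fake version of the dependency that widget.c calls. As long as the
 * linker never needs the library's copy, there should be no conflict. */
int read_config_value(const char *key)
{
    (void) key;
    return 7;
}

int main(void)
{
    /* widget_compute() comes from the included widget.c and should now
     * see the fake dependency above */
    assert(widget_compute() == 7);
    return 0;
}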
One possibility (admittedly an ugly one, but...) is to extract the individual object files from the static library. If the function you're calling and the function it's calling are in separate object files, you can link against the object file containing the function you need to call, but not against the one containing the function it calls.
This only gives you granularity at the level of complete object files, though, so if the two functions involved are in the same object file, it won't work. If you really need to get things working, and don't mind making a really minor modification to the object file in question, you may be able to use a binary editor to mark the second function as a weak external, which means it'll be used in the absence of any other external with the same name, but if another is provided, that will be used instead.
Whether that latter qualifies as "modifying the library" or not depends a bit on your viewpoint. It's not modifying the code in the library, but is modifying a bit of the object file wrapper around that code. My guess is that you'd rather not do it, but it may still be the cleanest way out of an otherwise untenable situation.
It turns out the reason the linker found both definitions of the function is that the faked function's source file defined a variable which is extern'ed in its header file. That unresolved external in the header file caused the linker to link the faked function's object file (the whole thing) to the tested function's file inside the library. So, it's impossible to extract the definition of the tested function without the definition of the dependency.
What I ended up doing was similar to Override a function call in C where I used a different function name instead of the same one, and a preprocessor directive to swap the two. I put the preprocessor directive and the fake function in a separate file which can be included in a unit test, so the production code in the library does not have to be touched. Plus, if I want to fake that same function for another unit test somewhere else, I can re-use the new file.
Depending on your platform and performance requirements, you might be able to use pin to dynamically modify the application and replace one function with another at runtime.
There's no direct example in the manual, but you could easily modify one of the sample pin tools to do this.

Error While Linking Multiple C Object files in Delphi 2007

I am new to Delphi. I was trying to add C object files to my Delphi project and link them directly, since Delphi supports C object linking. I got it working when linking a single object file, but when I try to link multiple object files, I get the error 'Unsatisfied forward or external declaration'. I have tried this in Delphi 2007 as well as XE. So what am I doing wrong here?
Working Code:
function a_function():Integer;cdecl;
implementation
{$Link 'a.obj'}
function a_function():Integer;cdecl;external;
end.
Error Code:
function a_function():Integer;cdecl;
function b_function():Integer;cdecl;
function c_function():Integer;cdecl;
implementation
{$LINK 'a.obj'}
{$LINK 'b.obj'}
{$LINK 'c.obj'}
function a_function():Integer;cdecl;external;
function b_function():Integer;cdecl;external;
function c_function():Integer;cdecl;external;
end.
As an aside, the article linked by @vcldeveloper has a good explanation of some of the common issues. The trick of providing missing C RTL functions in Pascal code is excellent and much quicker than trying to link in the necessary functions as C files, or even as .obj files.
However, I have a suspicion that I know what is going on here. I use this same approach but in fact have over 100 .obj files in the unit. I find that when I add new ones, I get the same linker error as you do. The way I work around this is to try re-ordering my $LINK instructions. I try to add the new obj files one by one and I have always been able, eventually, to get around this problem.
If your C files are totally standalone then you could put each one in a different unit and the linker would handle that. However, I doubt that is the case and indeed I suspect that if they really were standalone then this problem would not occur. Also, it's desirable to have the $LINK instructions in a single unit so that any RTL functions that need to be supplied can be supplied once and once only (they need to appear in the same unit as the $LINK instructions).
This oddity in the linker was present in Delphi 6 and is present in Delphi 2010.
EDIT 1: The realisation has now dawned on me that this issue is probably due to Delphi using a single pass compiler. I suspect that the "missing external reference" error is because the compiler processes the .obj files in the order in which they appear in the unit.
Suppose that a.obj appears before b.obj and yet a.obj calls a function b() in b.obj. The compiler wouldn't know where b() resides at the point where it needs to fix up the function call. When I find the time, I'm going to try to test whether this hypothesis is at the very least plausible!
Finally, another easy way out of the problem would be to combine a.c, b.c and c.c into a single C file which would I believe bypass this issue for the OP.
Edit 2: I found another Stack Overflow question that covers this ground: stackoverflow.com/questions/3228127/why-does-the-order-of-linked-object-file-with-l-directive-matter
Edit 3: I have found another truly wonderful way to work around this problem. Every time the compiler complains
[DCC Error] Unit1.pas(1): E2065 Unsatisfied forward or external declaration: '_a'
you simply add, in the implementation section of the unit, a declaration like so:
procedure _a; external;
If it is a routine that you wish to call from Delphi then you clearly need to get the parameter list, calling conventions etc. correct. Otherwise, if it is a routine internal to the external code, then you can ignore the parameter list, calling conventions etc.
To the best of my knowledge this is the only way to import two objects that refer to each other in a circular manner. I believe that declaring an external procedure in this way is akin to making a forward declaration. The difference is that the implementation is provided by an object rather than Pascal code.
I've now been able to add a couple more tools to my armory – thank you for asking the question!
