Check Unit Test - Separating Test Suites Into Different Files - C

I have a small library that uses helper .c files to do various tasks for the API. I would like to separate the test suites that test each component into different files. Is there a way to do this in Check?
For instance, if I had a Money library (as in the example), I might want to put the currency-conversion test suite in its own file (check_convert_currency.c) and the tests for creating, tracking, etc. in a different test suite (check_manipulate_money.c). I would like to run all the test suites from check_money.c.
I think the best way to do this would be to create the .c files and headers for the above, include them in check_money.c, and add all the test suites to the suite runner in main.
I would like to do this to keep the test files readable. If there is a better method or approach to achieving this goal, I am open to learning about it.

One approach is to have one makefile target per test file. That way you get multiple test executables covering different aspects of the same unit under test, so you would build and run separate executables such as check_convert_currency and check_manipulate_money.
If you want a single executable for all tests, then you can put header-only implementations of the suites in check_manipulate_money.h and check_convert_currency.h and include them from check_money.c.
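For example, here is a minimal sketch of that single-executable layout (the suite-factory function names are illustrative, not part of Check's API; include guards omitted for brevity): each header builds and returns its Suite, and main() adds every suite to one SRunner with srunner_add_suite().
/* check_convert_currency.h -- header-only suite (illustrative names) */
#include <check.h>

START_TEST(test_usd_to_eur)
{
    ck_assert_int_eq(1, 1);   /* replace with real calls into the library */
}
END_TEST

static Suite *convert_currency_suite(void)
{
    Suite *s  = suite_create("ConvertCurrency");
    TCase *tc = tcase_create("Core");
    tcase_add_test(tc, test_usd_to_eur);
    suite_add_tcase(s, tc);
    return s;
}

/* check_money.c -- one runner drives every suite */
#include <stdlib.h>
#include <check.h>
#include "check_convert_currency.h"
#include "check_manipulate_money.h"   /* defines manipulate_money_suite() the same way */

int main(void)
{
    SRunner *sr = srunner_create(convert_currency_suite());
    srunner_add_suite(sr, manipulate_money_suite());   /* more suites, same executable */
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
With this layout you still get one pass/fail summary while keeping each component's tests in its own file.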
Thanks

Related

Ceedling testing multiple build variants

I have Ceedling working well on a particular code base. But this code base can be compiled with different options based on #defines: for example, adding -DSUPPORT_FLOAT or -DSUPPORT_HEX will compile the code differently. I would like to be able to run the whole test suite on each build variant; that is, compile with one define, test all, compile with a different define, test all, etc.
I have not been able to find/understand how to do this from the documentation.
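To make the problem concrete, here is the kind of unit such variants produce (SUPPORT_FLOAT and SUPPORT_HEX are used as hypothetical switches); since each define yields a genuinely different binary, every variant needs its own full test pass:
/* format_value.c -- hypothetical unit whose behavior changes per build define */
#include <stdio.h>

void format_value(char *buf, size_t len, int raw)
{
#if defined(SUPPORT_FLOAT)
    snprintf(buf, len, "%.2f", raw / 100.0);   /* -DSUPPORT_FLOAT build */
#elif defined(SUPPORT_HEX)
    snprintf(buf, len, "0x%X", raw);           /* -DSUPPORT_HEX build */
#else
    snprintf(buf, len, "%d", raw);             /* default build */
#endif
}
One workable setup, offered as an assumption rather than documented Ceedling practice: keep one project configuration per variant (each adding its define to the test build) and script a clean full run, e.g. `ceedling clobber test:all`, once per variant; test:all and clobber are standard Ceedling tasks.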

Unit test large projects with Ceedling

I've used Ceedling in the past on bare-metal projects that don't have many vendor libraries, so creating unit tests and mocking dependencies was pretty easy.
Now I'm trying to integrate Ceedling on a very large project. On this project I'm implementing an application on top of an existing OS and set of libraries provided by some manufacturer. I have the source code for most of it, but not all of it, so my control over those libraries is limited, especially when it reaches low level modules like the scheduler and communications with the other CPU.
So... I decided to give it a go by implementing a very simple unit test that only has one dependency, which I tried to mock. It turned out to be a nightmare right off the bat. That file ended up including a bunch of unrelated modules, so my idea was to mock all those modules until Ceedling stopped complaining. However, I hit a wall when I saw that some of them are automatically generated at compile time in a very obscure way by the ADK.
My question, then, is: how does one unit test this kind of project? I feel that I'm missing something very fundamental when dealing with this type of project, but I can't figure out what.
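For what it's worth, the per-dependency mechanics in Ceedling/CMock look like this (module and function names below are hypothetical): including the generated mock_*.h header replaces the real module, and the test states its expectations before calling the unit under test.
/* test_app_task.c -- minimal Ceedling/CMock-style test (illustrative names) */
#include "unity.h"
#include "mock_scheduler.h"   /* generated by CMock from scheduler.h */
#include "app_task.h"         /* the unit under test */

void setUp(void) {}
void tearDown(void) {}

void test_app_task_start_registers_with_scheduler(void)
{
    /* expect exactly one call and stub its return value */
    scheduler_register_ExpectAndReturn(5 /* priority */, 0);
    TEST_ASSERT_EQUAL_INT(0, app_task_start());
}
That only works for headers you control or can preprocess, though; for modules generated at compile time, a common fallback is to wrap them behind a thin interface header you own and mock that instead.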

Unity test on embedded systems to web output

I'm currently working on an embedded device which is able to run unit tests, thanks to the Unity framework. I send the output of these tests to my computer with a J-Link and SEGGER_RTT. The question is, how can I make a web report from the Unity output?
The best lead I have found was to transform the Unity output to JUnit format, in order to have more libraries to work with. The problem is that I haven't found the best approach to take with that JUnit output. The idea is to have almost nothing to install, to be able to run the tests on a new computer, and to have an ergonomic, modern web UI for reviewing test results quickly. The best library I found was Allure (https://github.com/allure-framework/allure2), but I was wondering whether it was the best approach (many things to install and to do before I have anything).
Thomas, have you looked at Ceedling (from the same people that make Unity)? Check out the plugins for it at https://github.com/ThrowTheSwitch/Ceedling/tree/master/plugins some of which allow the format of the test output to be adjusted.
Basically, Ceedling provides a Ruby build system for Unity with lots of added features like mock generation and the plugin structure - you only need to use the bits you want, though.
One of the plugins, gcov, also generates test coverage information, which Ceedling can use to produce an HTML test coverage report.
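If you do go the Unity-to-JUnit route, a host-side converter can be tiny. The sketch below assumes Unity's default result lines (file.c:LINE:test_name:PASS, or ...:FAIL: message); verify that against your actual RTT capture before relying on it, and note it does no XML escaping.
/* unity2junit.c -- convert captured Unity output lines to JUnit XML (sketch) */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    char body[65536] = "";   /* accumulated <testcase> elements */
    int tests = 0, failures = 0;

    while (fgets(line, sizeof line, stdin)) {
        char *file   = strtok(line, ":");
        char *lnum   = strtok(NULL, ":");
        char *name   = strtok(NULL, ":");
        char *result = strtok(NULL, ":\r\n");
        char *msg    = strtok(NULL, "\r\n");
        if (!file || !lnum || !name || !result)
            continue;                          /* not a result line */
        tests++;
        char tc[1024];
        if (strcmp(result, "PASS") == 0) {
            snprintf(tc, sizeof tc,
                     "  <testcase classname=\"%s\" name=\"%s\"/>\n", file, name);
        } else if (strcmp(result, "IGNORE") == 0) {
            snprintf(tc, sizeof tc,
                     "  <testcase classname=\"%s\" name=\"%s\"><skipped/></testcase>\n",
                     file, name);
        } else {
            failures++;
            snprintf(tc, sizeof tc,
                     "  <testcase classname=\"%s\" name=\"%s\">\n"
                     "    <failure message=\"%s\"/>\n"
                     "  </testcase>\n",
                     file, name, msg ? msg : result);
        }
        strncat(body, tc, sizeof body - strlen(body) - 1);
    }
    printf("<?xml version=\"1.0\"?>\n"
           "<testsuite name=\"unity\" tests=\"%d\" failures=\"%d\">\n%s</testsuite>\n",
           tests, failures, body);
    return 0;
}
Run it as, say, `./unity2junit < rtt_capture.log > results.xml`; Allure and most CI dashboards accept JUnit XML. The Ceedling plugin list linked above is worth checking first, since an XML report plugin there would remove the need for a hand-rolled converter.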

Structuring procedural code for unit tests

If you're planning to write unit tests for a C program, what is the convention for the placement of the main function? Do you put it in its own separate file, with the functions in another file, so that you can include the functions file in a test without a conflict between two main functions? This makes sense to me, but I'm just wondering if there's a certain convention.
I ask because we have several SQR programs at work that are difficult to maintain, and I'd like to take a stab at getting them under test, but I need a way to call the functions from another file, so I figured my first step would be to take the begin-program - end-program section and stick it in a separate file.
It is good practice to use a unit testing framework such as Check, Cmocka, etc. These frameworks offer many useful test functions and macros, and provide a unit test harness which handles the reporting of failures. A framework like Check also isolates tests from each other by forking them, thus protecting the test harness from buffer overruns and preventing tests from interfering with each other.
If you do want to, or have to, roll your own, then I would put the harness code in one C module and the unit tests in a separate file. You would also want the tests to register themselves with the harness and not the other way round. You could then possibly reuse the harness code elsewhere if your platform does not have other unit test frameworks ported to it.
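A minimal sketch of that "tests register themselves" idea (this relies on GCC/Clang's __attribute__((constructor)), so treat it as toolchain-specific; all names are illustrative):
/* harness.h -- tiny self-registering test harness (illustrative) */
#ifndef HARNESS_H
#define HARNESS_H
void harness_register(const char *name, void (*fn)(void));
int  harness_run_all(void);

#define TEST(name)                                            \
    static void name(void);                                   \
    __attribute__((constructor)) static void reg_##name(void) \
    { harness_register(#name, name); }                        \
    static void name(void)
#endif

/* harness.c */
#include <stdio.h>
#include "harness.h"
#define MAX_TESTS 256
static struct { const char *name; void (*fn)(void); } tests[MAX_TESTS];
static int ntests;

void harness_register(const char *name, void (*fn)(void))
{
    if (ntests < MAX_TESTS) { tests[ntests].name = name; tests[ntests].fn = fn; ntests++; }
}

int harness_run_all(void)
{
    for (int i = 0; i < ntests; i++) {
        printf("RUN  %s\n", tests[i].name);
        tests[i].fn();               /* a real harness would trap failures here */
    }
    printf("%d tests run\n", ntests);
    return 0;
}

/* test_math.c -- tests live in their own file and self-register */
#include <assert.h>
#include "harness.h"
TEST(test_addition) { assert(1 + 1 == 2); }

/* main_test.c -- the test build's main; the production main stays in its own file */
#include "harness.h"
int main(void) { return harness_run_all(); }
Keeping the production main in its own file, as in this sketch, also answers the original placement question: the test build simply links main_test.c instead.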

API sanity autotest help needed

I am trying to auto-generate Unit Tests for my C code using API sanity autotest.
But the problem is that it is somewhat complex to use, and some tutorials, how-tos, or other resources on using it would be really helpful.
Have you had any luck with API sanity autotest?
Do you think there's a better tool that can be used to auto-generate unit tests for C code?
It is the best tool (among free solutions for Unix) for fully automatically generating smoke tests, if your library contains more than a hundred functions. Its unique feature is the ability to automatically generate reasonable input arguments for each function.
The most popular use case of this framework is a quick search for memory problems (segfaults) in a library. Historically, this framework was used to create LSB certification test suites for libraries like Qt3 and Qt4 that are too big for their test suites to be created manually in a reasonable time.
Use the following command to generate, build and execute tests:
api-sanity-checker -l name -d descriptor.xml -gen -build -run
The XML descriptor is a simple XML file that specifies the version number and the paths to headers and shared objects:
<version>
0.3.4
</version>
<headers>
/usr/local/libssh/0.3.4/include/
</headers>
<libs>
/usr/local/libssh/0.3.4/lib/
</libs>
You can improve generated tests using specialized types for input parameters.
See example of generated tests for freetype2 2.4.8.
It's a recipe for disaster in the first place. If you auto-generate unit tests, you're going to get a bunch of tests that don't mean a lot. If you have a library that is not covered in automated tests then, by definition, that library is legacy code. Consider following the conventional wisdom for legacy code...
For each change:
Pin behavior with tests (see the sketch below)
Refactor to the open-closed principle (harder to do with C but not impossible)
Drive changes for new code with tests
Also consider picking up a copy of Working Effectively with Legacy Code.
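To make "pin behavior" concrete, a characterization test freezes whatever the code does today, bugs included (the function here is a self-contained stand-in for real legacy code):
/* pin_test.c -- characterization test sketch: freeze current observed behavior */
#include <assert.h>
#include <stdio.h>

/* imagine this is the legacy function, pulled in from the real code base */
static int legacy_round_to_dollars(int cents) { return cents / 100; } /* truncates! */

int main(void)
{
    /* values captured by running the legacy code once, then frozen as expectations;
       note we deliberately pin the truncation (199 -> 1) even though it may be a bug */
    assert(legacy_round_to_dollars(100) == 1);
    assert(legacy_round_to_dollars(199) == 1);
    assert(legacy_round_to_dollars(250) == 2);
    puts("behavior pinned");
    return 0;
}
Once such tests are in place, refactoring toward testable, open-closed code can proceed without silently changing what the program does.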
EDIT:
As a result of our discussion, it has become clear that you only want to enforce some basic standards, such as how null pointer values are handled, with your generated tests. I would argue that you don't need generated tests. Instead, you need a tool that inspects a library and exercises its functions dynamically, ensuring that it meets some coding standards you have defined. I'd recommend that you write this tool yourself, so that it can take advantage of your knowledge of the rules you want enforced and the libraries being tested.
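A sketch of such a tool, under stated assumptions (POSIX fork() so a crash kills only a child process; the api_table and both stand-in functions are hypothetical): it drives every exported function with NULL inputs and flags the ones that die instead of returning an error.
/* null_check.c -- exercise each API function with NULL args in a child process
 * so a segfault is reported, not fatal (POSIX; the function list is hypothetical) */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

typedef int (*api_fn)(void *);

/* stand-ins for the library under test */
static int handles_null(void *p)    { return p ? 0 : -1; }   /* well-behaved */
static int crashes_on_null(void *p) { return *(int *)p; }    /* segfaults    */

static const struct { const char *name; api_fn fn; } api_table[] = {
    { "handles_null",    handles_null },
    { "crashes_on_null", crashes_on_null },
};

int main(void)
{
    int bad = 0;
    for (size_t i = 0; i < sizeof api_table / sizeof api_table[0]; i++) {
        pid_t pid = fork();
        if (pid == 0) {                     /* child: make the risky call */
            api_table[i].fn(NULL);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {          /* crashed instead of returning */
            printf("FAIL %s: killed by signal %d on NULL input\n",
                   api_table[i].name, WTERMSIG(status));
            bad++;
        } else {
            printf("ok   %s\n", api_table[i].name);
        }
    }
    return bad ? EXIT_FAILURE : EXIT_SUCCESS;
}
Forking per call is the same isolation trick Check uses, so one crashing function can't take down the whole report.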
