Ceedling testing multiple build variants - c

I have Ceedling working well on a particular code base. But this code base can be compiled with different options based on #defines: for example, add -DSUPPORT_FLOAT, or -DSUPPORT_HEX will compile the code differently. I would like to be able to run the whole test suite on each build variant; that is, compile with one define, test all, compile with a different define, test all, etc.
I have not been able to find/understand how to do this from the documentation.
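A sketch of one possible approach, assuming a Ceedling version that honors the CEEDLING_MAIN_PROJECT_FILE environment variable: the :defines: section of a Ceedling project file controls the compile-time defines used when building tests, so keeping one project file per variant lets a small script run the whole suite once per variant. File and variant names here are hypothetical.

# project_float.yml differs from project_hex.yml only in its :defines: block:
#
#   :defines:
#     :test:
#       - SUPPORT_FLOAT
#
# Run the full suite once per variant, cleaning between runs:
CEEDLING_MAIN_PROJECT_FILE=project_float.yml ceedling clobber test:all
CEEDLING_MAIN_PROJECT_FILE=project_hex.yml ceedling clobber test:all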

Related

Why do we use "make" command when "cc ex1.c -o ex1" compiles the code written in "exe1.c" file

I was learning C programming on Linux when I came across these lines:
$ make ex1
cc ex1.c -o ex1
My question is: why do we have to use make ex1? Isn't the cc command already building the program and the other necessary files?
With the exception of a small set of very simple programs, almost all real-life C programs are built from multiple modules, header files, and external libraries, sometimes spanning multiple folders. In some cases, additional code may be linked in using other tools (e.g., code generators).
For those cases, a single cc command is not going to work. The next step up is to automate the build with a shell script. However, a script rebuilds everything on every run, which is time-consuming, and it is almost impossible to maintain.
For building C programs, make provides many benefits on top of a simple shell build script. This is my personal top 3:
Incremental builds - when source files are modified, make can identify and execute the minimal set of build instructions instead of rebuilding the whole project. This can provide a major efficiency boost to developers.
Rule-based builds - make uses rules to produce targets. Once you define a rule (one obvious rule: compile a .c file to a .o file), it is applied consistently to all matching files.
Support for the complete build process - including installation, cleanup, packaging, testing, etc. Just as important, make can integrate (almost) any Unix tool into the build process - code generation and so on.
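To make the first two points concrete, here is a minimal hypothetical Makefile (file names invented for the sketch): one pattern rule handles every .c-to-.o compilation, and make's timestamp checks keep the build incremental.

CC = cc
CFLAGS = -Wall -O2
OBJS = main.o parse.o util.o

ex1: $(OBJS)
	$(CC) $(OBJS) -o ex1

# A .o is rebuilt only when its .c file (or the listed header) is newer
# than the existing object file.
%.o: %.c defs.h
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f ex1 $(OBJS)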
Needless to say, there are other build tools which provide additional or alternative benefits - CMake, Gradle, and SCons, to name a few.
For a one-file project, they come out to about the same. However, real-world projects tend to have tens, hundreds, or thousands of files and build tens or hundreds of libraries and executables, possibly with different settings etc. Managing all of this without storing the compilation commands somewhere would be impossible, and a Makefile is a very useful "somewhere."
Another important property of make is that it does timestamp checks on files and only re-runs the compilation/linking commands whose outputs are outdated, i.e. at least one of their inputs is newer than the outputs. Without such checks, you'd have to manually remember which files to recompile when you change something (especially difficult when the changed file is a header), or always recompile everything, increasing build times by orders of magnitude.
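A hypothetical session, continuing the Makefile sketch above, shows the effect: after editing a single source file, only that object is recompiled and the program relinked, while an untouched tree rebuilds nothing.

$ touch parse.c
$ make ex1
cc -Wall -O2 -c parse.c -o parse.o
cc main.o parse.o util.o -o ex1
$ make ex1
make: 'ex1' is up to date.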

How to determine if platform library is static or dynamic from autotools?

Configuration
I use autotools (autoreconf -iv and ./configure) to generate correct makefiles. On my development machine (Fedora) everything works correctly. For make check I use the check library, and from autotools I use Libtool. On Fedora the check library is dynamic: libcheck.so.0.0.0 or similar. It works.
The Issue
When I push the commits to my repo on github and do a pull request, the result is tested on Travis CI which uses Ubuntu as a platform. Now on Ubuntu the libcheck is a static library: libcheck.a and a libcheck_pic.a.
When Travis does a make check I get the following error message:
/usr/bin/ld: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a(check.o):
relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making
a shared object; recompile with -fPIC
/usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/../../../libcheck.a: could not read
symbols: Bad value
Which means I have to somehow let configure determine what library I need. I suspect I need libcheck_pic.a for Ubuntu and the regular libcheck.so for Fedora.
The question
Does anyone know how to integrate this into configure.ac and test/Makefile.am using libtool? I would prefer to stay in line with autotools way of life.
I couldn't find usable information using Google; there are a lot of questions out there about the difference between static and dynamic libraries, but that is not what I need.
I would much appreciate it if anyone could point me in the right direction - or has anyone maybe even solved this already?
I suspect you're right that the library you want to use on the CI system is libcheck_pic.a, for its name suggests that the routines within are compiled as position-independent code, exactly as the error message you receive suggests that you do.
One way to approach the problem, then, would be to use libcheck_pic if it is available, and to fall back to plain libcheck otherwise. That's not too hard to configure your Autotools-based build system to do. You then record the appropriate library name in an output variable, and use that in your (auto)make files.
Autoconf's AC_SEARCH_LIBS macro serves exactly this kind of prioritized library search requirement, but it has the side effect, likely unwanted in this case, of modifying the LIBS variable. You can nevertheless make it work. Something like this might do it, for example:
LIBS_save=$LIBS
AC_SEARCH_LIBS([ck_assert], [check_pic check], [
  # Optional: add a test to verify that the chosen lib really provides PIC code.
  # Set LIBCHECK to the initial substring of $LIBS up to but excluding the first space.
  LIBCHECK=${LIBS%% *}
], [
  # Or make this a warning instead, and disable your test suite when libcheck
  # is not available.
  AC_MSG_ERROR([A PIC version of libcheck is required])
])
AC_SUBST([LIBCHECK])
LIBS=$LIBS_save
I presume you know what to do with $(LIBCHECK) on the Make side.
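For instance (a sketch, with a hypothetical test program name), the substituted variable can be referenced in tests/Makefile.am like this:

# tests/Makefile.am (sketch)
check_PROGRAMS = check_mylib
check_mylib_SOURCES = check_mylib.c
check_mylib_LDADD = $(LIBCHECK)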
As written, that has the limitation that if there is no PIC version of libcheck available then you won't find out until make or maybe make check. That's undesirable, and you could add Autoconf code to detect that situation if it is undesirable enough.
As an altogether different approach, you could consider building your tests statically (add -static to the appropriate *_LDFLAGS variable). Of course, this has the opposite problem: if a static version of the library is not available, then the build or tests fail. Also, it requires building a static version of your own code if you're not doing so already, and furthermore, it is the static version that will be tested.
For greatest flexibility, you could consider combining those two approaches. You might set up a fallback from one to the other, or you might set up separate targets for testing static code and PIC code, and exercise whichever of them (possibly both) is supported by the libraries available on the build system.

Check Unit Test - Separating Test Suites Into Different Files

I have a small library that uses helper .c files to do various tasks for the API. I would like to separate the test suites that test each component into different files. Is there a way to do this in Check?
For instance, if I had a Money library (as in the example), I might want to put the currency conversion test suite in its own file (check_convert_currency.c), and the tests for creating, tracking, etc. in a different test suite (check_manipulate_money.c). I would like to run all test suites from check_money.c.
I think the best way to do this would be to create the .c files and headers for the above, include them in the check_money.c and add all test suites to the suite runner in main.
I would like to do this to keep the test files readable. If there is a better method or approach in attaining this goal, I am open to learning about it.
One approach is to have one makefile target per test file. This way you have multiple test executables covering different aspects of the same unit under test, and you compile and run each one separately - check_convert_currency and check_manipulate_money.
If you want a single executable for all tests, you can have header-only implementations of the tests in check_manipulate_money.h and check_convert_currency.h, or expose each suite from its own .c file, as you propose (see the sketch below).
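A sketch of that layout, assuming hypothetical headers check_convert_currency.h and check_manipulate_money.h that simply declare the suite-building functions:

/* check_convert_currency.c (sketch): one suite per component file */
#include <check.h>
#include "check_convert_currency.h"

START_TEST(test_usd_to_eur)
{
    /* placeholder assertion; real conversion checks would go here */
    ck_assert_int_eq(1, 1);
}
END_TEST

Suite *convert_currency_suite(void)
{
    Suite *s = suite_create("convert_currency");
    TCase *tc = tcase_create("core");
    tcase_add_test(tc, test_usd_to_eur);
    suite_add_tcase(s, tc);
    return s;
}

/* check_money.c (sketch): one runner that aggregates every suite */
#include <check.h>
#include <stdlib.h>
#include "check_convert_currency.h"
#include "check_manipulate_money.h"

int main(void)
{
    SRunner *sr = srunner_create(convert_currency_suite());
    srunner_add_suite(sr, manipulate_money_suite());
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return (failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}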
Thanks

embedded c code and unit tests without cross compile

I am starting to learn unit testing. I use Unity, and it works well with MinGW in Eclipse on Windows. I use different configurations for debug, release, and tests. This works well with the CDT plugin.
But my goal is to unit-test my embedded code for an STM microcontroller. So I use arm-gcc with the ARM GCC Eclipse plugin. I planned to have one configuration for compiling the debug and release code for the target, and one configuration using MinGW to compile and execute the tests on the PC (just the hardware-independent parts).
With the Eclipse plugin, I cannot compile code that does not use arm-gcc.
Is there a way to have one project with configurations and support for both the embedded target and the PC?
Thanks
As noted above, you need a makefile pointing at two different targets, with different compiler options depending on the target.
You will need to ensure portability in your code.
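A minimal Makefile sketch of that two-target setup (compiler names, flags, and file names are all hypothetical and depend on your part and project layout):

CROSS_CC = arm-none-eabi-gcc
HOST_CC = gcc

APP_SRCS = src/app.c
HW_SRCS = src/hw_uart.c
TEST_SRCS = test/test_app.c test/unity.c

# Firmware for the target: application plus hardware-dependent code
firmware.elf: $(APP_SRCS) $(HW_SRCS)
	$(CROSS_CC) -mcpu=cortex-m3 -mthumb $(APP_SRCS) $(HW_SRCS) -o $@

# Test runner for the PC: only the hardware-independent parts plus Unity
test_runner: $(APP_SRCS) $(TEST_SRCS)
	$(HOST_CC) -Isrc -Itest $(APP_SRCS) $(TEST_SRCS) -o $@

test: test_runner
	./test_runner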
I have accomplished this most often using CMake, outlining different compiler paths and linker flags for unit tests versus the target. This way I can also easily link in any unit-test libraries while keeping them external to my target. In the end CMake produces a Makefile, but I'm not spending time worrying about make syntax, which, while I can read it, often seems like voodoo.
Doing this entirely within a single Eclipse project is possible. You need to configure your project for multiple targets, with a different compiler used for each, and it will require some coaxing to get Eclipse to behave.
If your goal is to do it entirely within Eclipse, I suggest reading this as a primer.
If you want to go the other route, here is a CMake primer.
Short answer: Makefile.
But I guess NEON assembly is the bigger issue.
Using intrinsics instead at least leaves open the possibility of linking against a simulator library, and there are indeed a lot of such libraries, written in standard C, that allow code using intrinsics to stay portable.
However, the poor performance of GCC's NEON intrinsics forces a lot of people to sacrifice portability for performance.
If your code unfortunately contains assembly, you won't even be able to compile it before translating the assembly back to standard C.
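To illustrate (a hypothetical sketch): guarding the intrinsics with a scalar fallback keeps the same unit compilable, and therefore testable, on the PC.

/* __ARM_NEON is predefined by compilers targeting NEON-capable ARM cores. */
#ifdef __ARM_NEON
#include <arm_neon.h>
static void add4(const float *a, const float *b, float *out)
{
    /* NEON path: one 4-wide vector add */
    vst1q_f32(out, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));
}
#else
static void add4(const float *a, const float *b, float *out)
{
    /* Portable fallback, used when unit-testing on the host */
    for (int i = 0; i < 4; ++i)
        out[i] = a[i] + b[i];
}
#endif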

API sanity autotest help needed

I am trying to auto-generate Unit Tests for my C code using API sanity autotest.
But, the problem is that it is somewhat complex to use, and some tutorials / howto / other resources on how to use it would be really helpful.
Have you had any luck with API sanity autotest?
Do you think there's a better tool that can be used to auto-generate unit tests for C code?
It is the best tool (among free solutions for Unix) for fully automatically generating smoke tests when a library contains more than a hundred functions. Its unique feature is the ability to automatically generate reasonable input arguments for each function.
The most popular use case of this framework is a quick search for memory problems (segfaults) in a library. Historically, the framework was used to create LSB certification test suites for libraries as big as Qt3 and Qt4, which could not have been written manually in a reasonable time.
Use the following command to generate, build and execute tests:
api-sanity-checker -l name -d descriptor.xml -gen -build -run
The XML descriptor is a simple XML file that specifies the version number and the paths to headers and shared objects:
<version>
0.3.4
</version>
<headers>
/usr/local/libssh/0.3.4/include/
</headers>
<libs>
/usr/local/libssh/0.3.4/lib/
</libs>
You can improve generated tests using specialized types for input parameters.
See example of generated tests for freetype2 2.4.8.
It's a recipe for disaster in the first place. If you auto-generate unit tests, you're going to get a bunch of tests that don't mean a lot. If you have a library that is not covered in automated tests then, by definition, that library is legacy code. Consider following the conventional wisdom for legacy code...
For each change:
Pin behavior with tests
Refactor to the open-closed principle (harder to do with C but not impossible)
Drive changes for new code with tests
Also consider picking up a copy of Working Effectively with Legacy Code.
EDIT:
As a result of our discussion, it has become clear that you only want to enforce some basic standards, such as how null-pointer values are handled, with your generated tests. I would argue that you don't need generated tests. Instead, you need a tool that inspects a library and exercises its functions dynamically, ensuring that they meet the coding standards you have defined. I'd recommend writing this tool yourself, so that it can take advantage of your knowledge of the rules you want enforced and of the libraries being tested.
