I'm new to unit testing and have mostly programmed using IDEs, therefore I haven't created and/or modified makefiles before.
Now that I'm exploring unit testing and TDD in general, I'm not sure how to set up the development environment so that my unit tests automatically run on every build.
Please help. A general procedure to achieve this would do wonders.
I have not tried anything yet, as I'm not very familiar with writing or modifying makefiles for C.
Here is a simple Makefile I use for small TDD projects using criterion:
CC=gcc
RELEASE_CFLAGS=-ansi -pedantic -Wall -Werror -Wextra
TESTS_CFLAGS=-pedantic -Wall -Werror -Wextra
TESTS_LDFLAGS=-lcriterion
RELEASE_SRC=$(shell find src/ -type f -name '*.c')
RELEASE_OBJ=$(subst src/,obj/,$(RELEASE_SRC:.c=.o))
TESTS_SRC=$(shell find tests/src/ -type f -name '*.c')
TESTS_OBJ=$(subst src/,obj/,$(TESTS_SRC:.c=.o))
TESTS_BIN=$(subst src/,bin/,$(TESTS_SRC:.c=))
default: run-tests
obj/%.o: src/%.c
	$(CC) $(RELEASE_CFLAGS) -c $^ -o $@
tests/obj/%.o: tests/src/%.c
	$(CC) $(TESTS_CFLAGS) -c $^ -o $@
tests/bin/%: tests/obj/%.o $(RELEASE_OBJ)
	$(CC) $(TESTS_LDFLAGS) $^ -o $@
# prevent deleting object in rules chain
$(TESTS_BIN): $(RELEASE_OBJ) $(TESTS_OBJ)
run-tests: $(TESTS_BIN)
	for t in $^; do ./$$t || true; done
clean:
	rm -f $(RELEASE_OBJ) $(TESTS_OBJ)
clean-all: clean
	rm -f $(TESTS_BIN)
It compiles production code as C89 (-ansi) and test code with the compiler's default standard.
Objects for files in src/ are placed in obj/, and likewise for tests/src/ and tests/obj/.
The test binaries (AKA test suites) depend on every production object file, and all of those objects are linked into each test binary. That makes the binaries bigger, but it's not a problem for small projects. If binary size is an issue, you'll have to specify which objects to link into each binary.
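For instance, a hedged sketch of such a per-binary rule (the "fibo" file names are only an assumption for illustration); an explicit rule like this takes precedence over the generic tests/bin/% pattern for that one target:
# list only the objects this suite actually needs instead of $(RELEASE_OBJ)
tests/bin/fibo_test: tests/obj/fibo_test.o obj/fibo.o
	$(CC) $(TESTS_LDFLAGS) $^ -o $@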
Directory structure is made with this command:
mkdir -p src obj tests/{src,obj,bin}
An example test file:
#include <criterion/criterion.h>
#include "../../src/fibo.h"
Test(fibonacci, first_term_is_0)
{
// given
int term_to_compute = 0;
// when
int result = fibonacci(term_to_compute);
// then
cr_assert_eq(result, 0);
}
Test(fibonacci, second_term_is_1)
{
// given
int term_to_compute = 1;
// when
int result = fibonacci(term_to_compute);
// then
cr_assert_eq(result, 1);
}
And the associated production code:
#include "fibo.h"
unsigned long fibonacci(unsigned int term_to_compute)
{
return term_to_compute;
}
As you can see, the production code is quite dumb, and it needs more tests, because it only meets the requirements specified so far by the unit tests.
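The associated header is not shown in the post; a minimal sketch of what src/fibo.h could look like, with the prototype assumed from the code above:
#ifndef FIBO_H
#define FIBO_H

/* prototype matching the definition in fibo.c and the calls in the tests */
unsigned long fibonacci(unsigned int term_to_compute);

#endif /* FIBO_H */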
EDIT: Check the Make documentation to learn more about syntax, builtin functions, etc.
If you want to learn more about TDD, YouTube has a lot to offer (live coding sessions, explanations, TDD katas): check Robert C. Martin (Uncle Bob), the Continuous Delivery channel, etc.
PS: returning a long is not the best option here; you might want fixed-size integers to get the same results on different platforms, but the question was about how to do TDD. If you're new to TDD, writing the given/when/then comments might help. Write tests first, and think about edge cases (for example, should overflow be specified?).
I use a similar setup when doing TDD in NASM and testing with C.
I'm taking a class where we are learning C; our professor told us to install Bash in WSL and to use makefiles to run our code.
I often have small mistakes in my code the first time I run, so it is frustrating having to type:
$ make filename
$ ./filename
Especially because I'm dyslexic and often misspell my filename. I'm therefore looking for a faster way to execute my code using a makefile - something like the Code Runner extension, which I used before taking the class, where all I had to do was hit ctrl + alt + n.
You can have a target that runs your code. Imagine this simple Makefile:
my_prog_objs = a.o b.o c.o
my_prog_args = a b c
my_prog_out = my_prog.out
.PHONY: all clean run
all: my_prog
clean:
	rm -f my_prog $(my_prog_objs)
# THIS IS THE IMPORTANT TARGET, first builds the
# program, then runs it.
run: my_prog
	./my_prog $(my_prog_args) >$(my_prog_out)
my_prog: $(my_prog_objs)
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $($@_objs)
By running make you get everything built; by running make run you build everything and then run my_prog with the arguments shown in the Makefile (a b c), with the output redirected to my_prog.out. This will save a lot of typing when you need to test it.
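With that Makefile, testing a change becomes a single command; the program's output ends up in my_prog.out because of the redirection:
$ make run
$ cat my_prog.out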
Until now, I was using the following makefile, which I generated somehow for my school projects:
my makefile
But now I have a different situation: I am supposed to compile 4 programs for one project, while part of the code is supposed to be compiled as a .so for use by the 4 programs, as described here:
1 - all the parts that are supposed to be compiled together into one .so file, using for example:
gcc -shared -fPIC src/file1.c src/file2.c src/file3.c -o libutils.so
3,4,5 should be compiled and linked together with this .so file, using for example:
gcc src/file4.c -L`pwd` -lutils -o file4.out
the same way for all the 3 projects, and one more simple compilation of project 2.
I wandered across the net, Google, your site, etc., and tried to find a solution for this situation, without any luck.
I have already seen solutions like this one:
solution example
where you supply makefile with the details of the entire project structure.
I thought about dividing all the files into 4 folders below the main folder, and creating a loop inside the makefile that would compile each program in each cycle, with "if" statements to make a different compilation according to the index. But I had no luck; it seems very complicated (maybe someone can show me a solution like that one...).
I am wondering if there is a way of making this whole compilation process as generic and automatic as the current file (maybe a little less); if there is, I would like to study and learn it.
thank you in advance!!!
Arie
Since you have a nicely drawn tree of dependencies, you "just" need to translate this into a Makefile.
You might like to start with this:
.PHONY: all
all: reloader.exe block_finder.exe formatter.exe printdb.exe
MODULES = reloader block_finder formatter printdb linked_list bitcoin file_handler
SRCS = $(MODULES:%=%.c)
reloader.exe block_finder.exe formatter.exe printdb.exe: libbitcoin_manager.so
reloader.exe: reloader.o
block_finder.exe: block_finder.o
formatter.exe: formatter.o
printdb.exe: printdb.o
libbitcoin_manager.so: linked_list.o bitcoin.o file_handler.o
	gcc -shared -fPIC $^ -o $@
%.exe: %.o
	gcc $< -L. -lbitcoin_manager -o $@
%.o: %.c
	gcc -c $< -o $@
%.d: %.c
	gcc -MM -MT $@ -MT $*.o -MF $@ $<
include $(SRCS:%.c=%.d)
Because you don't have a loop in the diagram, you don't need a loop in the Makefile. Instead you put all dependent files on the left of a colon and the file they depend on on the right.
You might like to collect more "objects" in variables, for example the programs to build, the modules in the library, and so on.
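For example, a short sketch of that idea (program names taken from the Makefile above; the enhanced Makefile in the EDIT below takes it further):
PROGS = reloader.exe block_finder.exe formatter.exe printdb.exe

all: $(PROGS)
$(PROGS): libbitcoin_manager.so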
I have also used a common pattern to generate the dependencies from the header files. The way shown is just one way to do it. It uses files with a ".d" extension, for "dependency". GCC has options to build these files: it scans the source and collects all included headers, even "stacked" ones.
For example, "bitcoin.d" looks like this:
bitcoin.d bitcoin.o: bitcoin.c bitcoin.h linked_list.h definitions.h \
file_handler.h
To re-generate the dependency file when the sources change, it is itself a target, not only the object file.
EDIT:
First, using directories makes Makefiles more difficult. I don't like such structures, not only for that reason, but also because they separate header files and implementation files that clearly belong together.
Anyway, here is an enhanced Makefile:
.PHONY: all
SRCDIR = src
INCDIR = include
BLDDIR = build
APPS = reloader block_finder formatter printdb
MODULES = reloader block_finder formatter printdb linked_list bitcoin file_handler
LIBNAME = bitcoin_manager
LIBMODULES = linked_list bitcoin file_handler
VPATH = $(SRCDIR)
SRCS = $(MODULES:%=%.c)
LIB = $(LIBNAME:%=lib%.so)
#win LIB = $(LIBNAME:%=%.lib)
EXES = $(APPS:%=%.exe)
all: $(BLDDIR) $(EXES)
$(BLDDIR):
	mkdir $@
$(LIB): $(LIBMODULES:%=$(BLDDIR)/%.o)
	gcc -shared -fPIC $^ -o $@
$(EXES): $(LIB)
$(EXES): %.exe: $(BLDDIR)/%.o
	gcc $< -L. -l$(LIBNAME) -o $@
$(BLDDIR)/%.o: %.c
	gcc -I$(INCDIR) -c $< -o $@
$(SRCDIR)/%.d: %.c
	gcc -I$(INCDIR) -MM -MT $@ -MT $(BLDDIR)/$*.o -MF $@ $<
include $(SRCS:%.c=$(SRCDIR)/%.d)
It uses a lot more variables to simplify renaming and managing a growing library and application.
One important issue is the use of VPATH. This makes make search for sources in the list of paths assigned to it. Make sure you understand it thoroughly, search for articles and documentation. It is easy to use it wrong.
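A minimal, hypothetical illustration of the effect (directory name assumed): with VPATH set, make finds the %.c prerequisite in src/ even though the rule never mentions the directory, and $< expands to the path it found.
VPATH = src

%.o: %.c
	gcc -c $< -o $@

# "make foo.o" now compiles src/foo.c when there is no ./foo.c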
The pattern $(EXES): %.exe: $(BLDDIR)/%.o is a nice one. It consists of three parts: first a list of targets, then a pattern naming a single target, and finally the pattern for its prerequisite. Here it means that each executable is built from its own object file.
Now to your questions:
Is answered by the new proposal. I didn't add the directory but use VPATH.
Make stopped not because the exe-from-o pattern was wrong, but because it didn't find a way to build the object file needed. This is solved by the new proposal, too. To find out what happens if you delete these 4 recipes in the old proposal: you can experiment, so do it!
The dot is, like user3629249 tried to say, the present working directory. You had it in your Makefile with `pwd` and I replaced it. This is not special to make; it is common in all major operating systems, including Windows. You might know .., which designates the parent directory.
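In other words, these two link commands point the linker at the same place; the first form is what the answer's Makefile uses (the file4 name is taken from the question for illustration):
gcc file4.o -L. -lbitcoin_manager -o file4.exe
gcc file4.o -L`pwd` -lbitcoin_manager -o file4.exe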
When make starts, it reads the Makefile or any given file. If this file contains include directives, the listed files are checked to see whether they need to be rebuilt. make does this even if you call it with -n! After (re-)building all files to be included, they are finally included. Now make has all recipes and continues with its "normal" work.
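A small sketch of that mechanism with a single hypothetical module "foo": make builds or refreshes foo.d first, includes it, and only then goes on to the requested targets.
foo.d: foo.c
	gcc -MM -MT foo.o -MT $@ -MF $@ $<

include foo.d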
I have a Python file that I want to run on the board, hence I want to embed the Python interpreter (written in C) in the board. I managed to write a separate C project that runs the Python file. It compiles and runs as I want it to. Here's the makefile for it:
CC=gcc
CFLAGS=-I python3.5 -I config -I . -c -w
LDFLAGS= -lpython3.5m -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions
all: classifier trainer test link
test:
	$(CC) $(CFLAGS) test.c
trainer: Trainer.c
	$(CC) $(CFLAGS) Trainer.c
	$(CC) Trainer.o $(LDFLAGS) -o Trainer
.ONESHELL:
classifier: Classifier.c
	$(CC) $(CFLAGS) Classifier.c
	# $(CC) Classifier.o $(LLFLAGS) -o Classifier
link:
	$(CC) test.o Classifier.o $(LDFLAGS) -o test
clean:
	rm -f Trainer.o Trainer Classifier.o Classifier
http://dpaste.com/3BCY2RE is my entire directory of project "hello" (It is not the one from the examples).
I included "Classifier.h" in my "hello.c" and I am getting the following errors: http://dpaste.com/3KKCF84
Compiler include options (No preincludes):
"${CG_TOOL_ROOT}/include"
"${workspace_loc:/${ProjName}/TerrainPredict}"
"${workspace_loc:/${ProjName}/TerrainPredict/config}"
"${workspace_loc:/${ProjName}/TerrainPredict/python3.5}"
"${SW_ROOT}/examples/boards/ek-tm4c1294xl"
"${SW_ROOT}"
Linker file search paths:
"libc.a"
"${workspace_loc:/${ProjName}/TerrainPredict/libterrainclf.a}"
"${SW_ROOT}/driverlib/ccs/Debug/driverlib.lib"
and:
"${CG_TOOL_ROOT}/lib"
"${workspace_loc:/hello/TerrainPredict/libterrainclf.a}"
"${CG_TOOL_ROOT}/include"
Am I wrong with some of my configurations? Or is this some problem with the Python interpreter? Any help is greatly appreciated.
EDIT:-
As @KevinDTimm suggested, the problem is that there is no pyconfig.h for my environment. This file is required by Python to define important variables like the source of the system clock. I tried removing the safety checks in the existing pyconfig.h. The first error I am getting is in pytime.h:
"_PyTime_t need signed 64-bit integer type"
Which was further because of the following code block:
#ifdef PY_INT64_T
/* _PyTime_t: Python timestamp with subsecond precision. It can be used to
store a duration, and so indirectly a date (related to another date, like
UNIX epoch). */
typedef PY_INT64_T _PyTime_t;
#define _PyTime_MIN PY_LLONG_MIN
#define _PyTime_MAX PY_LLONG_MAX
#else
# error "_PyTime_t need signed 64-bit integer type"
#endif
It appears to me that it needs a variable that stores time. I need help in assigning that variable.
From the linked problem
The multiarch error message is a bit misleading. It's not failing because there's a multiarch problem, it's failing because there's a multi-OS problem. /usr/include/python*/pyconfig.h is trying to figure out where to find the real pyconfig.h from, and since it doesn't know, it's bailing out.
You essentially need a pyconfig.h generated for the target environment. I don't know what produced your pyconfig.h, perhaps building CPython from source? pyconfig.h looks like something generated by GNU autoconf, so there should not be any big problems in generating it.
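If it helps, a rough illustration (assumptions throughout, not a verified procedure for this board): in a CPython source tree, pyconfig.h is produced by the autoconf-generated configure script, so a configure run matching the target toolchain is the usual way to obtain one.
# in the CPython 3.5 source directory (host build shown; a board would need a
# configure run, or a hand-edited pyconfig.h, matching the cross toolchain)
./configure
ls pyconfig.h        # generated by configure in the build directory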
I encountered this problem while installing some Python modules which had dependencies on their own C libraries. The problem is, cc is not looking into /usr/local/include at all for header files. I made it work for one of those (thinking it was a problem with the modules) by adding /usr/local/include as one of the external include directories.
Then, to test, I wrote a simple hello.c file and added #include "fftw3.h" / #include <fftw3.h>, and it failed to compile if I didn't explicitly add -I/usr/local/include.
I added a line in my ~/.bash_profile to export the include directory path to $PATH; that didn't work either.
So, my question is, how do I make cc look for header files in /usr/local/include (or, for that matter, in any custom directory) always without passing -I flag?
FYI: I'm using a MacBook Pro running OS X 10.11.
If you are using GCC then you have three environment variables you can use:
CPATH
C_INCLUDE_PATH
CPLUS_INCLUDE_PATH
Take a look here.
EDIT: since you specified you are working on OS X (hence Clang), they should be supported too; take a look at the end here. It's not uncommon to have Clang mimic GCC specs just to help with compatibility.
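For example (the path is the one from your question), adding this line to ~/.bash_profile makes the compiler search /usr/local/include without any -I flag; these variables are documented for GCC and are also honoured by Clang:
# CPATH applies to both C and C++; C_INCLUDE_PATH and CPLUS_INCLUDE_PATH are the language-specific variants
export CPATH=/usr/local/include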
I think you should invest some time in understanding build systems. For example gnu make. Here, look at this:
CC = gcc
CFLAGS = -Wall
DEPS = primes.h
OBJ = go.o primes.o
%.o: %.c $(DEPS)
	$(CC) $(CFLAGS) -c -o $@ $<
go: $(OBJ)
	gcc $(CFLAGS) -o $@ $^
This gives you:
The freedom to use any compiler you want. In your case that would be cc; in this example it is gcc.
CFLAGS to adjust the compiler - in the example, -Wall turns on the warnings (see the sketch after this list).
A reproducible build.
Recipes with complex rules for compilation as your application grows.
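Tying this back to your question, a hedged sketch: once CFLAGS carries the include path, every compile picks it up and you never type -I by hand again.
CFLAGS = -Wall -I/usr/local/include

%.o: %.c $(DEPS)
	$(CC) $(CFLAGS) -c -o $@ $<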
More information is available here.
I have downloaded the cmocka example files and followed all the instructions. All test files were successfully generated and I can run them, but no output appears in the console. I have tried to alter the CMOCKA_MESSAGE_OUTPUT environment variable, tried to write my own tests and compile them, and tried to recompile and reinstall cmocka several times - nothing made the tests output anything. I work on Windows 7 32-bit, so I figured I would also try Cygwin, but Cygwin just complains that it cannot find public libraries, so I abandoned that branch of my research - after all, cmocka should also work normally in the Windows cmd. Does anyone know how to make the tests output anything to the console?
EDIT
I'm adding my make info in case there was some problem with compilation/linking, although I don't see any (it doesn't produce any errors and correctly outputs the tests.exe file):
makefile
OBJ_DIR = obj
HDR = $(wildcard *.h)
SRC = $(HDR:.h=.c)
OBJ = $(HDR:%.h=$(OBJ_DIR)\\%.o)
CC = gcc
CFLAGS = -I"C:\Program Files\cmocka\include" -I"C:\Program Files\cmocka\lib" -I"C:\Program Files\cmocka\bin" -llibcmocka -lcmocka
.PHONY: all clean
all: tests.exe
$(OBJ_DIR)\\%.o: %.c %.h
	$(CC) $< -c -o $@ $(CFLAGS)
$(OBJ_DIR)\tests.o: tests.c
	$(CC) $< -c -o $@ $(CFLAGS)
tests.exe: $(OBJ) $(OBJ_DIR)\tests.o
	$(CC) $^ -o tests.exe $(CFLAGS)
clean:
	del $(OBJ) $(OBJ_DIR)\tests.o tests.exe
note1: the numerous paths in CFLAGS are there out of desperation - at first I had been using only the first one.
note2: when I try to run this script in NetBeans or Cygwin, I change del to rm -f and switch the slashes. The output is as described above: make finishes without any errors and outputs tests.exe, but once it is executed, it throws an error about not being able to find public libraries.
The symbol is not exported, see https://git.cryptomilk.org/projects/cmocka.git/commit/?id=7364469189558a8720b60880940a41e1a0d20452
Sorry for digging out this old thread, but I recently stumbled over exactly the same problem. I compiled everything myself with meson/ninja and got no output, neither from the test itself nor from a simple printf.
I solved the problem by using the precompiled library from here.
Just install/start MSYS2 and use
for 64-bit MINGW:
pacman -S mingw-w64-x86_64-cmocka
for 32-bit MINGW:
pacman -S mingw-w64-i686-cmocka
Then I recompiled my hello world test, and output worked as intended.
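For reference, a minimal sketch of how a test could then be built from the MSYS2 MinGW 64-bit shell (file names assumed); the packaged headers and library are found without extra -I or -L flags:
gcc tests.c -o tests.exe -lcmocka
./tests.exe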
I had the same problem, and for me it was that I had not properly passed the state argument to the tests. My tests had this signature:
void test_something() { /* ...snip... */ }
but it should have been
void test_something(void **state) {
(void) state; /* unused */
/* ...snip... */
}
After fixing this, the output properly appeared.
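For completeness, a minimal sketch of the cmocka boilerplate such a test needs in order to run at all (the test body and assertion here are placeholders; the pre-includes are the ones cmocka's header expects):
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

static void test_something(void **state)
{
    (void) state; /* unused */
    assert_int_equal(1 + 1, 2);
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_something),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}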
Your problem is in the tests.c that has the unit tests, not your setup. Show us the tests.c file where you wrote your unit tests.
I have had the same problem.
In particular, I also used gcov to see the coverage, and it claimed that nothing ever gets executed.
My solution was that I had just forgotten to add cmocka to my environment path. After adding "cmocka.dll" to the path, everything finally works.