How to set up multiple C files to coexist

How would I set up a project such that I have functions in
helper.c
main.c
main.h
... and how can I include helper.c in my main.c and use the functions defined in helper.c?
My makefile currently looks like this:
all:
    gcc -o main main.c
    gcc -o helper helper.c
clean:
    rm -f main
    rm -f helper
I understand I need a helper.h, but how do I properly set that up? Say I'd like my helper file to look like this:
#include <stdio.h>   /* for printf */
#include <stdlib.h>  /* for malloc */

struct Node{
    struct Node* nxt;
    int x;
};

int isThere(struct Node *head, int value){
    if(head==NULL){
        return 0;
    }
    struct Node *tmp=head;
    while(tmp!=NULL){
        if(tmp->x==value){
            return 1;
        }
        tmp=tmp->nxt;
    }
    return 0;
}

struct Node *nodeInsert(struct Node *head, int value){
    if(head==NULL){
        head=malloc(sizeof(struct Node));
        head->x=value;
        head->nxt=NULL;
        printf("inserted\n");
        return head;
    } else if(head!=NULL && isThere(head,value)==1){
        printf("duplicate\n");
        return head;
    } else{
        struct Node *new;
        struct Node *tmp=head;
        while(tmp->nxt!=NULL){
            tmp=tmp->nxt;
        }
        new=malloc(sizeof(struct Node));
        new->x=value;
        tmp->nxt=new;
        new->nxt=NULL;
        printf("inserted\n");
        return head;
    }
}

I think the problem is that you're misunderstanding compiling and linking in C.
There are plenty of resources explaining this; here is a good one: http://courses.cms.caltech.edu/cs11/material/c/mike/misc/compiling_c.html
What you should do is compile each file to an object file, then link the object files together.
You can do this in a single command:
gcc -o executable main.c helper.c
or compile each one first and then link them together:
gcc -c main.c
gcc -c helper.c
gcc -o executable main.o helper.o
Make sure you write prototypes for all functions of helper.c in helper.h,
and include helper.h at the beginning of main.c.
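If you want the makefile itself to follow that compile-then-link approach, a minimal sketch could look like the following (file names taken from the question; listing helper.h as a prerequisite is an extra assumption so that header edits trigger rebuilds, and recipe lines must be indented with a tab):
all: main

main: main.o helper.o
    gcc -o main main.o helper.o

main.o: main.c helper.h
    gcc -c main.c

helper.o: helper.c helper.h
    gcc -c helper.c

clean:
    rm -f main main.o helper.o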

gcc -o helper helper.c would attempt both compilation and linking, but since helper.c
doesn't define a main(), it won't link.
What you want to do is simply compile main.c and helper.c separately into object files:
gcc -c main.c #-o main.o (the -o main.o part is implied if missing)
gcc -c helper.c #-o helper.o
and then link the resulting object files into the final executable.
gcc -o main main.o helper.o
As for the header: helper.c defines struct Node and the functions nodeInsert and isThere. In order to use these, main.c needs their declarations, so the standard way to provide them is to define a helper.h header:
#ifndef HELPER_H
#define HELPER_H /* header guard to protect against double inclusion */

struct Node{
    struct Node* nxt;
    int x;
};

int isThere(struct Node *head, int value);
struct Node *nodeInsert(struct Node *head, int value);

#endif
and include it at the top of main.c:
#include "helper.h"
//...
(You can also include it in helper.c itself. That allows the compiler to help you catch inconsistencies between the declarations and the definitions.)
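For completeness, a main.c that uses the header might look something like this (just a sketch to exercise the two functions declared in helper.h; it is not part of the original question or answer):
#include <stdio.h>
#include "helper.h"

int main(void)
{
    struct Node *head = NULL;

    head = nodeInsert(head, 3);          /* prints "inserted"  */
    head = nodeInsert(head, 3);          /* prints "duplicate" */

    printf("%d\n", isThere(head, 3));    /* prints 1 */
    printf("%d\n", isThere(head, 7));    /* prints 0 */

    return 0;
}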

Change your makefile so that all the .c files that are supposed to go into the binary are referenced:
all:
    gcc -o main main.c helper.c
Also, the code in main.c needs to see the declarations for the code in helper.c, which is why the struct declaration and the function declarations for helper.c should go in main.h (or in helper.h, which is then included from main.h).
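A minimal sketch of that second option (assuming main.h is the header that main.c already includes):
/* main.h */
#ifndef MAIN_H
#define MAIN_H

#include "helper.h"  /* brings in struct Node, isThere and nodeInsert */

/* declarations belonging to main.c itself would go here */

#endif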

I would like to add to John Weldon's answer that you can include helper.c in main.c directly and declare the functions in it as static, as in this example:
// main.c
#include "helper.c"

int main(void)
{
    HelloWorld();
    return 0;
}

// helper.c
#include <stdio.h>

static void HelloWorld(void)
{
    puts("Hello World!!!");
}
And in your Makefile you compile helper.c and main.c together like:
gcc -o main main.c helper.c
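Note that once helper.c is #included into main.c, compiling main.c alone (gcc -o main main.c) already yields a complete program; passing helper.c to the compiler as well just builds a second, independent copy of that translation unit whose static function goes unused, which still links fine because static gives HelloWorld internal linkage.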

Related

Why do we specify header files for targets in Makefile?

I'm trying to understand Makefiles. I understand the whole concept of the dependency tree, but why do we list .h files as prerequisites of the .o targets whose source files include them? Here is an example:
// main.c
#include <stdio.h>
#include "foo.h"

int main() {
    printf("%d\n", magic_number);
    printf("%d\n", square(5));
}

// foo.h
const int magic_number = 10;
int square(int);

// foo.c
int square(int value) {
    return value*value;
}
# -*- Makefile -*-
all: main

main: main.o foo.o
    gcc main.o foo.o -o main

main.o: main.c #foo.h <-----------
    gcc -c main.c

foo.o: foo.c
    gcc -c foo.c
I commented out the part where I would add the header file, because that's my question: why does it have to be there? I did some testing and added some more const int variables to the header. Judging only by the Makefile (with foo.h commented out), it would only recompile main.c, and there it would then access a variable from the header. So why does it look into the header file even though it's not in the Makefile?
Makefiles track dependencies - in this case it is an explicit dependency.
Specifying that main.o depends on both main.c and foo.h means that any time either file changes, main.o will be rebuilt.
If you didn't have that explicit dependency chain, then you could change foo.h in a way which renders main.c uncompilable, but Make wouldn't know about it so would not rebuild it when it should.
There are tons of ways a header file can change such that a source file doesn't need to change as well, and yet the program is malformed if that source file is not rebuilt. Here's one simple example: say we have this source:
// foo.h
struct foo {
    int f;
};
int getfoo(const struct foo* f);

// foo.c
#include "foo.h"

int getfoo(const struct foo* f)
{
    return f->f;
}

// main.c
#include "foo.h"

int main()
{
    struct foo f = {1};
    return getfoo(&f);
}
You compile everything and all is well. Now suppose you modify foo.h like this:
// foo.h
struct foo {
    const char* val; // added value
    int f;
};
int getfoo(const struct foo* f);
And you modify main.c like this:
// main.c
#include "foo.h"

int main()
{
    struct foo f = {"hello", 1};
    return getfoo(&f);
}
Now you run make and since you've modified main.c it is recompiled, but since you haven't modified foo.c it is not recompiled.
Now your program will certainly return a bogus value since foo.o thinks that the structure it was passed contains just a single integer, but the structure it was really passed from main() actually has a pointer plus an integer.
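So the fix for the Makefile in the question is simply to list the header as a prerequisite of every object whose source includes it, roughly like this (a sketch; it assumes foo.c also includes foo.h, as in the example above, and recipe lines must be tab-indented):
all: main

main: main.o foo.o
    gcc main.o foo.o -o main

main.o: main.c foo.h
    gcc -c main.c

foo.o: foo.c foo.h
    gcc -c foo.c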

Can linking with the --wrap directive be used to redefine a subroutine for unit testing?

I'm using Cmocka to write a unit testing suite for a shared object written in C, but I'm having an issue. Since I cannot share the source code, I have written a minimal "not-working" example to show what the issue is:
my program is composed of 5 files: foo.c, foo.h, bar.c, bar.h, main.c.
the bar.* files define a bar() function, which simply returns its argument multiplied by 2
the foo.* files declare a foo() function that uses the bar() function declared in bar.h
main.c contains a simple cmocka test and a __wrap_bar()
function, returning its argument multiplied by 3.
I compile the program by producing a libfootest.so object (foo+bar) and then I link this object with main.o, passing the -Wl,--wrap=bar flag to the compiler. In this configuration libfootest is the module under test and main is the tester program. I expect __wrap_bar() to be called (failing the test), but the standard bar() is called instead (the test passes). How can I solve this problem? Below you find all the code I'm using.
bar.c:
#include "bar.h"

int bar(int val) {
    return val*2;
}

bar.h:
int bar(int val);

foo.h:
#include <stdio.h>
int foo(int val);

foo.c:
#include "foo.h"
#include "bar.h"

int foo(int val) {
    int ret;
    ret = bar(val);
    printf("RET: %d", ret);
    return ret;
}

main.c:
#include <stdio.h>
//required include for CMOCKA
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>
//library under test
#include "foo.h"

int __wrap_bar(int val) {
    return 3*val;
}

static void test_foo(void **state) {
    int ret = foo(5);
    assert_int_equal(ret, 10);
}

int main (int argc, char** argv) {
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_foo),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}
Makefile:
CMOCKA_LIB_DIR=../../cmocka-1.1.5/build/src
CXXFLAGS+=-g -Og -fPIC
CFLAGS+=-g -Og -std=c99 -fPIC
CC=gcc
CXX=g++
all: main.o ./libfootest.so
    gcc -o linux-test -g -L. -L$(CMOCKA_LIB_DIR) $(filter %.o, $^) -lcmocka -lfootest -Wl,-rpath=. -Wall -Wl,--wrap=bar -Wl,-rpath=$(CMOCKA_LIB_DIR)

./libfootest.so: foo.o bar.o
    $(CC) -shared -o $@ -g $^ -pedantic -Wall

clean:
    rm -f *.o
    rm -f *.so
The problem is how you build the library. You don't create a library of separate modules, as is commonly done; instead you link all the given modules together and place the resulting single module in the target library.
That's why the linker has already resolved the call to bar(), and it is no longer an unresolved reference when the test program is linked.
The --wrap option works only for unresolved references between modules.
The solution is to build the library from separate modules. Use the tool ar for this:
ar r libfootest.a foo.o bar.o
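For reference, the relevant Makefile rules might then look roughly like this (a sketch based on the Makefile above, not taken from the original answer; recipe lines must be tab-indented). Since the archive keeps foo.o and bar.o as separate members, the call from foo.o to bar() is still unresolved at the final link, so -Wl,--wrap=bar can redirect it to __wrap_bar():
linux-test: main.o libfootest.a
    $(CC) -o $@ main.o -L. -L$(CMOCKA_LIB_DIR) -lfootest -lcmocka -Wl,--wrap=bar -Wl,-rpath=$(CMOCKA_LIB_DIR)

libfootest.a: foo.o bar.o
    ar r $@ $^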

C Function Multiple Definition Error but declared once?

I've been starting a small project in C with the purpose of creating a library of my own for using singly linked lists without hassle.
I use three files for the project: the main file (main.c), a header (with a header guard, sll.h), and its declaration + definition file, sll.c.
However, I ran into the following errors when trying to compile the C program in this state:
FILE 1 main.c
#include <stdio.h>
#include <stdlib.h>
#include "sll.h"

int main(){
    return 0;
}

FILE 2 sll.h
#ifndef SLL_H_11_14_2019
#define SLL_H_11_14_2019

typedef struct Node {
    int data;
    struct Node * next;
} Node;

typedef struct List {
    int size;
    Node * head;
} List;

List createSLList();
void appendSLList(List list);
int searchSLList(List list, int key);
void popSLList(List list);
void removekeySLList(List list, int key);
void removeSLList(List list, int index);
void deleteList(List list);

#endif

FILE 3 sll.c
#include <stdio.h>
#include <stdlib.h>
#include "sll.h"

/* Contains source code for sll.h. Should not be used standalone without sll.h to avoid duplication issues. */
/* This library provides definition for functions to create and manipulate a single linked list. */

List createSLList(){
    List list;
    list.size = 0;
    list.head = NULL;
    return list;
}

void appendSLList(List list);
int searchSLList(List list, int key);
void popSLList(List list);
void removekeySLList(List list, int key);
void removeSLList(List list, int index);
void deleteList(List list);
The errors:
sll.o: In function `createSLList':
sll.c:7: multiple definition of `createSLList'
main.o: sll.c:17: first defined here
collect2.exe: [Error] ld returned 1 exit status
Makefile.win:25: recipe for target 'sll.exe' failed
I'm honestly baffled; createSLList is only defined once. I looked and (obviously) stdio.h and stdlib.h should not be throwing any errors, considering they've got header guards of their own, so what's the catch?
Edit: Compiled with GCC 4.9.2 64-bit Debug
with Language Standard C99 & all warnings enabled + making them fatal errors.
Edit2: In hindsight, I did #include "sll.h". Edited and posted the new code, yet the error remains. Posted makefile.win too.
//makefile.win
# Project: single linked list
# Makefile created by Dev-C++ 5.11
CPP = g++.exe -D__DEBUG__
CC = gcc.exe -D__DEBUG__
WINDRES = windres.exe
OBJ = main.o sll.o
LINKOBJ = main.o sll.o
LIBS = -L"C:/Program Files (x86)/Dev-Cpp/MinGW64/lib" -L"C:/Program Files (x86)/Dev-Cpp/MinGW64/x86_64-w64-mingw32/lib" -static-libgcc -g3
INCS = -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/include" -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/x86_64-w64-mingw32/include" -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/lib/gcc/x86_64-w64-mingw32/4.9.2/include"
CXXINCS = -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/include" -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/x86_64-w64-mingw32/include" -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/lib/gcc/x86_64-w64-mingw32/4.9.2/include" -I"C:/Program Files (x86)/Dev-Cpp/MinGW64/lib/gcc/x86_64-w64-mingw32/4.9.2/include/c++"
BIN = sll.exe
CXXFLAGS = $(CXXINCS) -std=c99 -Wall -Werror -g3
CFLAGS = $(INCS) -std=c99 -Wall -Werror -g3
RM = rm.exe -f
.PHONY: all all-before all-after clean clean-custom
all: all-before $(BIN) all-after

clean: clean-custom
    ${RM} $(OBJ) $(BIN)

$(BIN): $(OBJ)
    $(CC) $(LINKOBJ) -o $(BIN) $(LIBS)   # <-- ERROR IS HERE

main.o: main.c
    $(CC) -c main.c -o main.o $(CFLAGS)

sll.o: sll.c
    $(CC) -c sll.c -o sll.o $(CFLAGS)
The problem was Dev-C++ initially compiled my program as a C++ program, even though I had "picked" C as the default language. As a result, this had destructive consequences.
For anyone facing similar problems, what worked for me was deleting all the files created by the compiler and restarting the IDE. Please note that you should manually set what compiler to use under "project settings".

Linking to an executable on OSX

On Windows it is possible to dynamically link to an executable with exported symbols. For example, the following code:
// main.c
void __declspec(dllexport) interface_function() {}

int main() {}

// ext.c
void interface_function();

void extension_function() {
    interface_function();
}
With
cl.exe main.c
cl.exe ext.c /LD /link main.lib
would produce an executable main.exe, an import library main.lib for implicit linking, and a dynamic library ext.dll.
Similar behavior can be achieved on OSX with a loadable bundle:
// main.c
void interface_function() {}

int main() {}

// ext.c
void interface_function();

void extension_function() {
    interface_function();
}
With
gcc main.c -o main
gcc ext.c -bundle -bundle_loader main -o ext.bundle
it is virtually equivalent to the Windows setup.
But for dynamiclib:
> gcc ext.c -dynamiclib -o ext.dylib
and shared:
> gcc ext.c -shared -o ext.so
I cannot get them to work: I get undefined symbols on the one hand, and I am unable to load an executable with the -l flag on the other.
I can let them resolve undefined symbols at runtime with -undefined dynamic_lookup, but that is not sustainable, because all the link errors then show up at run time instead.
Is there a way to provide the list of symbols to dynamically load from an executable when linking as -shared and -dynamiclib?
Yes this is possible, but then you'll want to create a bundle rather than a shared library (see this answer for more detail).
If you have a main application like so:
#include <stdio.h>
#include <dlfcn.h>

int func(void)
{
    return 42;
}

int main(void)
{
    void *dl = dlopen("plugin.so", RTLD_LOCAL);
    if(!dl) return -1;

    int (*derp)(void) = dlsym(dl, "derp");
    if(!derp) return -1;

    printf("derp(): %i\n", derp());
    return 0;
}
clang -o main main.c -Wall -Wl,-export_dynamic
Then you can compile bundles against it like so:
int func(void);

int derp(void)
{
    return -func();
}
clang -o plugin.so plugin.c -Wall -bundle -bundle_loader ./main
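If the bundle can be found by dlopen() at run time (for example when plugin.so sits in the current working directory), running ./main should print derp(): -42, because the bundle's call to func() is resolved against the symbols the executable exports thanks to -export_dynamic.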

multiple definition in g++?

The code is as follows:
global.h
#ifndef GLOBAL_H
#define GLOBAL_H

#include <stdio.h>

int test;
void test_fun(void);

#endif

global.c
#include "global.h"

void test_fun()
{
    printf("%d\n", test);
}

main.c
#include "global.h"

int main(void)
{
    test_fun();
    test = 1;
    printf("%d\n", test);
}

Makefile using gcc compiler
main: main.o global.o
    gcc -o main main.o global.o

main.o: main.c global.h
    gcc -c main.c

global.o: global.c global.h
    gcc -c global.c

clean:
    rm -f global.o main.o main
This works well.
However, when I change my code to C++, as follows:
global.h
#ifndef GLOBAL_H
#define GLOBAL_H

#include <iostream>

int test;
void test_fun(void);

#endif

global.cpp
#include "global.h"

void test_fun()
{
    std::cout << test << std::endl;
}

main.cpp
#include "global.h"

int main(void)
{
    test_fun();
    test = 1;
    std::cout << test;
}

Makefile using g++ compiler
main: main.o global.o
    g++ -o main main.o global.o

main.o: main.cpp global.h
    g++ -c main.cpp

global.o: global.cpp global.h
    g++ -c global.cpp

clean:
    rm -f global.o main.o main
The code above produces the following error:
global.o:(.bss+0x0): multiple definition of `test'
What makes the difference here?
You've got int test; in a header which is included in 2 TUs, hence the error. Both translation units, main.c (or .cpp, depending on the compiler used) and global.c, include global.h, which leads to two definitions of the same variable in two object files, and thus the linker error.
Pass test as an argument to test_fun, thereby avoiding the use of a global.
If you absolutely have to share the variable between the TUs, then remove int test; from global.h and in main.cpp do
int test;
and in global.cpp do
extern int test;
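Putting that second option together, the three files would look roughly like this (a sketch; only main.cpp holds the definition of test):
// global.h
#ifndef GLOBAL_H
#define GLOBAL_H
#include <iostream>

void test_fun(void);   // int test; removed from the header

#endif

// global.cpp
#include "global.h"

extern int test;       // refers to the definition in main.cpp

void test_fun()
{
    std::cout << test << std::endl;
}

// main.cpp
#include "global.h"

int test;              // the one and only definition

int main(void)
{
    test_fun();
    test = 1;
    std::cout << test << std::endl;
}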
As an aside, since it's a global variable, test will be zero-initialized, so in main, when you call test_fun(), it should print 0, and then after setting it to 1, it'll print 1.
It's illegal in both C and C++ from a language standpoint, but the reason it works with C compilers (like GCC) is that they implement a common extension (multiple tentative definitions are merged into a single "common" symbol by the linker), a piece of legacy cruft.
In other words: you are using a different programming language.
