How to replace the LD variable in a Makefile to link C objects

I'm writing a Makefile for C. I want to be able to specify different programs for compilation and linking via environment variables. However, I want it to work without any additional variables too. I was trying to link with ld; however, by default it doesn't link against the standard C library.
The question:
How do I link a C program with ld or $(LD)?
Is it possible to get the appropriate flags from cc?
I cannot use $(CC) in place of $(LD). LD ?= cc doesn't work either.
I want something like this to be true:
Environment variable CC set to tcc.
Environment variable LD unset.
My Makefile compiles using tcc and links using the system's default linker for C.
Unfortunately, some C compilers are unable to link against some libraries; I have this problem with tcc and glfw.
P.S.
Linux user

The conditional assignment LD ?= cc cannot work, since LD is already predefined by make (its built-in default is ld).
If you want to start make without predefined variables, use the option -R:
> make -p | grep LD
...
LD = ld
...
> make -p -R | grep LD
>
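A minimal sketch of a workaround (assuming GNU Make): test where LD's value came from, and fall back to the C compiler only when LD still holds make's built-in default, so a value set in the environment or on the command line is respected.
# Override LD only when it is still make's built-in default (ld).
ifeq ($(origin LD),default)
LD = $(CC)
endif

main: main.o
	$(LD) -o $@ main.o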

Instead of using ld as the linker, use gcc or g++. They add the appropriate command-line options for pulling in libraries, startup code, etc. That is:
ld -o main main.o
is equivalent to:
gcc -o main main.o
except that gcc adds all the required command-line parameters when it calls ld.
In other words: LD=gcc.
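As for getting the appropriate flags from cc: with gcc or clang you can ask the driver to show the link command it would run, which reveals the startup objects and libraries it adds, for example:
cc -### -o main main.o   # print the commands (including the ld invocation) without running them
cc -v -o main main.o     # actually link, and print the collect2/ld command line that was used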

One of the main features of tcc is that:
tcc is a compiler and a linker [...] it can compile and execute C source directly; no linking or assembly necessary
There's more detail in the tcc documentation; about linking it says:
Dynamic ELF libraries can be output but the C compiler does not generate position independent code (PIC). It means that the dynamic library code generated by TCC cannot be factorized among processes yet.
This means that if you want to compile with tcc, you'll need to link with tcc as well.
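So a tcc-based build keeps both steps on the same tool; a minimal sketch (file names are placeholders, not from the question):
tcc -c main.c -o main.o
tcc main.o -o main -lm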

As far as I can see, you can define target-specific values for variables:
build_with_tcc: CC=tcc
build_with_tcc: compile_tcc
compile_tcc:
	## Commands to do a full build with tcc.
build_with_gcc: CC=gcc
build_with_gcc: LD=g++
build_with_gcc: link_gcc
compile_gcc:
	## Commands to compile with gcc.
link_gcc: compile_gcc
	## Commands to link with g++.
And build it by calling the appropriate rule.
If, on the other hand, you wish to be able to pass in an arbitrary compiler toolchain, you will have to accept some restrictions anyway.
The rule:
build_with_arbitrary: compile_arbitrary link_arbitrary
implies that your build must be done in two steps, and the respective rules (compile_arbitrary and link_arbitrary) must obey the same command line.
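For example, the two steps might be fleshed out like this (a rough sketch; the source and object names are placeholders, not taken from the question):
compile_arbitrary:
	$(CC) $(CFLAGS) -c main.c -o main.o

link_arbitrary:
	$(LD) $(LDFLAGS) -o main main.o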
So you can invoke make with custom CC and LD variables:
CC=any_cc LD=any_ld make build_with_arbitrary
Lastly, you can add a dirty check for LD being empty in the linker step, and only perform the link when it is not:
link_arbitrary:
	[ -z "$(LD)" ] || do_linker_stuff
So you could use build_with_arbitrary even for a compiler that does everything in a single step just by passing:
CC=any_cc LD= make build_with_arbitrary
I hope I have correctly understood your question. Sorry if I misunderstood, and please tell me where I am wrong.

Related

Compiling cmocka on Windows

I'm trying to compile a simple unit test on my Windows machine.
When I try to compile my test, I use the shared library flag:
gcc -c -L./bin/ -lcmocka .\Test.c .\src\some_module.c
gcc .\Test.o .\some_module.o -o main
But the second line throws this error:
undefined reference to `_cmocka_run_group_tests'
However, if I compile directly with the cmocka.c file which I downloaded from their git repository, it works fine:
gcc -c .\lib\cmocka.c .\Test.c .\src\some_module.c
gcc .\Test.o .\some_module.o .\cmocka.o
What am I doing wrong in the first compilation?
In addition, I would be happy to understand the difference between the two compilations. Which one is better practice?
Thank you
In order to compile your code, the compiler does not need to know where to look for the library. It is enough if the compiler "finds" the declarations of the functions, which are usually in the header files provided by the library.
This step is done in the first line of your compilation procedure (maybe you need to specify the folder of the header files by adding -Ipath/to/headers/):
gcc -c .\Test.c .\src\some_module.c
The library itself is "combined" with your code during the linking step, which is done in your second command. Here you need to specify the library (and its path via -Lpath/to/library, if the linker does not find the library on its own):
gcc .\Test.o .\some_module.o -o main -L./bin/ -lcmocka
You should definitely not use your second approach and compile the library by yourself.
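A minimal Makefile doing the same two steps might look like this (a sketch; the header include path is an assumption on my part):
CFLAGS  = -I./include
LDFLAGS = -L./bin
LDLIBS  = -lcmocka

main: Test.o src/some_module.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@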

OCaml: Issues linking C and OCaml

I am able to wrap C code and access it from the OCaml interpreter, but cannot build a binary! I'm 98% sure it is some linking problem, but can't find the tools to explore the linkage.
Getting even to this point was a chore, (endless quantities of Error: The external function is not available messages) so I'll document everything I did.
A 'system' file stuff.c
#include <stdio.h>
int fun(int z) // Emulate a "real" subroutine
{
printf("duuude whoa z=%d\n", z);
return 42;
}
Compile above as
cc -fPIC -c stuff.c
ld -shared -o libstuff.so stuff.o
An OCaml wrapper around above, in ocmstuff.c:
#include <caml/mlvalues.h>
CAMLprim value yofun(value z) {
return Val_long(fun(Long_val(z)));
}
Build above as
cc -fPIC -c ocmstuff.c
ld -shared -o dllostuff.so ocmstuff.o -L . -lstuff -lc -rpath .
Yes, the rpath really is needed, else the next steps suffer. (Edit: If you don't use rpath, you'll need to use LD_LIBRARY_PATH=. instead. For the final 'production' version, you'd change the rpath to the actual library path, do ld.so.conf trickery, install into 'standard' locations, or tell your users about LD_LIBRARY_PATH. This is just like what you'd do for any other system. The rpath solution seems to be the most stable and reliable one.)
Next, a module declaration, stored in fapi.mli
module Fapi : sig
external ofun : int -> int = "yofun" ;;
end
Build above as:
ocamlc -a -o fapi.cma -intf fapi.mli -dllib -lostuff
Does it work? Yes it does:
$ rlwrap ocaml fapi.cma
OCaml version 4.11.1
open Fapi ;;
Fapi.ofun 33 ;;
duuude whoa z=33
- : int = 42
#
So the wrapper works fine. Now let's compile with it. Here's myprog.ml:
open Fapi ;;
Fapi.ofun 33 ;;
Compile it:
ocamlc -c myprog.ml
ocamlc -o myprog myprog.cmo fapi.cma
The very last command spews:
File "_none_", line 1:
Error: Required module `Fapi' is unavailable
I am 98% sure the above error is due to some silly linking error, but I cannot track it down. Why do I think this? Well, here's a related problem that provides a hint.
$ rlwrap ocaml
open Fapi ;;
# Fapi.ofun 33 ;;
Error: The external function `yofun' is not available
#
Well, that's odd. It clearly must have found fapi.cma, because that is the only way it can know about yofun. But somehow it doesn't know it needs to dig into dllostuff.so for that. Or possibly dllostuff.so is failing to correctly link/load libstuff.so? Or maybe libc.so to get printf? I'm pretty sure it's one of these last few, but I just can't get it to work, and I don't have the tools to debug it. (nm and ldd -r look healthy. Are there similar tools for the assorted cma, cmo, cmi, and cmx files?)
Interfacing with C is much easier if you use dune. You don't need to know the low-level details; it is all handled for you.
Now, back to your example. This is definitely not how OCaml users are interfacing with C, but if you really want to learn about it here are a few notes.
The reason why you have the error is that:
you specified the modules in an incorrect order; it should be topological, not reverse topological order, i.e., the dependency comes before the dependent
you do not have the .ml file (the -intf option means something very different)
The reason why the last snippet doesn't work is that you're not loading the library. The ocaml binary obviously doesn't have any fapi units linked into it, so you have to load it explicitly, either with the #load directive or by passing it on the command line.
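For example, in the toplevel you can load it explicitly (a sketch; it assumes fapi.cma and its dll can be found, as in the earlier transcript):
$ ocaml
# #load "fapi.cma";;
# Fapi.ofun 33;;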
Also the following line is not necessary,
ld -shared -o dllostuff.so ocstuff.o -L . -lstuff -lc -rpath .
First of all, there is no need to link a stub file into a shared library. It is counterproductive and doesn't really bring you anything. Second, passing -rpath . will render the final executable unusable unless the shared objects are stored in the same folder as the executable. Just remove this.
Just to complete your exercise, here is how it could be built and run. First, let's fix the stub file. We need the ml file and we also need to remove an extra module definition,
$ cat fapi.{ml,mli}
external ofun : int -> int = "yofun" ;;
external ofun : int -> int = "yofun" ;;
Yes, they are the same. The mli file is not really needed here, but let's keep it for the sake of completeness.
The way you build the pure C part is fine; as long as you get a relocatable .so file, it works.
Now, to build ocstuff.c (which we conventionally call the stubs) you just need to do:
ocamlc -c ocstuff.c
Don't turn it into a shared library, don't do anything else with it. Now let's build the fapi library,
ocamlc -c fapi.mli
ocamlc -c fapi.ml
Now let's build the library that contains both OCaml and C code,
ocamlmklib -o fapi fapi.cmo ocstuff.o -lstuff -L.
Now we can finally build the executable,
ocamlc -c myprog.ml
LD_LIBRARY_PATH=. ocamlc -o myprog fapi.cma myprog.cmo
and run it,
LD_LIBRARY_PATH=. ./myprog
duuude whoa z=33
Notice that we have to use LD_LIBRARY_PATH to tell the system dynamic loader where to look for the external dependency libstuff.so. You can, of course, use rpath to specify its location (pass it to ocamlmklib via -ccopt), but in general it is assumed that the external library is installed in some location that the system loader already knows about.
Again, unless you're developing your own build system, please use dune or oasis for building OCaml programs. These systems will handle all low-level details in the best possible way.
P.S. It is also worth mentioning that you're not building a binary, but a bytecode executable. For binaries, you will have to use the ocamlopt compiler. And this would be a completely different story. Again, dune is the solution.
There is a lot to take in here, but these lines are suspicious:
ocamlc -c myprog.ml
ocamlc -o myprog myprog.cmo fapi.cma
OCaml expects modules in topologically sorted order, with a module appearing on the command line before the modules that refer to it.
So it would seem the last line should be this:
ocamlc -o myprog fapi.cma myprog.cmo
I hope this helps, it's just a quick response.
The answer provided by ivg works. It also provides enough hints to retrofit the original question to get the correct behavior. The changes to the original recipe are:
Create fapi.mli and fapi.ml which both have the same content: external ofun : int -> int = "yofun" ;;
Compile both of the above with ocamlc -c. The mli must be compiled first: it yields an interface file (cmi) which is needed before the ml file can be compiled into its object file (cmo).
The name dllostuff.so was wrong: it must be dllfapi.so to maintain naming consistency.
Build the cma archive/library as ocamlc -a -o fapi.cma fapi.cmo -dllib -lfapi
That's it! Other than these, the original instructions work. The answer from ivg suggests using
ocamlmklib -o fapi fapi.cmo ostuff.o -L. -lstuff
instead of
ld -shared -o dllfapi.so ostuff.o -L. -lstuff
Either of these works. The primary difference is that ocamlmklib also creates a statically linked library, libfapi.a. Other than that, it creates dllfapi.so as before. (That version also contains a motley assortment of typical gcc symbols, for handling exceptions, library ctors, etc. It's not clear why these are needed here, since they'll show up sooner or later anyway.)

Static libraries in Mac OS X

I have a makefile on Mac OS X, and the last command line of the final compilation is:
gcc count_words.o lexer.o -lfl -o count_words
but it responds:
ld: library not found for -lfl
collect2: ld returned 1 exit status
I found that the library libfl.a is in /opt/local/lib/, and that after modifying the command line to read:
gcc count_words.o lexer.o -L/opt/local/lib/ -lfl -o count_words
it works perfectly. But I've read that when a prerequisite of the form -lNAME is seen, GNU make searches for a file of the form libNAME.so; if no match is found, it then searches for libNAME.a. Here make should find /opt/local/lib/libfl.a and proceed with the final action, linking, but this is not happening.
I tried using LD_LIBRARY_PATH, then realized that since I'm working on a Mac I have to use DYLD_LIBRARY_PATH; I exported the variable pointing to /opt/local/lib and tried running the makefile again, but it didn't work. I found another environment variable called DYLD_FALLBACK_LIBRARY_PATH, exported it, and it didn't work either.
What should I do?
DYLD_LIBRARY_PATH (and LD_LIBRARY_PATH on other unices) provides search paths for the loader, to resolve linked libraries at runtime. LIBRARY_PATH is the relevant var for providing paths that the compiler will pass to the linker at link time.
However, OS X's linker ld64 has no way to prefer static linking over dynamic in the presence of both kinds of libraries, which means your only option is to pass the full path to the archive anyway.
gcc count_words.o lexer.o /opt/local/lib/libfl.a -o count_words
Which is really all that -l does after it searches the paths and expands the lib name.
make does not search for the library at all; make just invokes other tools that do (ld, which is invoked by gcc). All you need to do is pass the proper flags to gcc from make. Possibly, this just means adding
LDFLAGS=-L/opt/local/lib
to your Makefile (or editing the command directly, as it appears you have done during testing), but it is difficult to tell without seeing the Makefile.
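For instance, with make's conventional variables the final link might be written as follows (a sketch, assuming GNU Make):
LDFLAGS = -L/opt/local/lib
LDLIBS  = -lfl

count_words: count_words.o lexer.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)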
Probably the question Library not found for -lfl is relevant. For some reason, if you try -ll instead of -lfl, it works on OS X. Also see http://linux-digest.blogspot.hk/2013/01/using-flex-on-os-x.html

Linking with another start-up file

I am trying to link a program with my own start-up file by using the STARTUP directive in a LD script:
...
ENTRY(_start)
STARTUP(my_crt1.o)
...
The GCC driver is used to link the program (so as not to bother with library paths like libgcc, etc.):
gcc -T my_script.ld ...
Unfortunately, it only works with a GCC built for powerpc targets; arm and i686 targets don't work and still include crt0.o via collect2. For example:
arm-eabi-g++ -v -T my_script.ld ...
gives me:
collect2 ... /opt/lib/gcc/arm-eabi/4.8.0/../../../../arm-eabi/lib/crt0.o ...
and thus:
crt0.S:101: multiple definition of `_start'
It seems the STARTUP directive is totally ignored (the powerpc target uses its default crt0 too unless the STARTUP directive is specified) and there is no way to disable the default crt0.
Is there a portable way to link against another start-up file?
My start-up file uses libgcc functions (to call ctors and dtors), so crtbegin.o, crtend.o, etc. are needed; I would therefore like to avoid the -nostartfiles option, which disables all of the crt*.o objects. I need to disable crt0.o only.
Thank you
I am trying to link a program with my own start-up file ...
GCC driver is used to link the program ...
In that case, you must also supply the -nostartfiles flag to GCC.
This limitation indeed forces you to disable the default start-up files with -nostartfiles (I prefer -nostdlib). You then need to build the list of run-time objects yourself. gcc has the option -print-file-name to print the absolute path of the objects and libraries it was built with (crtbegin.o, crtend.o, libgcc.a, ...). For example: arm-eabi-g++ <FLAGS> -print-file-name=crtbegin.o
Here is the GNU Make macro I use (providing gcc and cflags):
define m.in/toolchain/gnu/locate =
$(strip
$(shell $(m.in/toolchain/gnu/bin/gcc) $(m.in/toolchain/gnu/cflags) \
-print-file-name=$(m.in/argv/1))
)
endef
crtn := $(call m.in/toolchain/gnu/locate, crtn.o)
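Stripped of that framework, the same idea in a plain Makefile might look like this (a sketch; the cross prefix, object list, and link order are assumptions on my part, not from the question):
CROSS    := arm-eabi-
crtbegin := $(shell $(CROSS)gcc $(CFLAGS) -print-file-name=crtbegin.o)
crtend   := $(shell $(CROSS)gcc $(CFLAGS) -print-file-name=crtend.o)

prog.elf: my_crt1.o main.o
	$(CROSS)gcc -nostartfiles -T my_script.ld -o $@ \
		$(crtbegin) my_crt1.o main.o $(crtend) -lgcc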

Portable way to link statically against one of the libraries

I am creating a utility which depends on libassuan, among other dependencies. While these 'others' provide shared libraries, libassuan comes with a static one only.
libassuan comes with a simple libassuan-config tool which is meant to provide the CFLAGS & LDFLAGS for the compiler/linker to use. These LDFLAGS refer to the library as -lassuan.
The result of a standard make invocation is then:
cc -I/usr/include/libmirage -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -lmirage -lglib-2.0 -L/usr/lib64 -lassuan -o mirage2iso mirage2iso.c mirage-getopt.o mirage-wrapper.o mirage-password.o
mirage-password.o: In function `mirage_input_password':
mirage-password.c:(.text+0x1f): undefined reference to `assuan_pipe_connect'
mirage-password.c:(.text+0x32): undefined reference to `assuan_strerror'
collect2: ld returned 1 exit status
make: *** [mirage2iso] Error 1
(I've just started writing this unit and that's why there aren't more errors)
So, if I understand the result correctly, gcc doesn't want to link the app against libassuan.a.
Using -static here would cause gcc to prefer static libraries over shared ones, which is unintended. I've seen a solution suggesting something like this:
-Wl,-Bstatic -lassuan -Wl,-Bdynamic
but I don't think it would be a portable one.
I think the best solution would be to provide the full path to the static library file, but libassuan-config doesn't provide much help here (all I can get from it is -L/usr/lib64 -lassuan).
Maybe I should just try to construct the static library path by 'parsing' the returned LDFLAGS, using -L for the directory name and -l for the library name, and then hope that libassuan-config will always return it in that form.
What do you think about that? Is there any good, simple and portable solution to resolve the issue?
PS. Please note that although I'm referring to gcc here, I would like to use something that will work fine with other compilers.
PS2. One additional question: if a package installs a static library only, can returning such LDFLAGS instead of the full .la path be considered a bug?
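For reference, the 'parsing' idea above would be something like this in GNU Make (a rough sketch; the variable names are made up, and it assumes libassuan-config's --libs output keeps the -L<dir> -lassuan form shown earlier):
ASSUAN_LIBS   := $(shell libassuan-config --libs)
ASSUAN_LIBDIR := $(patsubst -L%,%,$(filter -L%,$(ASSUAN_LIBS)))

mirage2iso: mirage2iso.o mirage-password.o mirage-wrapper.o
	$(CC) -o $@ $^ $(ASSUAN_LIBDIR)/libassuan.a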
gcc will link to libassuan.a if it doesn't find libassuan.so
It's probably the order in which symbols are looked up in the static library when you link. The order matters.
Assuming gcc can find libassuan.a and it actually provides the functions the linker complains about, try:
cc -I/usr/include/libmirage -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -lmirage -lglib-2.0 -L/usr/lib64 -o mirage2iso mirage2iso.c mirage-getopt.o mirage-wrapper.o mirage-password.o -lassuan
Since you say libassuan is under /usr/lib64, it's probably a 64-bit library; are your app and the other libraries 64-bit as well?
Compilers' command-line options are not a portable thing; there's no standard for them. Every compiler uses its own format, and several may merely agree informally to be compatible with each other. The most portable way to do your linking is to use libassuan-config, of course. I think it can generate flags not only for gcc but for other compilers as well. If it can't, then I suppose no portable way exists (other than CMake or something at a higher level).
The command line to cc that you've shown is correct. If you have the static library libassuan.a and the path to it is supplied via the -L option, then the compiler does link against it. You can see this from its output: had it not found the static library, it would complain with an error message like "cannot find -lassuan".
Moreover, if no libassuan.so is found, the compiler links against your library statically, even if you haven't used the -Wl,-Bstatic trick or the -static flag.
Your problem may be that several versions of libassuan are present on your system. Other than that, I don't see any errors in what you've provided.
Which directory is libassuan.a in?
I think the first error is not that gcc doesn't want to link the app against libassuan.a; it is more that gcc does not know where libassuan.a is. You need to pass gcc a -L parameter giving the path to libassuan.a,
e.g.
-L /home/path
