I found this for a vanilla case of Bash on GNU/Linux, but what about other shells, other operating systems and other compilers?
How many object files can I pass to a linking task?
Depends on the linker you use.
When using binutils GNU ld or gold, you can use a Windows-style response file, which allows you to bypass command-line length limits and pass as many arguments as you need:
echo "foo.o bar.o baz.o ... -lc" > args
gcc main.o -Wl,@args # there is no limit on how big the args file can be.
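GCC itself also understands the same @file syntax, so the -Wl, indirection is optional; for example:
gcc main.o @args # gcc expands the contents of args into its own argument list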
I am attempting to compile this code:
#include <GLFW/glfw3.h>
int main() {
glfwInit();
glfwTerminate();
return 0;
}
Using this command in MSYS2 on Windows 10:
gcc -Wall runVulkan.c -o runVulkan
as well as this:
gcc -Wall -Llibs/glfw runVulkan.c -o runVulkan
libs/glfw is where I downloaded the library to.
For some reason I keep getting this:
runVulkan.c:1:10: fatal error: GLFW/glfw3.h: No such file or directory
1 | #include <GLFW/glfw3.h>
| ^~~~~~~~~~~~~~
compilation terminated.
It seems like I'm getting something very basic wrong.
I'm just getting started with C; I'm trying to use the Vulkan libraries.
Run pacman -S mingw-w64-x86_64-glfw to install GLFW.
Then build using gcc -Wall -o runVulkan runVulkan.c `pkg-config --cflags --libs glfw3`.
The pkg-config command prints the flags necessary to use GLFW, and the backticks pass its output to GCC as flags. You can also run it separately and manually pass any printed flags to GCC.
Note that any -l... flags (those are included in pkg-config output) must be specified after .c or .o files, otherwise they'll have no effect.
For me pkg-config prints -I/mingw64/include -L/mingw64/lib -lglfw3.
-I fixes No such file or directory. It specifies a directory where the compiler will look for #included headers. It's unnecessary when installing GLFW via pacman, though, since /mingw64/include is always searched by default.
-l fixes undefined reference errors, which you'd get after fixing the previous error. -lglfw3 needs a file called libglfw3.a or libglfw3.dll.a (or some other variants).
-L specifies a directory where -l should search for the .a files, though it's unnecessary when installing GLFW via pacman, since /mingw64/lib is always searched by default.
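For reference, with the flags pkg-config printed above spelled out by hand, the equivalent command would be (same build, nothing new; note -lglfw3 comes after the .c file, per the ordering rule above):
gcc -Wall -I/mingw64/include runVulkan.c -o runVulkan -L/mingw64/lib -lglfw3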
#included files are just headers, containing declarations. gcc, like any compiler, needs to know where those .h files should be searched for.
You can specify that with the -I option (or the C_INCLUDE_PATH environment variable).
You'll also need the -L option, this time to provide the library itself (a .h does not contain the library, just the declarations the compiler needs in order to compile code that uses the library's functions and types).
-L option tells the compiler where to search for libraries.
But here, you haven't specified any library, just headers. I know it seems logical that they go together, but strictly speaking there is no way to guess from #include <GLFW/glfw3.h> which library that header belongs to. (That is not just theory: in practice, the well-known libc, for example, spreads its declarations over many different headers.)
So, you will also have to specify a -l option. In your case -lglfw.
This may seem overcomplicated, because in your case you compile and link in a single command (going from .c to executable directly), but those really are two different operations done in one command.
Creating an executable from .c source code is done in two stages.
First comes compilation itself: creating a .o from each .c (many .c files for big projects, hence many compilation commands), using commands such as
gcc -I /path/where/to/find/headers -c mycode.c -o mycode.o
This stage is not related to the library, so no -l (and therefore no -L) here. What is compiled is your code, so only your code is needed at this stage, plus the header files: your code refers to functions and types it does not define, and the compiler needs, not their code, but at least declarations that they really exist, and what types the functions expect and return. That is what the header files provide.
Then, once all the .o files are compiled, you need to put together all the compiled code, yours (the .o files) and the libraries (which are, roughly, a sort of .zip of .o files), to create an executable. That is called linking, and it is done with commands like
gcc -o myexec mycode1.o mycode2.o -L /path/where/to/search/for/libraries -lrary
(-lbla is a compact way to include /path/where/to/search/for/libraries/libbla.so or /path/where/to/search/for/libraries/libbla.a)
At this stage, you no longer need -I or anything related to headers. The code is already compiled, and headers have no role left. But you do need everything required to find the compiled code of the libraries.
So, tl;dr
At the compilation stage (the stage that raises the error you have for now), you need the -I option so that the compiler knows where to find GLFW/glfw3.h.
But that alone won't spare you the next error, which will occur at the linking stage. At that stage, you need -lglfw to specify that you want to use that library, and a -L option so that the linker knows where to find libglfw.so.
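Putting the two stages together for this question (a sketch: the /mingw64 paths and the glfw3 library name are taken from the other answer and may differ on your setup):
gcc -I/mingw64/include -c runVulkan.c -o runVulkan.o # compile: needs headers, not the library
gcc -o runVulkan runVulkan.o -L/mingw64/lib -lglfw3 # link: needs the library, not the headers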
I am able to wrap C code and access it from the OCaml interpreter, but cannot build a binary! I'm 98% sure it is some linking problem, but can't find the tools to explore the linkage.
Getting even to this point was a chore, (endless quantities of Error: The external function is not available messages) so I'll document everything I did.
A 'system' file stuff.c
#include <stdio.h>
int fun(int z) // Emulate a "real" subroutine
{
printf("duuude whoa z=%d\n", z);
return 42;
}
Compile above as
cc -fPIC -c stuff.c
ld -shared -o libstuff.so stuff.o
An OCaml wrapper around above, in ocmstuff.c:
#include <caml/mlvalues.h>
extern int fun(int); /* stuff.c ships no header, so declare it by hand */
CAMLprim value yofun(value z) {
    return Val_long(fun(Long_val(z)));
}
Build above as
cc -fPIC -c ocmstuff.c
ld -shared -o dllostuff.so ocmstuff.o -L . -lstuff -lc -rpath .
Yes, the rpath really is needed, else the next steps suffer. (Edit: If you don't use rpath, you'll need to use LD_LIBRARY_PATH=. instead. For the final 'production' version, you'd change the rpath to the actual library path, or do ld.so.conf trickery or install into 'standard' locations, or tell your users about LD_LIBRARY_PATH. This is just like what you'd do for any other system. The rpath solution seems to be the most stable and reliable solution.)
Next, a module declaration, stored in fapi.mli
module Fapi : sig
external ofun : int -> int = "yofun" ;;
end
Build above as:
ocamlc -a -o fapi.cma -intf fapi.mli -dllib -lostuff
Does it work? Yes it does:
$ rlwrap ocaml fapi.cma
OCaml version 4.11.1
# open Fapi ;;
# Fapi.ofun 33 ;;
duuude whoa z=33
- : int = 42
#
So the wrapper works fine. Now let's compile with it. Here's myprog.ml:
open Fapi ;;
Fapi.ofun 33 ;;
Compile it:
ocamlc -c myprog.ml
ocamlc -o myprog myprog.cmo fapi.cma
The very last command spews:
File "_none_", line 1:
Error: Required module `Fapi' is unavailable
I am 98% sure the above error is due to some silly linking error, but I cannot track it down. Why do I think this? Well, here's a related problem that provides a hint.
$ rlwrap ocaml
# open Fapi ;;
# Fapi.ofun 33 ;;
Error: The external function `yofun' is not available
#
Well, that's odd. It clearly must have found fapi.cma, because that is the only way it could know about yofun. But somehow it doesn't know it needs to dig into dllostuff.so for it. Or possibly dllostuff.so is failing to correctly link/load libstuff.so? Or maybe libc.so, to get printf? I'm pretty sure it's one of these last few, but I just can't get it to work, and don't have the tools to debug it. (nm and ldd -r look healthy. Are there similar tools for the assorted cma, cmo, cmi, cmx files?)
Interfacing with C is much easier if you use dune. You don't need to know the low-level details; it is all handled for you.
Now, back to your example. This is definitely not how OCaml users are interfacing with C, but if you really want to learn about it, here are a few notes.
The reason why you have the error is that:
you specified the modules in an incorrect order; it should be topological, not reverse-topological order, i.e., the dependency comes before the dependent
you do not have the .ml file (the -intf option means something very different)
The reason why the last snippet doesn't work is that you're not loading the library. The ocaml binary obviously doesn't have any fapi units linked into it, so you have to load it explicitly, either with the #load directive or by passing it on the command line.
Also the following line is not necessary,
ld -shared -o dllostuff.so ocmstuff.o -L . -lstuff -lc -rpath .
First of all, there is no need to link a stub file into a shared library; it is counterproductive and doesn't really bring you anything. Second, passing -rpath . will render the final executable unusable unless the shared objects are stored in the same folder as the executable. Just remove this.
Just to complete your exercise, here is how it could be built and run. First, let's fix the stub file. We need the ml file and we also need to remove an extra module definition,
$ cat fapi.{ml,mli}
external ofun : int -> int = "yofun" ;;
external ofun : int -> int = "yofun" ;;
Yes, they are the same. The mli file is not really needed here, but let's keep it for the sake of completeness.
The way you build the pure C part is fine; as long as you get a relocatable .so file, it works.
Now, to build ocmstuff.c (which we conventionally call the stubs) you just need to do,
ocamlc -c ocmstuff.c
Don't turn it into a shared library, don't do anything else with it. Now let's build the fapi library,
ocamlc -c fapi.mli
ocamlc -c fapi.ml
Now let's build the library that contains both OCaml and C code,
ocamlmklib -o fapi fapi.cmo ocmstuff.o -lstuff -L.
Now we can finally build the executable,
ocamlc -c myprog.ml
LD_LIBRARY_PATH=. ocamlc -o myprog fapi.cma myprog.cmo
and run it,
LD_LIBRARY_PATH=. ./myprog
duuude whoa z=33
Notice that we have to use LD_LIBRARY_PATH to tell the system dynamic loader where to look for the external dependency libstuff.so. You can, of course, use rpath to specify its location (pass it to ocamlmklib via -ccopt, as illustrated below), but in general it is assumed that the external library is installed at some location the system loader knows about.
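For illustration, forwarding an rpath through ocamlmklib could look like this (the install path is hypothetical; -ccopt hands the option on to the C toolchain):
ocamlmklib -o fapi fapi.cmo ocmstuff.o -lstuff -L. -ccopt -Wl,-rpath,/opt/stuff/lib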
Again, unless you're developing your own build system, please use dune or oasis for building OCaml programs. These systems handle all the low-level details in the best possible way.
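For completeness, a minimal dune setup for this example might look something like the following sketch (stanza names per the dune manual; a flat directory layout is assumed):

; dune-project
(lang dune 3.0)

; dune
(library
 (name fapi)
 (modules fapi)
 (foreign_stubs (language c) (names ocmstuff))
 (c_library_flags (-L. -lstuff)))

(executable
 (name myprog)
 (modules myprog)
 (libraries fapi))

dune build ./myprog.bc would then produce the bytecode executable.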
P.S. It is also worth mentioning that you're not building a binary, but a bytecode executable. For binaries, you will have to use the ocamlopt compiler. And this would be a completely different story. Again, dune is the solution.
There is a lot to take in here, but these lines are suspicious:
ocamlc -c myprog.ml
ocamlc -o myprog myprog.cmo fapi.cma
OCaml expects modules in topologically sorted order, with a module appearing on the command line before the modules that refer to it.
So it would seem the last line should be this:
ocamlc -o myprog fapi.cma myprog.cmo
I hope this helps, it's just a quick response.
The answer provided by ivg works. It also provides enough hints to retrofit the original question to get the correct behavior. The changes to the original recipe are:
Create fapi.mli and fapi.ml which both have the same content: external ofun : int -> int = "yofun" ;;
Compile both of the above with ocamlc -c. The mli must be compiled first: it yields an interface file (.cmi) which is needed before the ml file can be compiled into its object file (.cmo).
The name dllostuff.so was wrong: it must be dllfapi.so to maintain naming consistency.
Build the cma archive/library as ocamlc -a -o fapi.cma fapi.cmo -dllib -lfapi
That's it! Other than these, the original instructions work. The answer from ivg suggests using
ocamlmklib -o fapi fapi.cmo ocmstuff.o -L. -lstuff
instead of
ld -shared -o dllfapi.so ocmstuff.o -L. -lstuff
Either of these works. The primary difference is that ocamlmklib also creates a statically linked library, libfapi.a. Other than that, it creates dllfapi.so as before. (That version also contains a motley assortment of typical gcc symbols for handling exceptions, library ctors, etc. It's not clear why these are needed here, since they'll show up sooner or later anyway.)
I recently started compiling/linking my C files by hand using the gcc command. However, it requires all of the source files to be typed at the end of the command. When there are many files to compile/link, it gets tedious.
That's why I had the idea of making a bash alias for the command which would directly type all *.h and *.c files of the folder.
My line in .bashrc is this:
alias compile='ls *.c *.h | gcc -o main'
I found it to work sometimes, but most of the time compile returns this:
gcc: fatal error: no input files
compilation terminated.
I thought the pipe would give the results of ls *.c *.h as arguments to gcc, but it doesn't seem to work that way. What am I doing wrong? Is there a better way to achieve the same thing?
Thanks for helping
A pipe does not create command line arguments. A pipe feeds standard input.
You need xargs to convert standard input to command line arguments.
But you don't need (or want) xargs or ls or standard input here at all.
If you just want to compile every .c file into your executable then just use:
gcc -o main *.c
(You don't generally need .h files on gcc command lines.)
As Kay points out in the comments, the pedantically correct (and I don't intend that in a pejorative fashion) and safer version of the above command is:
gcc -o main ./*.c
See Filenames and Pathnames in Shell: How to do it Correctly for an extensive discussion of the various issues here.
That being said, you can use any of a number of tools to save you from needing to do this, and from needing to rebuild everything when only some things change:
tools like make or its many clones, "front-ends" (e.g. the autotools, cmake), or replacements (tup, scons, cons, and about a million other tools).
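For instance, a minimal GNU Makefile for this situation might be (a sketch; main and the wildcard are illustrative, and the recipe line must start with a tab):

SRCS := $(wildcard ./*.c)

main: $(SRCS)
	gcc -Wall -o $@ $(SRCS)

After that, running make rebuilds main only when one of the .c files has changed.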
Have you tried using a makefile? It sounds like that might be more efficient for what you're trying to do.
If you really want to do it with Bash aliases, you have to use xargs to turn standard input into command-line arguments.
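For illustration only, such an alias could be written as follows (this still breaks on filenames containing whitespace, so the plain glob from the other answer remains the better fix):
alias compile='printf "%s\n" ./*.c | xargs gcc -o main'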
There are several misconceptions here:
the pipe redirects the standard output of the first command to the standard input of the second command; however, gcc doesn't accept the files to compile on stdin, but on the command line;
the wildcard syntax is not something that is magical just to ls, it's the shell that performs their expansion on the command line;
header files are not to be compiled - you compile .c files, which in turn may include headers.
Armed with this knowledge, you'll understand that the correct command is something like
gcc -o main *.c
Actually, we can do better: first of all, you'll want to change *.c to ./*.c; this prevents files whose names start with a - from being interpreted as command-line options.
Most importantly, you should really enable the compiler warnings; they can be a life saver. You'll want to add -Wall and -Wextra.
gcc -Wall -Wextra -o main ./*.c
Finally, it's worth saying that by default you are compiling with optimizations disabled. If you are debugging, that's OK, but then you'll also want to add -g to get an executable usable in a debugger; otherwise, if the target is speed, you should at least add -O2.
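Concretely, the two variants would be:
gcc -Wall -Wextra -g -o main ./*.c # debug build
gcc -Wall -Wextra -O2 -o main ./*.c # optimized build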
I don't get it. I usually install third-party software into /usr/local, so libraries are installed into /usr/local/lib, and I have never had problems linking to these libraries. But now it suddenly no longer works:
$ gcc -lkaytils -o test test.c
/usr/bin/ld.gold.real: error: cannot find -lkaytils
/usr/bin/ld.gold.real: /tmp/ccXwCkYk.o: in function main:test.c(.text+0x15):
error: undefined reference to 'strCreate'
collect2: ld returned 1 exit status
When I add the parameter -L/usr/local/lib then it works, but I never had to use this before. Header files in /usr/local/include are found without adding -I/usr/local/include.
I'm using Debian GNU/Linux 6 (Squeeze) which has an entry for /usr/local/lib in /etc/ld.so.conf.d/libc.conf by default and the ldconfig cache knows the library I'm trying to use:
k#vincent:~$ ldconfig -p | grep kaytils
libkaytils.so.0 (libc6,x86-64) => /usr/local/lib/libkaytils.so.0
libkaytils.so (libc6,x86-64) => /usr/local/lib/libkaytils.so
So what the heck is going on here? Where can I check which library paths are searched by gcc by default? Maybe something is wrong there.
gcc -print-search-dirs will tell you which paths the compiler checks. /usr/local/lib is simply not among them, so your compile-time linker (in this case the new gold ld from binutils) doesn't find the library, while the dynamic one (ld-linux.so, which reads the cache written by ldconfig) does. Presumably the builds you've done previously added -L/usr/local/lib as necessary in their makefiles (usually done by a ./configure script), or you installed binaries.
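For example, to print the compile-time library search path one directory per line:
gcc -print-search-dirs | sed -n 's/^libraries: =//p' | tr ':' '\n'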
This is probably an issue of environment variables: you have something set that's including /usr/local/include but not /usr/local/lib.
From the GCC manpage on environment variables:
CPATH specifies a list of directories to be searched as if specified with -I, but after any paths given with -I options on the command line. This environment variable is used regardless of which language is being preprocessed.
and
The value of LIBRARY_PATH is a colon-separated list of directories, much like PATH. When configured as a native compiler, GCC tries the directories thus specified when searching for special linker files, if it can't find them using GCC_EXEC_PREFIX. Linking using GCC also uses these directories when searching for ordinary libraries for the -l option (but directories specified with -L come first).
try "printenv" to see what you have set
I am new with using gcc and so I have a couple of questions.
What do the following switches accomplish:
gcc -v -lm -lfftw3 code.c
I know that lfftw3 is a .h file used with code.c, but why is it part of the command?
I couldn't find out what -lm does in my search. What does it do?
I think I found out that -v causes gcc to display the programs it invokes.
-l specifies a library to include. In this case, you're including the math library (-lm) and the fftw3 library (-lfftw3). The library will be somewhere in your library path, possibly /usr/lib, and will be named something like libfftw3.so
From GCC's man page:
-v Print (on standard error output) the commands executed to run the
stages of compilation. Also print the version number of the
compiler driver program and of the preprocessor and the compiler
proper.
-llibrary
-l library
Search the library named library when linking. (The second alternative with the library as a separate argument is only for POSIX compliance and is not recommended.)
It makes a difference where in the command you write this option;
the linker searches and processes libraries and object files in the
order they are specified. Thus, foo.o -lz bar.o searches library z
after file foo.o but before bar.o. If bar.o refers to functions in
z, those functions may not be loaded.
The linker searches a standard list of directories for the library,
which is actually a file named liblibrary.a. The linker then uses
this file as if it had been specified precisely by name.
The directories searched include several standard system
directories plus any that you specify with -L.
Normally the files found this way are library files---archive files
whose members are object files. The linker handles an archive file
by scanning through it for members which define symbols that have
so far been referenced but not defined. But if the file that is
found is an ordinary object file, it is linked in the usual
fashion. The only difference between using an -l option and
specifying a file name is that -l surrounds library with lib and .a
and searches several directories.
libm is the library that math.h uses, so -lm links in that library. You might want to get a better grasp of the concept of linking: basically, that switch adds a bunch of compiled code to your program.
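To make that concrete, here is a minimal sketch (demo.c is a made-up name): sqrt is declared in math.h, but its compiled code lives in libm, so the link step needs -lm on most Unix-like systems.

#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    (void)argv;
    /* a runtime value keeps the compiler from folding sqrt() into a constant */
    printf("sqrt(%d) = %f\n", argc, sqrt((double)argc));
    return 0;
}

Build it with gcc demo.c -o demo -lm; without -lm you would typically get an undefined reference to `sqrt' at link time.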
-lm links your program with the math library.
-v is the verbose (extra output) flag for the compiler.
-lfftw3 links your program with the fftw3 library.
Using #include "fftw3.h" only pulls in headers, i.e. declarations. If you want to actually include the code associated with them, you need to link against it; that is what -l is for: linking with libraries.
Arguments starting with -l specify a library which is linked into the program. As Pablo Santa Cruz said, -lm is the standard math library; -lfftw3 is a library for Fourier transforms.
Try man when you're trying to learn about a command.
From man gcc
-v Print (on standard error output) the commands executed to run the stages of compilation. Also print the version number of the compiler driver program and of the preprocessor and the compiler proper.
As Pablo stated, -lm links your math library.
-lfftw3 links in a library used for Fourier transforms. The project page, with more info, can be found here:
http://www.fftw.org/
The net gist of all these options is that they compile your code file into a program, which will get the default name (a.out) and depends on function calls from the math and Fourier-transform libraries. The -v option just helps you keep track of the compilation process and diagnose errors should they occur.
In addition to man gcc, which should be the first stop for questions about any command, you can also try the almost-standard --help option. Even for commands that don't support it, an unsupported option usually causes them to print an error containing usage information that should hint at a similar option. In this case, gcc will display a terse help summary (terse for gcc: it's only about 50 lines long) listing the small number of options that are understood by the gcc program itself rather than passed on to its component programs. After the description of the --help option itself, it lists --target-help and -v --help as ways to get more information about the target architecture and the component programs.
My MinGW GCC 3.4.5 installation generates more than 1200 lines of output from gcc -v --help on Windows XP. I'm pretty sure that doesn't get much smaller in other installations.
It would also be a good idea to read the official manual for GCC. It is also helpful to read the documentation for the linker (ld) and assembler (often gas or just as, but it may be some platform specific assembler as well); aside from a platform-specific assembler, these are documented as part of the binutils collection.
General familiarity with the command-line style of Unix tools is also helpful. The idea that a single-character option's value might not be delimited from the option name is a convention that goes back essentially as far as Unix does. The modern convention (promulgated by GNU) that multiple-character option names are introduced by -- instead of just - implies that -lm might be a synonym for -l m (or the pair of options -l -m under some conventions, though that happens not to be the case for gcc), but it is probably not a single option named -lm. You will see a similar pattern with the -f options that control specific optimizations, or the -W options that control warnings, for example.