When making a library on unix, is anything but "ar rcs" necessary? - linker

I have a number of source files I want to agglomerate into a .a file. I make the library with the command
ar rcs libcathat.a thing1.o thing2.o fish.o
I then attempt to link to this library with the same compiler I used to make the .o files (g++):
g++ -L/path/to/cathat -lcathat seuss.o -o seuss
But this produces errors when I try to use functions defined in thing1.cpp (and in theory represented in thing1.o) of the form:
/path/seuss.cpp:46: undefined reference to `redFishBlueFish(int, char**)'
Is there something else I need to do to a .a file to make it possible to link to it?

Try moving the linker statements to the end:
g++ seuss.o -o seuss -L/path/to/cathat -lcathat
If that doesn't work, make sure those symbols are actually in the archive:
nm libcathat.a
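For example, assuming the missing function really does live in thing1.o, something like this (GNU nm; -C demangles the C++ names) should list it with a T (defined in the text section) rather than a U (undefined):
nm -C libcathat.a | grep redFishBlueFish
If grep finds nothing, the problem is in how the archive was built rather than in the link line.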

Usually, you don't need to do anything else on most modern versions of Unix.
On some, mainly older, versions of Unix, it was necessary to use ranlib on a library to add a lookup table that allowed the linker to find symbols quickly. Almost all modern versions of ar do this automatically. Needing ranlib is something of a hangover from the 'bad old days' of 7th Edition UNIX™.
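On those older systems the archive was built in two steps, and the second step is still harmless to run today; roughly:
ar rc libcathat.a thing1.o thing2.o fish.o
ranlib libcathat.a
The s in ar rcs does the same job as the separate ranlib invocation, which is why the extra step is no longer needed.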
For some reason which I can't now locate, I was building archives on Mac OS X 10.7.4 with ranlib too. I must have had a reason for doing so, but that reason seems to be irrelevant now: archive libraries seem to work OK without ranlib on Mac OS X 10.7.4, at least for a single architecture. I did find a change I made in July 2004 that put ranlib back into a makefile, but the check-in notes don't say why I made the change. I've updated the rule definition file so it no longer uses ranlib.

Related

ld: warning: directory not found for option: -LC_ID_DYLIB=/usr/lib

I'm using OS X command-line gcc and attempting to build a dynamic library. When I do the build I get the above warning. How is it not finding this directory, given that /usr/lib is well known? And /usr/lib does indeed exist on my machine.
this is what I am using:
gcc -arch i386 cata/*.c -dynamiclib -o build/cata.dylib -LC_ID_DYLIB=/usr/lib
Thanks
The way I solved it was to make it so the string that got embedded in the library (saying where to find the library at runtime) was relative to nowhere, if that makes sense, so the loader would be forced to fall back on its standard search path.
I was using the other flags because someone suggested I use them.
So the gcc command I ended up using is this:
# my tree is like this
# cata/*.c
# build/*.dylib
#
cd build
gcc -arch i386 ../cata/*.c -dynamiclib -o cata.dylib
Doing this builds a library whose recorded location is just the bare file name (basically no path), as if it lived in the directory it is 'used' from. I am now free to put it somewhere else. When it is later linked at compile time by a different program and then examined using
otool -L
it appears with no path in front of the library name. This is apparently preferable, because when the system tries to find it, it falls back to looking in the standard library locations and eventually finds it (since I install it to one of the standard locations).
In the original way, otool -L was showing it as having a required path of
'build/cata.dylib'
This made it un-findable, which is why I was trying to use the Apple documentation to get around the problem.
This doesn't really explain why LC_ID_DYLIB doesn't work. I looked into the Loader.h file (line 643) and it has room for an identifier (0xd), a path, and a structure, so I don't really understand why my path wasn't getting picked up. But they're two different topics: Loader.h is runtime and the other is gcc, AFAIK. I'm still learning Apple.
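For what it's worth, the documented way to set that LC_ID_DYLIB string explicitly is the linker's -install_name option rather than an -L flag. A sketch, where the install path is just an example:
gcc -arch i386 ../cata/*.c -dynamiclib -o cata.dylib -Wl,-install_name,/usr/local/lib/cata.dylib
otool -L cata.dylib
should then report /usr/local/lib/cata.dylib as the library's ID.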

C: Compiling pre-compiled code as inline

For some rather complicated reason, I have a set of files which I would like to compile separately and then link, but in such a way that the functions in one are placed inline in the second. This is because I would like them to be compiled with different flags in GCC. I know I could fix the problem by looking into how I could get around that, but I would like to know if this is possible.
EDIT 1:
If not, is it possible to compile the 'external' functions into a form of assembly that I could include in the other file? Yes, crazy, but also cool...
Having a quick look, this could well be an option. I guess it would be impossible to compile it in automatically, so could someone please give me a bit of information about assembly? I've only used basic ARM assembly. I've compiled a couple of toy functions with the -S flag in GCC. How do I link registers with variables? Will they always be in the same order? The function will be highly optimised. Where should I start and end the extract? Should I include .cfi_startproc at the start and .cfi_def_cfa 7, 8 at the end?
EDIT 2:
This post details how gcc can do link-time optimisations like this with -flto. Sadly this is only available from version 4.5, which I do not have, nor do I have the ability to install it since I do not have root access on the machine I need to compile this on. Another possible solution would be to explain how I could install a different version of GCC into a folder on a Unix machine.
As far as I know gcc doesn't do link-time optimizations (inlining in particular), at least with the standard ld linker (it could be that the new gold linker does it, but I really don't think so). Clang in principle should be capable of doing it, since it depends on LLVM, which supports link-time optimizations (it seems that your question is gcc-specific, though).
From your question, though, it seems you are looking for a way to merge object files after compilation, not necessarily by inlining their contained functions. This can be done in multiple ways:
Archiving them into a static library with ar, e.g. ar rcs libfoo.a obj1.o obj2.o.
Combining them into a third relocatable object (ld's --relocatable, or -r, option), e.g. ld -r -o obj3.o obj1.o obj2.o.
Putting them into a shared library (beware that this requires compiling the objects with -fPIC), e.g. gcc -shared -o libfoo.so obj1.o obj2.o.
You could compile with the -c option to create a set of .o files, or even make a .so file. Then use the sequence you like in the linking phase of gcc.
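If the underlying goal is just "different flags per file", a minimal sketch of that workflow (hot.c and rest.c are hypothetical file names) would be:
gcc -O3 -c hot.c
gcc -O0 -g -c rest.c
gcc hot.o rest.o -o prog
Each file gets its own flags, although nothing gets inlined across the two objects without link-time optimisation.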

gcc switches - what do these do?

I am new with using gcc and so I have a couple of questions.
What do the following switches accomplish:
gcc -v -lm -lfftw3 code.c
I know that lfftw3 is a .h file used with code.c, but why is it part of the command?
I couldn't find out what -lm does in my search. What does it do?
I think I found out -v causes gcc to display programs invoked by it.
-l specifies a library to link against. In this case, you're linking in the math library (-lm) and the fftw3 library (-lfftw3). The library will be somewhere in your library path, possibly /usr/lib, and will be named something like libfftw3.so
From GCC's man page:
-v Print (on standard error output) the commands executed to run the
stages of compilation. Also print the version number of the
compiler driver program and of the preprocessor and the compiler
proper.
-llibrary
-l library
Search the library named library when linking. (The second
alternative with the library as a separate argument is only for
POSIX compliance and is not recommended.)
It makes a difference where in the command you write this option;
the linker searches and processes libraries and object files in the
order they are specified. Thus, foo.o -lz bar.o searches library z
after file foo.o but before bar.o. If bar.o refers to functions in
z, those functions may not be loaded.
The linker searches a standard list of directories for the library,
which is actually a file named liblibrary.a. The linker then uses
this file as if it had been specified precisely by name.
The directories searched include several standard system
directories plus any that you specify with -L.
Normally the files found this way are library files---archive files
whose members are object files. The linker handles an archive file
by scanning through it for members which define symbols that have
so far been referenced but not defined. But if the file that is
found is an ordinary object file, it is linked in the usual
fashion. The only difference between using an -l option and
specifying a file name is that -l surrounds library with lib and .a
and searches several directories.
libm is the library that math.h uses, so -lm includes that library. You might want to get a better grasp of the concept of linking. Basically, that switch adds a bunch of compiled code to your program.
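A small illustration, assuming a hypothetical sq.c that genuinely calls into libm:
/* sq.c - prints the square root of its argument */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    double x = (argc > 1) ? atof(argv[1]) : 2.0;
    printf("%f\n", sqrt(x));   /* sqrt is provided by libm */
    return 0;
}
On many Unix systems, gcc sq.c -o sq fails with "undefined reference to `sqrt'", while gcc sq.c -o sq -lm links cleanly.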
-lm links your program with the math library.
-v is the verbose (extra output) flag for the compiler.
-lfftw3 links your program with fftw3 library.
You just include headers by using #include "fftw3.h". If you want to actually include the code associated with it, you need to link it; -l is for that: linking with libraries.
Arguments starting with -l specify a library which is linked into the program. As Pablo Santa Cruz said, -lm is the standard math library and -lfftw3 is a library for Fourier transforms.
Try man when you're trying to learn about a command.
From man gcc
-v Print (on standard error output) the commands executed to run the
stages of compilation. Also print the version number of the
compiler driver program and of the preprocessor and the compiler
proper.
As Pablo stated, -lm links your math library.
-lfftw3 links in a library used for Fourier transforms. The project page, with more info, can be found here:
http://www.fftw.org/
The net gist of all these statements is that they compile your code file into a program, which will be given the default name (a.out) and depends on function calls from the math and Fourier transform libraries. The -v flag just helps you keep track of the compilation process and diagnose errors should they occur.
In addition to man gcc, which should be the first stop for questions about any command, you can also try the almost standard --help option. Even for commands that don't support it, an unsupported option usually causes them to print an error containing usage information that should hint at a similar option. In this case, gcc will display a terse (for gcc, it's only about 50 lines long) help summary listing the small number of options that are understood by the gcc program itself rather than passed on to its component programs. After the description of the --help option itself, it lists --target-help and -v --help as ways to get more information about the target architecture and the component programs.
My MinGW GCC 3.4.5 installation generates more than 1200 lines of output from gcc -v --help on Windows XP. I'm pretty sure that doesn't get much smaller in other installations.
It would also be a good idea to read the official manual for GCC. It is also helpful to read the documentation for the linker (ld) and assembler (often gas or just as, but it may be some platform specific assembler as well); aside from a platform-specific assembler, these are documented as part of the binutils collection.
General familiarity with the command line style of Unix tools is also helpful. The idea that a single-character option's value might not be delimited from the option name is a convention that goes back essentially as far as Unix does. The modern convention (promulgated by GNU) that multiple-character option names are introduced by -- instead of just - implies that -lm might be a synonym for -l m (or for the pair of options -l -m in some conventions, though that happens not to be the case for gcc), but it is probably not a single option named -lm. You will see a similar pattern with the -f options that control specific optimizations or the -W options that control warnings, for example.

AIX xlC cross-compilation/linkage for C++ not finding C symbols

I am attempting to cross-compile on AIX with the xlc/xlC compilers.
The code compiles successfully when it uses the default settings on another machine. The code actually successfully compiles with the cross-compilation, but the problem comes from the linker. This is the command which links the objects together:
$(CHILD_OS)/usr/vacpp/bin/xlC -q32 -qnolib -brtl -o $(EXECUTABLE) $(OBJECT_FILES)
-L$(CHILD_OS)/usr/lib
-L$(CHILD_OS)/usr/vacpp/lib/profiled
-L$(CHILD_OS)/usr/vacpp/lib
-L$(CHILD_OS)/usr/vac/lib
-L$(CHILD_OS)/usr/lib
-lc -lC -lnsl -lpthread
-F$(CHILD_OS)$(CUSTOM_CONFIG_FILE_LOCATION)
When I attempt to link the code, I get several Undefined symbols:
.setsockopt(int,int,int,const void*,unsigned long), .socket(int,int,int), .connect(int,const sockaddr*,unsigned long), etc.
I have discovered that the missing symbols are from the standard C library, libc.a. When I looked up the symbols with nm in the libc.a that is being picked up, the symbols do indeed exist. I am guessing that there might be a problem with the C++ code being unable to use the C objects, but I am truly shooting in the dark.
Sounds like it might be a C++ name mangling problem.
Run nm on the object files to find out the symbols that they are looking for. Then compare the exact names against the libraries.
Then check the compilation commands, to ensure that the right version of the header files is being included - maybe it's including the parent OS's copy by mistake?
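For instance (the object and library paths here are just placeholders for your build tree):
nm myobject.o | grep setsockopt
nm /path/to/child_os/usr/lib/libc.a | grep setsockopt
If the first command shows a decorated C++ name and the second shows a plain C name, the calls are being compiled without the extern "C" linkage the C++ compiler needs.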
I was eventually able to get around this. It looks like I was using the C++ compiler for .c files. Using the xlc compiler instead of the xlC compiler for C files fixed this problem.
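In Makefile terms the fix amounts to something like this sketch (the variable names are whatever your build system actually uses):
CC  = xlc     # compile the .c files as C
CXX = xlC     # compile the .cpp files as C++
so that the C sources are compiled as C and their libc calls are not C++-mangled.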

Compiling Small Gcc Project on Windows Using MinGW

So I've been programming in C++ for almost 2 years now, and the whole while I've had the pleasure of using an IDE (VS) with lovely project settings, automatic linking and the like. I've always stayed away from any external libraries which required me to compile via makefiles, or at least the ones which were meant for Linux environments/other compilers.
Anyway, I now want to use a super handy utility (Bob Jenkins' Perfect Minimal Hash), but it requires me to compile via makefiles, and not only that, but using the g++ compiler.
I went ahead and got the mingw32-make utility and am now trying to get it to work. Where I'm at now:
Successfully installed MinGW
Successfully called the make utility
Failed to successfully make the project.
The error I get is:
C:\gen_progs\ph>mingw32-make
mingw32-make: *** No rule to make target `lookupa.c', needed by `lookupa.o'.  Stop.
And the makefile itself:
CFLAGS = -O
.cc.o:
gcc $(CFLAGS) -c $<
O = lookupa.o recycle.o perfhex.o perfect.o
const64 : $(O)
gcc -o perfect $(O) -lm
# DEPENDENCIES
lookupa.o : lookupa.c standard.h lookupa.h
recycle.o : recycle.c standard.h recycle.h
perfhex.o : perfhex.c standard.h lookupa.h recycle.h perfect.h
perfect.o : perfect.c standard.h lookupa.h recycle.h perfect.h
Now the error seems reasonable, at least from my minimal understanding of makefiles: I have all the referenced .c and .h files, but I have none of the .o files, and there don't appear to be any instructions on how to make these. So my questions are:
Am I calling the make utility wrong? Or do I need to compile the object files first? Or... do I need to add something to the makefile?
Again I have all the referenced .c and .h files.
Edit: Sorry about that, I was actually missing that specific file; it seems to have disappeared somewhere along the line. However, adding it back in, this is the error I now get:
c:\gen_progs\ph>mingw32-make
cc -O -c -o lookupa.o lookupa.c
process_begin: CreateProcess(NULL, cc -O -c -o lookupa.o lookupa.c, ...) failed.
make (e=2): The system cannot find the file specified.
mingw32-make: *** [lookupa.o] Error 2
Regarding your error "process_begin: CreateProcess(NULL, cc -O -c -o lookupa.o lookupa.c, ...) failed."
This is because the make utility wants to use the "cc" compiler to compile your program, but that compiler is not part of the MinGW package.
Solution: change the ".cc.o:" to ".c.o:". This turns it into a suffix rule for .c files, so Make uses the gcc command on the following line to compile them (the original line only tells it how to compile .cc files).
Saying either make CC=gcc at the command line or adding the line CC=gcc to the top of the Makefile would cure the issue as well. Make's built-in rules for handling C source code all name the C compiler with the variable CC, which defaults to "cc" for reasons of backward compatibility, even in GNU Make.
It looks like the original Makefile author tried to work around that problem by supplying a custom rule for compiling .cc files, but since there are no .cc files in the project, that rule was not actually used.
Specifying the correct value for CC is superior to fixing the suffix rule to name .c files, IMHO, because Makefiles are generally easier to use and maintain, and are the most portable, when the least possible information is specified.
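Putting the two suggestions together, a minimal sketch of the change to the top of the Makefile (everything below it can stay as posted) would be:
CC     = gcc
CFLAGS = -O
With CC set, Make's built-in rule for .c files compiles each object using $(CC) $(CFLAGS), so the unused .cc.o: rule can simply be deleted (or renamed to .c.o: as suggested above).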
I don't think not having .o files is the problem. Make will make them from the source files (the files to the right of the colon).
Your immediate problem seems to be that make can't find the file "lookupa.c". From the rules you posted, it looks to me like that file should be sitting in the same directory as the makefile, but it isn't. You need to figure out where that file is, and how to get it there.
(For some reason I have a mental image of Wile E. Coyote sitting at his computer, seeing that file name, looking up, and getting plastered with an anvil).
