Unable to link to libgfortran.a [duplicate] - c

This question already has answers here:
Why does the order in which libraries are linked sometimes cause errors in GCC?
(9 answers)
Closed 8 years ago.
I have gfortran installed on my system and the file libgfortran.a can be found at /usr/lib/gcc/x86_64-linux-gnu/4.6/. Using nm I made sure that the function _gfortran_compare_string is defined in there:
$ nm /usr/lib/gcc/x86_64-linux-gnu/4.6/libgfortran.a | grep _gfortran_compare_string
Returns
0000000000000000 T _gfortran_compare_string
0000000000000000 T _gfortran_compare_string_char4
But the link step of my CUDA-C program throws errors:
/usr/local/cuda-6.0/bin/nvcc --cudart static -L/usr/lib/gcc/x86_64-linux-gnu/4.6 -L/home/chung/lapack-3.5.0 -link -o "pQP" ./src/pQP.o -lgfortran -llapacke -llapack -lcublas -lblas -lcurand
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
/home/chung/lapack-3.5.0/liblapack.a(ilaenv.o): In function `ilaenv_':
ilaenv.f:(.text+0x81): undefined reference to `_gfortran_compare_string'
and, later on, more errors, again related to libgfortran:
/home/chung/lapack-3.5.0/liblapack.a(xerbla.o): In function `xerbla_':
xerbla.f:(.text+0x49): undefined reference to `_gfortran_st_write'
xerbla.f:(.text+0x54): undefined reference to `_gfortran_string_len_trim'
xerbla.f:(.text+0x66): undefined reference to `_gfortran_transfer_character_write'
xerbla.f:(.text+0x76): undefined reference to `_gfortran_transfer_integer_write'
xerbla.f:(.text+0x7e): undefined reference to `_gfortran_st_write_done'
xerbla.f:(.text+0x87): undefined reference to `_gfortran_stop_string'
But, again using nm, I found that _gfortran_st_write, etc are defined in libgfortran.a.
Note: Lapack makes use of libgfortran. I recently installed lapack and ran all the tests and they all passed.

You need to change the order in which you specify static libraries to the linker. If you do something like this:
nvcc --cudart static -L/usr/lib/gcc/x86_64-linux-gnu/4.6 \
-L/home/chung/lapack-3.5.0 -link -o "pQP" ./src/pQP.o \
-llapacke -llapack -lcublas -lblas -lcurand -lgfortran
You should find it will work.
The underlying reason (and this is a trait of the gcc/GNU toolchain, not anything to do with nvcc) is that the GNU linker processes static libraries from left to right on the command line. If you specify a static library before the objects or libraries that depend on it, it is skipped, because none of its symbols are needed at the point where it is first encountered.
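A minimal way to see this rule in action (the file names are made up for illustration; a GNU toolchain is assumed):

/* dep.c - compiled into the static library libdep.a */
int dep_value(void) { return 42; }

/* main.c - needs dep_value() from libdep.a */
int dep_value(void);
int main(void) { return dep_value(); }

/* Build and link:
 *   cc -c dep.c && ar rcs libdep.a dep.o && cc -c main.c
 *   cc -o prog -L. -ldep main.o   ->  undefined reference to `dep_value'
 *   cc -o prog -L. main.o -ldep   ->  links fine: the library follows the object that uses it
 */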

Related

What could be causing linking errors when compiling in an Alpine Docker?

I am trying to compile a program within a Docker container built from the Alpine 3.7 base image. The program uses argp.h and includes it as #include <argp.h>. I have installed argp-standalone and verified that it is making it onto the image. The file argp.h is located in /usr/include; however, when I compile my program using the following commands:
gcc -W -Wall -Wextra -I/usr/include -c -o progname.o progname.c
gcc -largp -o progname progname.o
I get the following error:
progname.o: In function `parse_opt':
progname.c:(.text+0x4c9): undefined reference to `argp_failure'
progname.c:(.text+0x50f): undefined reference to `argp_failure'
progname.c:(.text+0x555): undefined reference to `argp_failure'
progname.c:(.text+0x59b): undefined reference to `argp_failure'
progname.c:(.text+0x5ce): undefined reference to `argp_error'
progname.c:(.text+0x5f4): undefined reference to `argp_error'
progname.o: In function `main':
progname.c:(.text+0x1397): undefined reference to `argp_parse'
collect2: error: ld returned 1 exit status
make: *** [Makefile:9: progname] Error 1
I have:
Ensured that the version of argp.h which is on the image does in fact include the argp_failure, argp_parse, and argp_error functions.
Tried moving argp.h into different locations on the machine (e.g. into the same directory where compilation is taking place, into /usr/lib)
Tried compiling with -l and -L.
The relevant packages also installed in the image are build-base, make, and gcc.
When compiling on an Ubuntu image these same commands work fine, even without the -largp and -I/usr/include flags. What could be happening differently within an Alpine image which would cause this not to work?
Edit
As per @Pablo's comment, I'm now compiling it as follows:
gcc -W -Wall -Wextra -I/usr/include -L/usr/lib -c -o progname.o progname.c
gcc -largp -o progname progname.o
After having verified that the static library, libargp.a, is located in /usr/lib. However, the same problem still persists.
Edit 2
Compiling as follows (once again, as per @Pablo's suggestion) has resolved the error I was having:
gcc -W -Wall -Wextra -I/usr/include -L/usr/lib -c -o progname.o progname.c
gcc -o progname progname.o /usr/lib/libargp.a
However, I am still curious why, using the exact same library and instructions, this would fail to compile in an Alpine image while compiling without issue in an Ubuntu image.
The reason for the linking error on Alpine may be kind of surprising, and is actually not specific to Alpine.
While this fails to link:
gcc -largp -o progname progname.o
This works:
gcc -o progname progname.o -largp
The reason is the order of the parameters passed to the linker, and it is related to the linking algorithm. Typically, on the linking command line, objects are specified first (and possibly the user's static libraries, if any), followed by libraries specified with -l. The standard linker algorithm is explained perfectly in Eli Bendersky's article, Library order in static linking:
Object files and libraries are provided in a certain order on the command-line, from left to right. This is the linking order. Here's what the linker does:
The linker maintains a symbol table. This symbol table does a bunch of things, but among them is keeping two lists:
A list of symbols exported by all the objects and libraries encountered so far.
A list of undefined symbols that the encountered objects and libraries requested to import and were not found yet.
When the linker encounters a new object file, it looks at:
The symbols it exports: these are added to the list of exported symbols mentioned above. If any symbol is in the undefined list, it's removed from there because it has now been found. If any symbol has already been in the exported list, we get a "multiple definition" error: two different objects export the same symbol and the linker is confused.
The symbols it imports: these are added to the list of undefined symbols, unless they can be found in the list of exported symbols.
When the linker encounters a new library, things are a bit more interesting. The linker goes over all the objects in the library. For each one, it first looks at the symbols it exports.
If any of the symbols it exports are on the undefined list, the object is added to the link and the next step is executed. Otherwise, the next step is skipped.
If the object has been added to the link, it's treated as described above - its undefined and exported symbols get added to the symbol table.
Finally, if any of the objects in the library has been included in the link, the library is rescanned again - it's possible that symbols imported by the included object can be found in other objects within the same library.
When -largp appears first, the linker does not include any of its objects in the link, since at that point there are no undefined symbols for them to satisfy. The command in Edit 2 works because the archive, /usr/lib/libargp.a, comes after progname.o, so by the time the linker scans it, the undefined argp symbols are already in the list.
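For completeness, a minimal sketch of the situation on Alpine (the program is hypothetical; it assumes argp-standalone is installed, so argp is only available as the static library /usr/lib/libargp.a):

/* argp_demo.c (hypothetical) */
#include <argp.h>

static error_t parse_opt(int key, char *arg, struct argp_state *state)
{
    (void)key; (void)arg; (void)state;
    return 0;
}

static struct argp argp = { 0, parse_opt, 0, "demo" };

int main(int argc, char **argv)
{
    argp_parse(&argp, argc, argv, 0, 0, 0);
    return 0;
}

/* gcc -c argp_demo.c
 * gcc -largp -o argp_demo argp_demo.o   ->  undefined reference to `argp_parse'
 * gcc -o argp_demo argp_demo.o -largp   ->  links, because the library follows
 *                                           the object that needs it
 */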

Undefined reference to RSA_generate_key in OpenSSL? [duplicate]

This question already has answers here:
Linking libssl and libcrypto in GCC [duplicate]
(2 answers)
Closed 6 years ago.
I have the following code in the file rsatest.c. I'm trying to generate an RSA key pair.
#include <openssl/rsa.h>
#include <openssl/pem.h>
int main() {
    RSA *rsa = RSA_generate_key((const int) 1024, (const int) 3, 0, 0);
    return 0;
}
I'm compiling this with
gcc -I../include/ -L . -lcrypto -lssl rsatest.c
and I get the following error.
undefined reference to `RSA_generate_key'
Am I linking the library files in the wrong order? I built libcrypto.a and libssl.a on Windows (64-bit) with MSYS and MinGW, and I'm running the code on the same system.
RSA_generate_key has been declared in rsa.h. Is it not defined in libcrypto.a?
EDIT :
I tried this too,
gcc -I../include rsatest.c -L . -lcrypto -lssl
and I understand that the linker will look for definitions in the libraries going from left to right.
However, I get new undefined references to various functions in
rand_win.o and c_zlib.o
I looked up online and found the missing symbols in the libraries gdi32 and zlib. So I added
-lz and -lgdi32
The compiler did not complain about a missing library, so I assume they are present with mingw. And still, I get the same output.
I also used nm, and found that the symbols were indeed undefined in rand_win.o and c_zlib.o.
Why can't the linker find the definitions in these libraries?
Change the order in your gcc command.
gcc -I../include/ rsatest.c -L . -lcrypto -lssl
As far as I know, the linker maintains a list of undefined symbols. When it processes libcrypto.a and libssl.a, that list is still empty, so it simply drops the libraries. By the time the object from rsatest.c is processed, there are entries in the list, but the linker does not go back and look for symbols in libraries it has already processed.
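Applying the same rule to the extra dependencies mentioned in the edit: the symbols missing from rand_win.o and c_zlib.o live in gdi32 and zlib, so those libraries also have to appear after -lcrypto. A sketch, assuming the same MinGW setup:

gcc -I../include rsatest.c -L . -lcrypto -lssl -lgdi32 -lz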

Creating a dylib which gets linked at runtime

I am trying to create a dynamic library which is meant to be linked and loaded into a host environment at runtime (e.g. similar to how class loading works in Java). As such, I want the dynamic library to be left with a few "dangling" references, which I expect it to pick up from its host environment when it is loaded into that environment.
My problem is that I cannot figure out how to create the dynamic library without explicitly linking it to existing symbols. I am hoping to produce a dynamic library that does not depend on a specific host executable (or host library), rather one that is able to be loaded (e.g. by dlopen) in any host as long as the host makes a couple symbols available for use.
Right now, any linking command I've tried results in a complaint of missing symbols. I'd like it to allow symbols to be missing (ideally, just particularly specified symbols).
For example, here's a transcript with the error on OS X:
$ cat frotz.c
void blort(void);
void run(void) {
    blort();
}
$ cc -c -o frotz.o frotz.c
$ cc -dynamiclib -o libfrotz.dylib frotz.o
Undefined symbols for architecture x86_64:
"_blort", referenced from:
_run in frotz.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
If I do the same thing using a GNU toolchain (on Linux), it helpfully tells me:
$ gcc -shared -o libfrotz.so frotz.o
/usr/bin/ld: frotz.o: relocation R_X86_64_PC32 against undefined symbol `blort'
can not be used when making a shared object; recompile with -fPIC
and indeed, adding -fPIC to the C compile command seems to fix the problem in that environment. However, it doesn't seem to have any effect in OS X.
All the other dynamic-linking questions I could find on SO seem to be about the more usual arrangement of libraries, where a library is being built to be linked into an executable before that executable runs, rather than the other way around. The closest related question I found was this:
Can an executable be linked to a dynamic library after its built?
which unfortunately has very little info, none of it relevant to the question I'm asking here.
UPDATE: I distilled the info from the answer along with everything else I'd figured out, and put together this example:
https://github.com/danfuzz/dl-example
As far as my knowledge goes, you want to use weak linkage:
// mark function as weakly-linked
extern void foo() __attribute__((weak));
// inform the linker about that too
clang -dynamiclib -o bar.dylib bar.o -flat_namespace -undefined dynamic_lookup
If a weak function can be resolved at runtime, it will then be resolved. If it can't, it will be NULL, instead of generating a runtime (or, obviously, link-time) error.
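Putting this together with the frotz.c example from the question, a minimal sketch might look like this:

/* frotz.c - blort() is expected to be supplied by the host at runtime */
void blort(void) __attribute__((weak));

void run(void)
{
    if (blort)      /* per the above: NULL if the host never provides it */
        blort();
}

/* cc -c -o frotz.o frotz.c
 * cc -dynamiclib -o libfrotz.dylib frotz.o -flat_namespace -undefined dynamic_lookup
 */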

"undefined reference to `pow'" even with math.h and the library link -lm [duplicate]

This question already has answers here:
Undefined reference to `pow' and `floor'
(6 answers)
Closed 2 years ago.
I'm using math.h and the -lm option to compile. I have tried all of:
gcc -o ssf ssf_tb.c ssf.c -lm
gcc -o ssf ssf_tb.c -lm ssf.c
gcc -o -lm ssf -lm ssf_tb.c ssf.c
but the error:
undefined reference to 'pow'
occurs in all cases.
Put the -lm at the end of the line.
gcc processes the arguments that specify inputs to the final program in the order they appear on the command line. The -lm argument is passed to the linker, and the ssf.c argument, for example, is compiled, and the resulting object file is passed to the linker.
The linker also processes inputs in order. When it sees a library, as -lm specifies, it looks to see if that library supplies any symbols that the linker currently needs. If so, it copies the modules with those symbols from the library and builds them into the program. When the linker sees an object module, it builds that object module into the program. After bringing an object module into the program, the linker does not go back and see if it needs anything from earlier libraries.
Because you listed the library first, the linker did not see anything that it needed from the library. If you list the object module first, the linker will bring the object module into the program. In the process of doing this, the linker will make a list of all the undefined symbols that the object needs. Then, when the linker sees the library, it will see that the library supplies definitions for those symbols, and it will bring the modules with those symbols into the program.
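To see the mechanism concretely, consider a stripped-down, hypothetical ssf.c (only the pow call matters):

/* ssf.c (hypothetical) */
#include <math.h>

double ssf(double base, double exponent)
{
    return pow(base, exponent);   /* leaves an undefined reference to pow in ssf.o */
}

Running nm -u on the compiled ssf.o shows "U pow": an undefined symbol that the linker can only satisfy from a library appearing later on the command line, hence gcc -o ssf ssf_tb.c ssf.c -lm.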

Using LD_PRELOAD to overload call to a C function of a shared library

I'm following this answer to override a call to a C function of a C library.
I think I did everything correctly, but it doesn't work:
I want to override the DibOpen function. This is the code of the library that I pass via the LD_PRELOAD environment variable when running my application:
#define _GNU_SOURCE        /* needed for RTLD_NEXT */
#include <dlfcn.h>
#include <stdio.h>

/* DIBSTATUS and enum Board come from the vendor's header (not shown) */
DIBSTATUS DibOpen(void **ctx, enum Board b)
{
    printf("look at me, I wrapped\n");
    static DIBSTATUS (*func)(void **, enum Board) = NULL;
    if (!func)
        func = dlsym(RTLD_NEXT, "DibOpen");
    printf("Overridden!\n");
    return func(ctx, b);   /* forward the wrapper's arguments to the real DibOpen */
}
The output of nm lib.so | grep DibOpen shows
000000000001d711 T DibOpen
When I run my program like this
LD_PRELOAD=libPreload.so ./program
I link my program with -ldl but ldd program does not show a link to libdl.so
It stops with:
symbol lookup error: libPreload.so: undefined symbol: dlsym
What can I do to debug this further? Where is my mistake?
When you create a shared library (whether or not it will be used in LD_PRELOAD), you need to name all of the libraries that it needs to resolve its dependencies. (Under some circumstances, a dlopened shared object can rely on the executable to provide symbols for it, but it is best practice not to rely on this.) In this case, you need to link libPreload.so against libdl. In Makefile-ese:
libPreload.so: x.o y.o z.o
	$(CC) -shared -Wl,-z,defs -Wl,--as-needed -o $@ $^ -ldl
The option -Wl,-z,defs tells the linker that it should issue an error if a shared library has unresolved undefined symbols, so future problems of this type will be caught earlier. The option -Wl,--as-needed tells the linker not to record a dependency on libraries that don't actually satisfy any undefined symbols. Both of these should be on by default, but for historical reasons, they aren't.
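Outside of a Makefile, the equivalent single command looks roughly like this (preload.c stands in for whatever source makes up the wrapper; the flags are the same as in the rule above, plus -fPIC since we compile and link in one step):

cc -shared -fPIC -o libPreload.so preload.c -Wl,-z,defs -Wl,--as-needed -ldl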
