gcc: Not able to create .so from object files

I am trying to create a .so dynamic library from *.o files and am facing the issue below.
LOG:
[nptemp-static]$ gcc -shared *.o -o libexample.so
/usr/bin/ld: bindings_hubbub_parser.o: relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
bindings_hubbub_parser.o: could not read symbols: Bad value
collect2: ld returned 1 exit status
Any idea? Do I need to recompile my whole source tree with the specified option?
Actually, I am not familiar with the source code I compiled; it is all open source that I downloaded and built by following the instructions in the README.

I am trying to create a .so dynamic library from *.o files and am facing the issue below.
This is not that simple. In practice, you need to compile your code specifically for a shared library, at least on Linux.
(Perhaps you might need to edit your Makefile or otherwise configure your build automation if it was not designed for building a shared library; if you are building some free software library, you might ask its authors or community for help.)
Shared libraries need to contain position-independent code, so you have to compile their source code with the -fPIC flag passed to gcc or g++. You may also want to set the rpath explicitly.
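For example, a minimal sketch (the file names foo.c and bar.c are made up; substitute your project's sources) is to recompile every object with -fPIC and then relink, optionally embedding a run-time search path:
gcc -c -fPIC foo.c bar.c
gcc -shared foo.o bar.o -o libexample.so -Wl,-rpath,/usr/local/lib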
Read Drepper's paper: How To Write Shared Libraries.

Related

GCC have include but not library

I'm writing my own kernel for fun, and in doing so I've needed to install glibc to use the standard C libraries. However, after installing the library to the desired directory, my kernel.c program includes the stdio.h header and attempts to use fopen, but I come across this error:
kernel.c:(.text+0x238): undefined reference to `fopen'
After looking around I noticed that I don't have any actual code behind the header files, just the header files themselves. So I went and added the -L flag to GCC to point at the lib folder that was created during the compilation of glibc, and what I've found out is that the lib folder has nothing of what I need.
I poked around and found that the build directory I used when compiling glibc has the .o files I'm looking for (e.g. it has iofopen.o for the fopen function).
So what's going on?
If needed, the commands I am using to compile my kernel are:
#!/bin/bash
nasm -felf32 boot.asm -o boot.o
/home/noah/opt/cross/bin/i686-elf-gcc -I/home/noah/Documents/NoahOS/include/ -L/home/noah/Documents/glibc/build -c *.c -std=gnu99 -ffreestanding -Wall -Wextra
/home/noah/opt/cross/bin/i686-elf-gcc -I/home/noah/Documents/NoahOS/include/ -L/home/noah/Documents/glibc/build -T linker.ld -o noahos.bin -ffreestanding -O2 -nostdlib *.o -lgcc
First line builds the boot file, which is assembly.
Second line runs gcc on all of the C language .c files and creates their object files.
Third line links all of the files together with linker.ld and outputs the final kernel to noahos.bin, which is a runnable kernel using:
qemu-system-i386 -kernel noahos.bin
If needed, more information can be provided. Please ask.
You are correctly compiling your kernel with -nostdlib because the kernel can't use the standard library. Why not? Because it doesn't make sense: the standard library is the interface between user programs and the kernel, so that application developers don't need to know your kernel's system call specification; all that is required is a port of the C library.
Oh, there's the answer. You need a port of the C library that uses your own system calls. Starting with glibc might not be the easiest (it comes with the kitchen sink).
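To make the idea concrete, here is a minimal, purely illustrative sketch of what a port boils down to: the ported library's functions ultimately funnel into system call stubs written against your kernel's ABI. The syscall number and the int $0x80 convention below are hypothetical, not anything your kernel actually defines.

#define MY_SYS_write 1  /* hypothetical syscall number for this kernel */

/* invoke the kernel with three arguments (i386-style trap, purely illustrative) */
static long my_syscall3(long n, long a, long b, long c) {
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(n), "b"(a), "c"(b), "d"(c)
                      : "memory");
    return ret;
}

/* a ported fopen/fwrite would eventually bottom out in something like this */
long my_write(int fd, const void *buf, unsigned long len) {
    return my_syscall3(MY_SYS_write, fd, (long)buf, (long)len);
}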

Custom compile of binutils/ld doesn't find symbols in archives

I'm currently trying to compile Clang/LLVM for a bare metal aarch64 target. Compiling Clang was straightforward - in fact I have compiled it to target multiple architectures including arm and aarch64. For the backend I'm using binutils. Since binutils can only target one architecture I've built both aarch64 and arm versions of it. Again, building seemed to be straightforward. I built binutils for aarch64 using:
mkdir build
cd build
../configure --target=aarch64-none-elf
make
from an unpacked source package of binutils 2.24.
The problem I'm having is that I can't get my custom build of ld to handle archive files properly. It seems to find and open archive files without a problem but fails to search them for undefined symbols when linking the final binary. Here are some more concrete details via an example:
Create a simple main.s file by compiling:
int foo();
int main() { return foo(); }
with Clang and using --target=aarch64 -S, i.e. to emit an architecture-specific assembly file.
Do the same to create foo.s by compiling:
int foo() { return 0; }
Assemble both .s files to .o files using custom gas build from binutils.
Create an archive libfoo.a by running custom ar and ranlib builds from binutils, using ar cr libfoo.a foo.o and ranlib libfoo.a.
Try to link the two files together using ld -L. -lfoo -o main.elf main.o.
The output shows an undefined reference to foo.
bar.cpp:(.text+0x14): undefined reference to `foo()'
I can run readelf on libfoo.a and it seems perfectly well formed. In particular I can see the public symbol for foo. If I run ld with --verbose then I see
attempt to open ./libfoo.a succeeded
but no indication that it's found any symbols.
If I link directly with the object files, i.e. using ld -o main.elf foo.o main.o, then I get a well formed elf file. Furthermore, if I extract foo.o back out of libfoo.a then I can also link successfully with the extracted foo.o.
Does anyone have any ideas why this might be happening? I'm pretty sure my clang build is okay as the emitted assembly files effectively decouple the problem from clang. I'd guess also that the assembled .o files are fine. So this only leaves ar/ranlib or ld as the culprit.
To try to eliminate ar/ranlib as the culprits I rebuilt binutils for arm (same steps but using --target=arm-none-eabi). If I integrate the built ar/ranlib binaries into a known good arm-none-eabi GCC toolchain then they seem to work correctly. My suspicions thus point to ld at the moment. Note that I get the same problem with the main/foo example as above if I use my Clang build with the arm build of binutils.
I also tried integrating the arm version of ld into the existing known good GCC toolchain, but got:
this linker was not configured to use sysroots
Need to figure that out. That's still a bit cryptic for me right now. It prevents me from sanity checking the arm ld build.
It might well be that I'm doing something obviously wrong, but right now I don't see it.
Notes:
I had to hack the make script for binutils to add a couple of -Wno-xxx flags. I need to do this the "correct" way, but I don't think this hack should affect the output.
I'm building/hosting all tools on OSX 10.9.4 64-bit.
UPDATE:
The error about sysroots is simple. The pre-existing toolchain was configured with flags:
--prefix=$SOME_INSTALL_DIR
--with-sysroot=$SOME_INSTALL_DIR/arm-none-eabi
If I rebuild with these then I can slot in my ld version without a problem and it works. I got excited thinking that this might have been my mistake. Previously I wasn't running make install as I didn't really care about installing at this stage. I thought perhaps my new build of ld was referencing OSX system libs/exes somehow (although I'm not sure exactly what it could reference, other than perhaps ar, i.e. if it runs ar to handle archives). However I still have the same problem with both the arm and aarch64 versions of binutils even when configured like this.
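For reference, the reconfigure sequence implied here looks roughly as follows ($SOME_INSTALL_DIR stands in for whatever install path the existing toolchain uses):
mkdir build && cd build
../configure --target=arm-none-eabi --prefix="$SOME_INSTALL_DIR" --with-sysroot="$SOME_INSTALL_DIR/arm-none-eabi"
make && make install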
Looks like I was dramatically overthinking the problem and also missing something I'd never quite realised about ld. The issue was with the line:
ld -L. -lfoo -o main.elf main.o
I hadn't realised that ld is sensitive to object file/library ordering. The undefined reference to foo() is in main.o, but I specified the library libfoo.a as an input before main.o. This means that by the time main.o is parsed, the linker has already processed libfoo.a and doesn't go back and search it for the symbol. main.o must be specified before the library, e.g.
ld -L. main.o -lfoo -o main.elf
Even wrapping -lfoo with --start-group ... --end-group doesn't fix this.
I'm still feeling somewhat surprised by this. And I've yet to convince myself it's a good feature of ld. Seems like the only use case I can think of right now is to sneakily allow duplicate symbols.
Time for a cup of tea!
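For completeness, the whole example with the corrected ordering looks roughly like this (the aarch64-none-elf- tool names assume an installed binutils build; adjust them to however you invoke your custom as/ar/ranlib/ld):
clang --target=aarch64 -S main.c
clang --target=aarch64 -S foo.c
aarch64-none-elf-as main.s -o main.o
aarch64-none-elf-as foo.s -o foo.o
aarch64-none-elf-ar cr libfoo.a foo.o
aarch64-none-elf-ranlib libfoo.a
aarch64-none-elf-ld -L. main.o -lfoo -o main.elf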

Listing the dependencies of a shared library in Solaris

I am converting a set of static libraries to shared libraries and was able to create the shared libraries successfully. The problem is with the executables: when linking against a static library, the library itself can contain unresolved symbols, but that is not the case with shared libraries, where all the symbols should get resolved.
Example:
PROG1 calls LIB1.a, which calls LIB2.a.
Now the makefile of PROG1 need not include LIB2.a, as PROG1's calls into LIB1.a do not result in any calls into LIB2.a. So some LIB2.a symbols referenced by LIB1.a can remain unresolved.
After conversion
Both LIB1.so and LIB2.so have to be included in the makefile of PROG1. Including LIB2.so resolves a few linkage issues of LIB1.so, but new issues appear due to the inclusion of LIB2.so (as it may depend on LIB3.so).
So is there any way to find out all the dependent libraries of a shared library?
I tried using ldd but it prints nothing.
Please let me know if my analysis is wrong.
This is a slightly personal opinion, but I think you should link your shared libraries so that you get an error for unresolved symbols (with -z defs). That means you sort out each library independently and don't get any nasty surprises at link time.
Of course, this only works if your libraries are clean and don't contain recursive dependencies (which are probably a bad thing anyway) and you aren't trying to do dynamic loading where you can load any of impl_1.so, impl_2.so or impl_3.so to provide code for a client (client.so) at runtime. But it works well if all you have are link-time dependencies.
Indeed, if you don't do this, and are using ld rather than cc to do the linking, you'll get pretty much what you're seeing: no dependencies recorded, and errors at link time.
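As a rough sketch (the library and object names are made up), linking LIB1 with -z defs so that its dependency on LIB2 is both checked and recorded, then inspecting the result, might look like this with gcc on Solaris (with the Studio compiler, cc -G takes the place of gcc -shared):
gcc -shared -z defs -o libLIB1.so lib1_a.o lib1_b.o -L. -lLIB2
ldd libLIB1.so
elfdump -d libLIB1.so | grep NEEDED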

How to create a shared library (dylib) using automake that JNI/JNA can use?

How do I convince libtool to generate a library identical to what gcc does automatically?
This works if I do things explicitly:
gcc -o libclique.dylib -shared disc.c phylip.c Slist.c clique.c
cp libclique.dylib [JavaTestDir]/libclique.dylib
But if I do:
Makefile libclique.la (which is what automake generates)
cp .libs/libclique.1.dylib [JavaTestDir]/libclique.dylib
Java finds the library but can't find the entry point.
I read the "How to create a shared library (.so) in an automake script?" thread and it helped a lot. I got the dylib created with a -shared flag (according to the generated Makefile). But when I try to use it from Java Native Access I get a "symbol not found" error.
Looking at the libclique.la that is generated by the Makefile, it doesn't seem to have any critical information in it; it just looks to be link overloads and moving things around for the convenience of subsequent C/C++ compiler steps (which I don't have), so I would expect libclique.1.dylib to be a functioning dynamic library.
I'm guessing that is where I'm going wrong, but, given that JNA links directly to a dylib and is not compiled with it (per the example in the discussion cited above), it seems all the subsequent compilation steps described in the libtool manual are moot.
Note: I'm testing on a Mac, but I'm going to have to do this on Windows and Linux machines also, which is why I'm trying to put this into Automake.
Note2: I'm using Eclipse for my Java development and, yes, I did import the dylib.
Thanks
You should be building a plugin, and in particular pass:
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic
This way you tell libtool you want a dynamically loadable module rather than a shared library (which for ELF are the same thing, but for Mach-O are not).
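A minimal Makefile.am sketch along those lines (the source list is taken from the gcc command above; the rest is an assumption about the project layout):
lib_LTLIBRARIES = libclique.la
libclique_la_SOURCES = disc.c phylip.c Slist.c clique.c
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic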

Compile shared library with link to other .so

I want to link an existing shared library (FlashRuntimeExtensions.so) to my C code while compiling my own shared library. But whatever I try I always get the same error: that the file is in the wrong format. Does anybody have an idea how to solve this?
Here is my compile command:
$ g++ -Wall ane.c FlashRuntimeExtensions.so -o aneObject
FlashRuntimeExtensions.so: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
Your command line tries to generate x86 code and link it to ARM code using the native g++ available in your distribution.
This will not work. Use the Android NDK available here: http://developer.android.com/tools/sdk/ndk/index.html
The NDK includes a set of cross-toolchains (compilers, linkers, etc.) that can generate native ARM binaries on Linux, OS X, and Windows (with Cygwin) platforms.
In general, a .so is linked in using -l.
For example, for pthread we use -lpthread:
gcc sample.c -o myoutput -lpthread
But as per @chill's statement, what you are doing in your command is also correct.
I suggest you refer to the following link:
C++ Linker Error SDL Image - could not read symbols
It should be an architecture mismatch. I faced this problem once and solved it by building the libs for the same target platform. If you are using Linux or a Unix-like OS you can check this with the file command, and if you are using Windows you can check it using Dependency Walker. You need to make sure that all the libs match the architecture.
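As a quick sanity check (purely illustrative; output will vary), you can compare the architectures of your own objects and the prebuilt library before linking:
gcc -c ane.c -o ane.o
file ane.o FlashRuntimeExtensions.so
If file reports different architectures (for example x86 for your object and ARM for the library), you need a cross-toolchain that targets the library's architecture, as described above.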
