I don't understand the difference between these two types of libraries. Many websites say they are the same thing, but at school we use two different sets of commands to create them:
dynamic library
$ gcc -fPIC -shared -o libsample.so lib.c
$ gcc -o main main.c -ldl
to execute:
$ ./main ./libsample.so
shared library
$ gcc -fPIC -shared -o libsample.so lib.c
$ gcc -o main main.c -L. -lsample
to execute:
$ LD_LIBRARY_PATH=. ./main
Can someone help me understand the difference between these two sets of commands?
Dynamic-Link Library (.dll) is the terminology used by Microsoft Windows. Shared Object (.so) is the terminology used by Unix and Linux.
Other than that, conceptually they're the same.
Regarding your snippets of commands, I guess the difference (and I'm only guessing here, because you didn't show us the relevant parts) is how the library is loaded. There is "link time loading" where the library is tied to the executable by the linker¹. And there is "runtime loading", where the program sort of "ingests" the dynamic/shared library.
Runtime loading is done on Windows with the LoadLibrary function (there's an …A and a …W variant), and on Unix/Linux with dlopen (which is made available by libdl, the library pulled in by that -ldl link flag).
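To make the runtime-loading case concrete, here is a minimal sketch of what main.c from the first snippet might look like. The exported function name hello() is an assumption, since the original lib.c wasn't shown; build it with gcc -o main main.c -ldl and run it as ./main ./libsample.so:

#include <stdio.h>
#include <dlfcn.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <library>\n", argv[0]);
        return 1;
    }

    /* Open the shared object named on the command line; the executable
     * itself knows nothing about the library at link time. */
    void *handle = dlopen(argv[1], RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up the (assumed) symbol "hello" and cast it to a function pointer. */
    void (*hello)(void) = (void (*)(void))dlsym(handle, "hello");
    if (!hello) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    hello();
    dlclose(handle);
    return 0;
}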
¹ The linker is the program that creates the actual executable file from the intermediate object files created by the various compiler stages.
Dynamic and shared libraries are usually the same. But in your case, it looks as if you are doing something special.
In the shared library case, you specify the shared library at compile-time. When the app is started, the operating system will load the shared library before the application starts.
In the dynamic library case, the library is not specified at compile-time, so it's not loaded by the operating system. Instead, your application contains some code to load the library itself.
The first case is the normal one. The second case is a special use, mainly relevant if your application supports extensions such as plug-ins. Dynamic loading is required because there can be many plug-ins, and they are built after your application, so their names are not available at compile-time.
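For contrast, a sketch of the link-time variant from the question's second snippet, again assuming the library exports a made-up void hello(void). Here main.c simply declares and calls the function, and gcc -o main main.c -L. -lsample resolves it against libsample.so:

/* main.c -- the symbol is resolved by the linker at build time,
 * and the OS loader maps libsample.so before main() runs. */
void hello(void);

int main(void)
{
    hello();
    return 0;
}

No loading code is needed; the dependency is recorded in the executable itself, which is also why the runtime loader needs LD_LIBRARY_PATH=. to find the library when the program starts.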
Related
I have a shared library (*.so) created using RealView Compiler Tools (RVCT 3.2) on Windows. Now I am trying to link this *.so file with my application using gcc on a Linux system.
What is the gcc option to link this shared library with my application on Linux?
My question is, is the -shared option, which is used as
gcc -shared myfile.so
..., used to create the .so file or to link the .so file? For the linking, I believe it should be something like:
gcc -lmyfile.so
Is this enough? Or is there any other switch to tell the linker that it's a dynamic library (shared object)?
What worked for me was:
gcc -L. -l:myfile.so
gcc -lmyfile should be enough (provided that your library is named libmyfile.so). The linker searches for shared objects when possible and AFAIK prefers them.
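Putting the two answers together, a quick sketch of both spellings (assuming the file really is named myfile.so, without the usual lib prefix):

$ gcc main.c -L. -l:myfile.so -o app    # exact file name; uses GNU ld's -l: syntax
$ mv myfile.so libmyfile.so
$ gcc main.c -L. -lmyfile -o app        # conventional lib<name>.so lookup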
I have a conceptual question about writing a library in plain C. I have some functions that I have to use in different programs in the same folder, so I was thinking about writing a library to host these functions. I have to write the whole code in a folder that will be copied to another computer (where the programs will run). If I create and compile the library in this folder, will the user be able to run the programs without rebuilding the library from source, or might he run into unpredictable errors? The user will build the programs that use the library anyway; he won't build the lib itself.
Thanks
Lorenzo
In general, no, it is not portable in the sense that a compiled library can be linked on an arbitrary other system. The compiled library has to be compatible with the target architecture, the OS, and the compiler system, to name a few.
But you have another choice, concluded from your comment: It seems that you also provide some shell script or makefile to build the programs.
Because a library consists of "just" a set of compiled translation units before some of them get linked into the programs, you can take the sources of these translation units and compile them together with the sources of each program, where appropriate.
As an example, let's say you have 2 functions (each in its own source file) you use in different combinations in 3 programs. "prg1" uses func1(), "prg2" uses func2(), and "prg3" uses both.
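To make the example compilable, the sources could look like this (the function bodies are made up, just so there is something to build):

/* func1.h */
void func1(void);

/* func1.c */
#include <stdio.h>
#include "func1.h"
void func1(void) { puts("func1"); }

/* prg1.c */
#include "func1.h"
int main(void) { func1(); return 0; }

func2.h, func2.c, and the other programs follow the same pattern.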
These could be the commands to build the programs with a (static) library:
gcc -c func1.c -o func1.o
gcc -c func2.c -o func2.o
ar -r lib.a func1.o func2.o
gcc prg1.c lib.a -o prg1
gcc prg2.c lib.a -o prg2
gcc prg3.c lib.a -o prg3
Instead of the library you compile the programs' sources directly:
gcc prg1.c func1.c -o prg1
gcc prg2.c func2.c -o prg2
gcc prg3.c func1.c func2.c -o prg3
The results are the same, at least as long as you link statically against the library.
But even with a shared (dynamic) library the approach would be the same. Shared libraries "only" save some RAM if several programs using them run concurrently. If only one program runs at a time, a dynamically linked program might need more RAM and load more slowly.
TL;DR - I need to compile archive.a with test.o to make an executable.
Background - I am trying to call a function in a separate library from a software package I am modifying, but the function (a string parser) is creating a segmentation violation. The failure is definitely happening in the library, and the developer has asked for a test case where the error occurs. Rather than having him compile the rather large software package that I'm working on, I'd prefer to send him a simple program that calls the appropriate function (hopefully dying at the same place). His library makes use of several system libraries as well (lapack, cblas, etc.), so the linking needs to pull in everything, I think.
I can link to the .o files that are created when I make his library but of course they don't link to the appropriate system libraries.
This seems like it should be straightforward, but it's got me all flummoxed.
The .a extension indicates that it is a static library. So in order to link against it you can use the switches for the linking stage:
gcc -o myprog -L<path to your library> main.o ... -larchive
Generally you use -L to add a directory to the library search path (standard locations such as /usr/lib are searched automatically) and -l<libname> to specify a library. The library name is given without the lib prefix and without the extension: if the library is named libarchive.a, you would still give -larchive.
If you want to specify the full name of the library, then you would use i.e. -l:libname.a
Update:
If the library path is /usr/lib/libmylibrary.a, you would use
-L/usr/lib -lmylibrary
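For the TL;DR case above, the whole sequence might look like this; the extra -llapack -lcblas -lm flags are assumptions based on the system libraries the question mentions, and /path/to/lib is a placeholder:

$ gcc -c test.c -o test.o
$ gcc -o test test.o -L/path/to/lib -larchive -llapack -lcblas -lm

Note that the archive has to come after test.o on the command line, because the linker resolves symbols from left to right.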
How do I convince Libtool to generate a library identical to the one gcc produces directly?
This works if I do things explicitly:
gcc -o libclique.dylib -shared disc.c phylip.c Slist.c clique.c
cp libclique.dylib [JavaTestDir]/libclique.dylib
But if I do:
make libclique.la (which is the target automake generates)
cp .libs/libclique.1.dylib [JavaTestDir]/libclique.dylib
Java finds the library but can't find the entry point.
I read the "How to create a shared library (.so) in an automake script?" thread and it helped a lot. I got the dylib created with a -shared flag (according to the generated Makefile). But when I try to use it from Java Native Access I get a "symbol not found" error.
Looking at the libclique.la generated by the Makefile, it doesn't seem to contain any critical information; it just looks like link overrides and file shuffling for the convenience of subsequent C/C++ compilation steps (which I don't have), so I would expect libclique.1.dylib to be a functioning dynamic library.
I'm guessing that is where I'm going wrong, but, given that JNA links directly to a dylib and is not compiled against it (per the example in the discussion cited above), it seems all the subsequent compilation steps described in the Libtool manual are moot.
Note: I'm testing on a Mac, but I'm going to have to do this on Windows and Linux machines also, which is why I'm trying to put this into Automake.
Note2: I'm using Eclipse for my Java development and, yes, I did import the dylib.
Thanks
You should be building a plugin and in particular pass
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic
This way you tell libtool you want a dynamically loadable module rather than a shared library (which for ELF are the same thing, but for Mach-O are not).
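In the Makefile.am this could look roughly as follows; the pkglib_LTLIBRARIES placement is an assumption, and the source list is taken from the gcc command in the question:

pkglib_LTLIBRARIES = libclique.la
libclique_la_SOURCES = disc.c phylip.c Slist.c clique.c
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic

-module tells libtool this is a dlopen-able module, and -avoid-version drops the version suffix, so you get a plain libclique.so/.dylib that JNA can load by name.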
I've learned that I should use the -l option when linking against libraries with GCC,
that is: gcc -o test test.c -L. -lmy
But I found that "gcc -o test2 test.c libmy.so" works, too.
When I use readelf on those two executables I can't find any difference.
Then why do people use the -l option for linking? Does it have any advantage?
Because you may have either a static or a shared version of the library in your library directory, e.g. libmy.a and libmy.so, or both. This is more relevant for system libraries: if you link against libraries in your local build tree, you know which version you built, static or shared, but you may not know another system's configuration and mix of libraries.
In addition to that, some platforms may have different suffixes. So it's better to specify it in a canonical way.
The main reason is that -lname will search for libname.a (or libname.so, etc.) along the library search path. You can add directories to that search path with the -L option. It's a convenience built into the compiler driver, making it easier to find libraries installed in standard places on the system.
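If you want to see what each command actually recorded in the binary, check the dynamic section of the two executables from the question:

$ readelf -d test | grep NEEDED
$ readelf -d test2 | grep NEEDED

Both will typically show the same NEEDED entry, because the linker records the library's SONAME either way (or the name as given, if the library has no SONAME), which is why the readelf output looked identical.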