I'm deploying a small program compiled with gcc, 4.3.2-1.1 (Debian). This program will be deployed on virtual machine templates ranging from Debain 5 to bleeding edge Fedora, Ubuntu, Slackware, Arch and others.
The program depends on some symbols from Xen's libraries which are only available in an unstable tree. Hence, installing Xen's libraries via respective package managers on the virtual machine templates would not solve my immediate issue.
Until I package my own version of these libraries, I need to statically link the executable.
Does gcc 4.3-x, by default only include symbols that are actually used when statically linking, or is there another optimization flag that I should be passing to the linker? I know that statically linking is bad, I'm doing it only as a temporary work around.
This issue is related not only to gcc, but to ld(1) too.
By default, gcc doesn't eliminate dead code, you can check this by compiling/linking executable, and then running
objdump -d a.out
which shows you all functions in your executable.
Simple "googling" give this link.
So, to remove unused functions, you need:
Compile with “-fdata-sections” to keep the data in separate data sections and “-ffunction-sections” to keep functions in separate sections, so they (data and functions) can be discarded if unused.
Link with “--gc-sections” to remove unused sections.
Related
My question is in relation to .so shared libraries. I am building a project that uses cmake on one ubuntu machine but running the application on another ubuntu machine.
In the CMakeLists.txt file, I have the following lines:
project (clientapp)
add_executable(${PROJECT_NAME} ${SOURCES} ${WAKAAMA_SOURCES} ${SHARED_SOURCES})
LINK_DIRECTORIES(/home/user//mraa-master-built/build/src)
target_link_libraries (clientapp libmraa.so)
target_link_libraries(clientapp m)
These lines add two libraries libmraa.so and the math library to the executable and it runs successfully on the other machine.
My understanding of shared libraries is that they must be present at compile time, and when the application starts. But I do not have the libmraa.so file on the other machine and the application runs ok. I expected it not to work.
Is my assumption correct?
In general, gcc and clang support lazy linking/binding of symbols, but not for entire libraries. This means that all of the shared objects (ie: .so files) should be present at application startup, at a minimum. The one exception to this is if you modified your makefile to not link against these libraries, and you manually call library functions via dlopen()/dlsym(), etc.
The binding of individual symbols within those libraries can be postponed until they are needed, or you can force all the symbols to be resolved at startup, using -z lazy or -z now, respectively.
It is strange that your application runs without libmraa.so being present. The two most likely reasons your application is running in the absence of the library is:
Your application isn't using any symbols defined in the library, so the linker ignores the library at build time (try ldd app_name and see if your library is present in the list of libraries provided by ldd).
Something is amiss in your build script, and you are statically linking against a .a archive of the library.
Edit: In response to how the application knows how to find the library, your linker (ld in this case) will use rpath lookup to decide which directories to use in its search for the appropriate library. You can see how this works by doing something like LD_DEBUG=libs app_name from the command line. You can also add an extra path via LD_LIBRARY_PATH=/some/path app_name.
Is my assumption correct?
Yes.
There are two likely explanations for why the application runs anyway:
You are mistaken, and there is libmraa.so somewhere on the machine (though perhaps not in the place where you looked), or
Your compiler defaults to -Wl,--as-needed by default, and your binary does not in fact depend on libmraa.so despite the fact that it appears on your link line.
You can trivially confirm or disprove either of the above guesses.
To confirm guess 2, do this:
readelf -d clientapp | grep NEED | grep libmraa
# if there is no output, guess 2 is correct
If guess 2 is wrong, to confirm guess 1, do this (on machine without libmrra.so):
ldd clientapp | grep libmraa.so
# if guess 2 is incorrect, and this command produces no output, then
# your dynamic loader is broken, which is very unlikely.
I'm using Code::Blocks for a project. I have not used an IDE on Linux in years, so I'm a bit out of touch with Linux IDEs.
I'm working with an OpenSSL project that uses FIPS validated library. I duplicated the GCC compiler toolchain and modified it to use OpenSSL's fipsld (and set it as default).
When the project's code executes under Code::Blocks via F8, FIPS_mode_set fails with error 252104805 (0xF06D065). 0xF06D065 is:
$ openssl errstr 0xF06D065
error:0F06D065:common libcrypto routines:FIPS_mode_set:fips mode not supported
which tells me Code::Blocks is not using the OpenSSL I specified in /usr/local/ssl/lib. Rather, the program is using the non-FIPS library provided by Debian in /usr/lib/x86_64-linux-gnu/.
An image of the link library settings is below. Note that the libraries are fully specified, and nothing is left to chance.
CodeBlocks is clearly doing things with LD_LIBRARY_PATH (shown below).
I've also verified the project is using the correct search directories - /usr/local/ssl/include for headers and /usr/local/ssl/lib for the linker.
With compiler logging set to "Full Command Line" set, here's what I get from the build log:
-------------- Build: Debug in ac ---------------
Compiling: main.cpp
/home/jwalton/Desktop/ac/main.cpp:8:5: warning: unused parameter ‘argc’ [-Wunused-parameter]
/home/jwalton/Desktop/ac/main.cpp:8:5: warning: unused parameter ‘argv’ [-Wunused-parameter]
Linking console executable: bin/Debug/ac
Output size is 569.67 KB
Process terminated with status 0 (0 minutes, 0 seconds)
0 errors, 2 warnings
I'm aware of Basile Starynkevitch's suggestions on rpath's and LD_PRELOAD tricks, but this seems like one of those things the IDE should be handling for me (Visual Studio will handle it properly, and even gives us an input box to set Working Directories to find additional libraries).
Any ideas how to make Code::Blocks use the shared objects in /usr/local/ssl/lib when executing the program under the debugger?
Your IDE instructs the compiler to link against the specified libraries, but not to load them at run time. For this latter thing to happen, you need to pass another option to the linker, namely
-rpath=/path/to/directory/with/your/libraries
or, if the linker is invoked by the compiler,
-Wl,-rpath=/same/thing
Code::Blocks don't use shared objects (DLL are a Windows thing). Because Code::Blocks is simply an IDE. IDEs are glorified source code editors with the ability to run external software development tools. You could (and sometimes you should, at least to learn how things happen) edit your code with a plain good editor like emacs, and build it with commands. Your IDE is just running commands, notably a compiler and a linker, probably using gcc
So what is using shared objects in /usr/local/ssl/lib/ is the compiler and linker (and the runtime dynamic linker). BTW, /usr/local/ssl/lib/ is a very strange name for a directory containing shared objects; you should have configured OpenSSL to be installed in /usr/local/lib/ !
First, I really believe you should reconfigure and recompile and rebuild and reinstall your SSL to get it installed under /usr/local/ (or perhaps /opt/) prefix (i.e. shared libraries in /usr/local/lib).
Then you could add appropriate options for the ld linker (from binutils). You probably want -L/usr/local/ssl/lib (to the gcc command which is running ld), and you may want to pass -Wl,-rpath (see this).
I would suggest to reinstall your SSL in /usr/local/, add /usr/local/lib/ into /etc/ld.so.conf (or at least into your LD_LIBRARY_PATH...) and run ldconfig
Otherwise, add at least /usr/local/ssl/lib/ in front of your LD_LIBRARY_PATH (and also -L/usr/local/ssl/lib/ to your linking command).
Read Program Library HowTo, the answers to this, and Drepper's How To Write Shared libraries paper.
Just open the terminal and type
export LD_LIBRARY_PATH=/path/to/your/libraries
sudo ldconfig
How do I convince LibTools to generate a library identical to what gcc does automatically?
This works if I do things explicitly:
gcc -o libclique.dylib -shared disc.c phylip.c Slist.c clique.c
cp libclique.dylib [JavaTestDir]/libclique.dylib
But if I do:
Makefile libclique.la (which is what automake generates)
cp .libs/libclique.1.dylib [JavaTestDir]/libclique.dylib
Java finds the library but can't find the entry point.
I read the "How to create a shared library (.so) in an automake script?" thread and it helped a lot. I got the dylib created with a -shared flag (according to the generated Makefile). But when I try to use it from Java Native Access I get a "symbol not found" error.
Looking at the libclique.la that is generated by Makefile it doesn't seem to have any critical information in it, just looks to be link overloads and moving things around for the convenience of subsequent C/C++ compiler steps (which I don't have), so I would expect libclique.1.dylib to be a functioning dynamic library.
I'm guessing that is where I'm going wrong, but, given that JNA links directly to a dylib and is not compiled with it (per the example in the discussion cited above), it seems all the subsequent compilation steps described in the LibTools manual are moot.
Note: I'm testing on a Mac, but I'm going to have to do this on Windows and Linux machines also, which is why I'm trying to put this into Automake.
Note2: I'm using Eclipse for my Java development and, yes, I did import the dylib.
Thanks
You should be building a plugin and in particular pass
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic
This way you tell libtool you want a dynamically loadable module rather than a shared library (which for ELF are the same thing, but for Mach-O are not.)
I am looking for a program to create a C-library (i.e. to link and compile the files into one file) based on .h-files and .c-files that have the following structure (it is a FEC-library: www.openfec.org ). I am using Ubuntu. I want it to do this without manually specifying each files. I tried WAF, but got the error 'ERROR:root: error: No module named cflags'.
Here is (part of the) the structure:
fec
lib_advanced
ldpc_from_file
of_code_profile.h
of_ldpc_ff.h
...
lib_common
linear_binary_codes_utils
binary_matrix
it_decoding
ml_decoding
statistics
of_cb.h
of_debug.h
of_mem.c
of_mem.h
of_openfec_api.c
of_openfec_api.h
of_openfec_profile.h
of_types.h
Thanks!!
You have to use gcc to compile the C-files to objects files.
Then you have to use ar r and then ranlib to pack the objects into one .a library file.
C libraries on *nix systems (including all linux distros) are created with standards tools, this tools being a) a C compiler b) a linker b) the ar utility c) the ranlib utility.
The C compiler 99.9% of the time the GNU C compiler, while the linker ld, and the utilities ar and ranlib are part of the binutils package on gnu systems (99.9% of linux systems).
ar and ranlib are used to created static libraries, putting already compiled object files ( *.o files) in an archive file libsomething.a with ar and indexing the archive with ranlib.
The linker can be called inside the gcc compiler to create dynamic libraries with position independent code, again the already compiled files are archived on a special, this file has the .so extension for shared object.
Static Libraries are used for speed and self-containment, they produce big executables which contain all their dependencies inside the final executable. If a single library of many changes, to update it you'll have to recompiled everything.
Dynamic libraries are compiled and linked separately of the executables, they can be used simultaneously by multiple executables, if one library is updated, you just need to recompile a single library and not every executable which depends on it.
The use of this tools are universal and standard procedure, they can vary by few details from *nix systems to *nix system, but on linux you essentially always use the GCC and Binutils packages. Extra build utilities on the form of make, cmake, autotools, etc exist to help on the process, but the basic tools are always used.
Generally on the most basic level you write a Makefile script which is interpreted by the make utility. And depending on your commands it can make one or both kinds of libraries, install the library and executables, uninstall them, clean up, etc
For more information :
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
http://www.dwheeler.com/program-library/Program-Library-HOWTO/
Consider a field case, where we won't provide the image built with gdb flags.
Now is there any link or documentation or any such similar stuffs which helps in
debugging the core file generated in the field.(Remember the image is not built with -g gdb flag).
Some pointers would be really useful !!
An even better solution is to always build your program with -g (which at least for GCC does not inhibit optimization). Then you can use objcopy to create separate debug files which you do not ship with the product, and stripped binaries which you do ship.
Then when you load a core from the field on a development machine, where the debug symbols are present, GDB will load the debug symbols from the separate files. In the field, the debug symbol files are not present, since you didn't ship them, so the debug info is not available.
If applicable, you can also create a DVD or USB key with the symbol files so that a technician can bring symbols with them to analyze a core file on-site.
You need to build your executable with -g (you may also specify -O). You then ship a stripped version of the executable (man strip). Any core file will be compatible with either version.