Basics of compiling a binary with no dependencies using the GNU toolchain - C

I'm trying to make an audio file I create slow down using SoX, and although I can easily compile the source files on the Linux machine I use regularly, I need to transfer the binary to another Linux machine with limited permissions and memory. I tried to copy the binary from the /usr/local/bin folder on my machine to the other one, but it could not find function references.
Is there a standard way to compile binaries with no dependencies, and if not, how do I set up the SoX binary so that it sees the correct dependencies when I only have write privileges in a temp folder?

You can compile by adding the -static flag to the compilation options in the Makefile. But be aware of any differences in glibc versions between your two (or more) Linux workstations. You want to make sure that you compile on (or target) the workstation with the older (or oldest) kernel, or your binary may not work due to dependencies on a newer kernel, which cannot be met by an older installation of Linux. So: basically, compile on your oldest machine for better results.
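For a simple single-file program, a minimal sketch looks like this (the file and program names are illustrative, not taken from the question):

gcc -static -o slowdown slowdown.c
file slowdown    # should report "statically linked"
ldd slowdown     # should report "not a dynamic executable"

For an autotools-based project such as SoX, something like ./configure LDFLAGS="-static" before running make usually has the same effect, though it depends on the project's build system.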

The most important thing you need to generate executables without dependencies is a static version of every library the executable will use. Usually, libraries are shared as well, meaning that if they need to call another library's functions, they use shared linking. To avoid picking up second-order dependencies, you need to compile all required libraries statically.
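As a hedged sketch, one way to make sure the static versions get picked up is to name the .a archives explicitly instead of using -l options, which may prefer the shared .so files (the library names and paths below are examples, not taken from the question):

gcc -o slowdown slowdown.c /usr/local/lib/libsox.a /usr/local/lib/libsndfile.a -lm -lpthread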

Related

How to run an executable file a.out created in my laptop's gcc environment on other laptops?

I have written a program in C and compiled and executed it with gcc. I want to share the executable file of the program without sharing the actual source code. Is there any way to share my program without revealing the actual source code, so that the executable file could run on other computers with gcc compilers?
Is there any way to share my program without revealing actual source code so that executable file could run on other computers with gcc compilers?
TL;DR: yes, provided a greater degree of similarity than just having GCC. One simply copies the binary file and any needed auxiliary files to a compatible system and runs it.
In more detail
It is quite common to distribute compiled binaries without source code, for execution on machines other than the ones on which those binaries were built. This mode of distribution does present potential compatibility issues (as described below), but so does source distribution. In broad terms, you simply install (copy) the binaries and any needed supporting files to suitable locations on a compatible system and execute them. This is the manner of distribution for most commercial software.
Architecture dependence
Compiled binaries are certainly specific to a particular hardware architecture, or in certain special cases to a small, predetermined set of two or more architectures (e.g. old Mac universal binaries). You will not be able to run a binary on hardware too different from what it was built for, but "architecture" is quite a different thing from CPU model.
For example, there is a very wide range of CPUs that implement the x86_64 architecture. Most programs targeting that architecture will run on any such CPU. Indeed, the x86 architecture is similar enough to x86_64 that most programs built for x86 will also run on x86_64 (but not vice versa). It is possible to introduce finer-grained hardware dependency, but you do not generally get that by default.
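As an illustration of that last point (the flags shown are standard gcc options; the program name is illustrative):

gcc -o prog prog.c                  # baseline code for the target architecture; runs on any x86_64 CPU
gcc -march=native -o prog prog.c    # tuned to the build CPU; may use AVX2 etc. and fail with SIGILL on older CPUs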
Operating system dependence
Furthermore, most binaries are built to run in the context of a host operating system. You will not be able to run a binary on an operating system too different from the one it was built for.
For example, Linux binaries do not run (directly) on Windows. Windows binaries do not run (directly) on OS X. Etc.
Library dependence
Additionally, a program built against shared libraries requires a compatible version of each required shared library to be available in the runtime environment. That does not necessarily have to be exactly the same version against which it was built; that depends on the library, which of its functions and data are used, and whether and how those changed over time.
You can sidestep this issue by linking every needed library statically, up to and including the C standard library, or by distributing shared libraries along with your binary. It's fairly common to just live with this issue, however, and therefore to support only a subset of all possible environments with your binary distribution(s).
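On glibc-based Linux systems you can inspect a binary's runtime library requirements with the standard ldd tool (the binary name is illustrative):

ldd ./prog    # lists each needed .so and where it resolves; "not found" entries flag missing libraries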
Other
There is a veritable universe of other potential compatibility issues, but it's unlikely that any of them would catch you by surprise with respect to a program that you wrote yourself and want to distribute. For example, if you use nVidia CUDA in your program then it might require an nVidia GPU, but such a requirement would surely be well known to you.
Executables are often specific to the environment/machine they were created on. Even if the same processor/hardware is involved, there may be dependencies on libraries that prevent executables from simply running on other machines.
A program that uses only "standard libraries" and links all libraries statically does not need any other dependency (in the sense that all the code it needs is in the binary itself or in OS libraries that, being part of the system itself, are already on the system).
You have to link the standard library statically. Otherwise it will only work if the version of the standard library for your compiler is installed in your OS by default (which you can't rely on, in general).

What is the point of using `-L` when there is `LD_LIBRARY_PATH`?

After reading this question, my first reaction was that the user is not seeing the error because he specifies the location of the library with -L.
However, apparently, the -L option only influences where the linker looks, and has no influence over where the loader looks when you try to run the compiled application.
My question then is what's the point of -L? Since you won't be able to run your binary unless you have the proper directories in LD_LIBRARY_PATH anyway, why not just put them there in the first place, and drop the -L, since the linker looks in LD_LIBRARY_PATH automatically?
It might be the case that you are cross-compiling and the linker is targeting a system other than your own. For instance, MinGW can be used to compile Windows binaries on Linux. Here -L will point to the DLLs needed for linking, and LD_LIBRARY_PATH will point to any libraries needed by the linker itself to run. This allows compiling and linking across different architectures, OS ABIs, or processor types.
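A sketch of that cross-compiling case, assuming the MinGW-w64 toolchain and an illustrative library path:

x86_64-w64-mingw32-gcc -o app.exe app.c -L/opt/win-libs/lib -lfoo    # libfoo here is hypothetical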
It's also helpful when trying to build special targets. It might be the case that one links a static version of a program against a different static library. This is the first step in Linux From Scratch, where one creates a separate mini-environment on the main system to become a chroot jail.
Setting LD_LIBRARY_PATH will affect all the commands you run to build your code (including the compiler itself).
That's not desirable in general (e.g. you might not want your compiler to run debug/instrumented libraries while it compiles - it might even go as far as breaking your compiles).
Use -L to tell the compiler where to look, LD_LIBRARY_PATH to influence runtime linking.
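Concretely (the names are illustrative):

gcc -o app app.c -L./libs -lfoo    # -L: where the linker finds libfoo at build time
LD_LIBRARY_PATH=./libs ./app       # where the dynamic loader finds libfoo.so at run time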
Building the binary and running the binary are two completely independent and unrelated processes. You seem to suggest that the running environment should affect the building environment, i.e. you seem to be making an assumption that the code built in some setup (account, machine) will later be run in the same setup. I find this assumption rather strange. I'd even say that in most cases the building and the running are done in different environments. I would actually prefer my compilers not to derive any assumptions about the future running environment from the environment in which they are invoked. Looking at the LD_LIBRARY_PATH of the building environment would be a major no-no.
The other answers are all good, but one thing nobody has mentioned yet is static libraries. Most of the time when you use -L it's with a static library built locally in your build tree that you don't intend to install, and it has nothing to do with LD_LIBRARY_PATH.
Compilers on Solaris support -R /runtime/path/to/some/libs, which adds to the path where libraries are searched by the run-time linker. On Linux the same can be achieved with -Wl,-rpath,/runtime/path/to/some/libs, which passes the -rpath /runtime/path/to/some/libs option to ld. GNU ld also supports -R /path/to/libs for compatibility with other ELF linkers, but this should be avoided, as -R is normally used to specify symbol files to GNU ld.
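A short sketch of baking a runtime search path into the binary and verifying it (the paths and library name are illustrative):

gcc -o app app.c -L/opt/mylibs -lfoo -Wl,-rpath,/opt/mylibs
readelf -d app | grep -E 'RPATH|RUNPATH'    # shows the embedded search path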

histogram function in ANSI C program: GSL and/or others?

If I just want to use the gsl_histogram.h library from the GNU Scientific Library (GSL), can I copy it from an existing machine (Mac OS Snow Leopard) that has GSL installed to a different machine (Linux CentOS 5.7) that doesn't have GSL installed, and just use an #include <gsl_histogram.h> statement in my C program? Would this work?
Or, do I have to go through the full install of GSL on the Linux box, even though I only need this one library?
Just copying the header gsl_histogram.h is not enough. A header merely states the interface exposed by the library. You would also need to copy binaries such as the *.so and *.a files, and it's hard to tell which ones to copy. So I think you'd better just install it on your machine. It's pretty easy; just use this tutorial to find and install the GSL package.
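Once GSL is installed, a minimal sketch of the histogram API looks like this (the bin count and sample values are arbitrary examples):

#include <stdio.h>
#include <gsl/gsl_histogram.h>

int main(void)
{
    gsl_histogram *h = gsl_histogram_alloc(10);     /* 10 bins */
    gsl_histogram_set_ranges_uniform(h, 0.0, 1.0);  /* uniform bins over [0, 1) */
    gsl_histogram_increment(h, 0.25);               /* add two sample values */
    gsl_histogram_increment(h, 0.75);
    gsl_histogram_fprintf(stdout, h, "%g", "%g");   /* print bin ranges and counts */
    gsl_histogram_free(h);
    return 0;
}

Compile with gcc histo.c -lgsl -lgslcblas -lm.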
There are surely a lot of libraries out there, but one particular alternative is Gnuplot. Using it you do not even need to compile code, though you do need to read a bit of documentation. Luckily there is already a question about how to draw a histogram with Gnuplot on Stack Overflow: Histogram using gnuplot? It is worth noting that Gnuplot is actually a very powerful tool, so time invested in reading its documentation will certainly pay off.
You cannot copy libraries from one OS to another and expect them to work unchanged.
OS X uses the Mach-O object file format while modern Linux systems use the ELF object file format. The usual ld.so(8) linker/loader will not know how to load the Mach-O format object files for your executable to execute. So you would need the Apple-provided ld.so(8) -- or whatever they call their loader. (It's been a while.)
Furthermore, the object files from OS X will be linked against the Apple-supplied libc and require the corresponding symbols from the Apple-supplied library, so you would also need to provide the Apple-supplied libc on the Linux system. That C library would try to make system calls using the OS X system call numbers and calling conventions. I guarantee the system call numbers differ, and almost certainly the calling conventions do too.
While the Linux kernel's binfmt_misc generic object loader can be used to teach the kernel how to load different object file formats, and the kernel's personality(2) system call can be used to select between different calling conventions, system call numbers, and so on, the amount of work required to make this work is nothing short of immense: the WINE project has been working on exactly this issue (but with the Windows PE/COFF format and supporting libraries) since 1993.
It would be easier to run:
apt-get install libgsl0-dev
or whatever the equivalent is on your distribution of choice. If your distribution does not make it easily available, it would still be easier to compile and install the library by hand rather than try to make the OS X version work.

Statically link Intel CRT

I am compiling C code using the Intel compiler, which I integrated with Visual Studio 2010. I want to generate an optimized executable that will run on a Windows machine; it is actually a virtual machine in the cloud. I don't have a chance to install any redistributable libraries on the target machine. I want to statically link all the required libraries. How can I do this?
I suppose you meant icl, since you're mentioning VS2010/Windows (icc would be the Linux/Mac version): just selecting 'Multi-threaded (/MT)' under Project settings -> Configuration properties -> C/C++ -> Code Generation should work. It will cause both the MSVC and Intel runtimes to be statically linked into the app.
But it also depends on which other libraries you are using; it might not work for all of them. In that case you can check the dependencies with depends.exe (http://www.dependencywalker.com/) and copy them side by side with your .exe to the target machine.
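On the command line, the equivalent is a sketch like this (assuming icl accepts the usual MSVC-compatible flags, which it generally does):

icl /MT /O2 app.c    # /MT statically links the runtime; /O2 enables optimization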
Try adding -i-static -static-libcxa to the final linkage.
This should force static linking of the Intel libraries only.
(You can also try -static as littleadv suggested in the comment, but this will produce a huge static executable with no shared libraries at all)
One more note: A simple workaround would be to copy the executable with the required shared libraries (those that do not exist at the host) to the same directory. Then set LD_LIBRARY_PATH=. before running your dynamically linked executable. This will force searching for libraries in the current directory as well as system directories.
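For example (the library and executable names are illustrative):

cp /path/on/build/machine/libfoo.so.1 /tmp/app/    # ship the missing .so next to the binary
cd /tmp/app && LD_LIBRARY_PATH=. ./myapp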
EDIT: I just noticed you said "windows machine". The above is relevant to UNIX machines so probably not useful to you. I'll leave it here in case someone needs the information.

Glibc and uClibc side by side on one system

Is it possible to have glibc and uClibc based applications running side-by-side on one system?
Background: We have a binary gcc-based cross-compiler configured to link with uClibc. We have cross-compiled glibc with it. Now we want to build some applications so that they link with glibc rather than uClibc. We don't want to rebuild the compiler.
There's no problem with glibc and uClibc living side by side, with some programs linking to one and other programs linking to the other. However, there is a problem with additional libraries. Each shared library on your system will be built against either glibc or uClibc (using the corresponding headers, which define distinct ABIs for the standard library functions), so, for example, if both a glibc program and a uClibc program need ncurses, you'll need two versions of ncurses built, and a way of ensuring that the correct one for a given program gets loaded at runtime. Alternatively, you could choose to use only one set of shared libraries and use static libraries for programs linked to the other libc, but you'd still need to build two sets of libraries.
Yes, it should be perfectly possible, but you might have to play around with LD_LIBRARY_PATH. If you are linking statically, change to dynamic linking.
It is nearly impossible to mix them in the same FHS, as the ABIs and include dirs are incompatible. However, you could install either of them at a directory offset by tweaking the dynamic-linker field in the ELF and exploiting the sysroot feature in gcc/binutils. An ongoing experiment in the Gentoo community, known as Prefix/libc, does exactly this [1].
[1] http://wiki.gentoo.org/wiki/Prefix/libc
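A sketch of what tweaking the dynamic-linker field looks like in practice (using the third-party patchelf tool; the paths are illustrative):

readelf -l ./app | grep interpreter    # shows which ld.so the binary requests
gcc -o app app.c -Wl,--dynamic-linker=/opt/uclibc/lib/ld-uClibc.so.0    # set it at link time
patchelf --set-interpreter /opt/uclibc/lib/ld-uClibc.so.0 ./app         # or rewrite an existing binary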
