When building a compiler, one must specify the Linux headers version and the minimum supported kernel version, in addition to the glibc version. And then there are the actual kernel version and the glibc version (with its own kernel headers version and minimum supported kernel version) on the target machine. I'm rather confused trying to understand how these versions fit together.
Example 1: Assume I have a system with glibc 2.13 built against kernel headers 3.14. Does that make any sense? How is it possible for glibc 2.13 (released in 2011) to use new kernel features from 3.14 (released in 2014)?
Example 2: Assume I have a compiler with a glibc version newer than 2.13. Will compiled programs work on a system with glibc 2.13? And what if the compiler's glibc version is older than 2.13?
Example 3: From https://sourceware.org/glibc/wiki/FAQ#What_version_of_the_Linux_kernel_headers_should_be_used.3F I understand that it's OK to use an older kernel as long as it satisfies the "minimum kernel version" used when compiling glibc. But I don't understand the passage "The other way round (compiling the GNU C library with old kernel headers and running on a recent kernel) does not necessarily work as expected. For example you can't use new kernel features if you used old kernel headers to compile the GNU C library." Is that the only thing that can happen to me? Won't something in glibc break if the kernel is newer than at compile time?
Example 4: Do more subtle differences in glibc settings influence anything (for example, linking an executable against glibc version 2.X compiled against kernel headers 3.Y with minimum supported kernel version 2.6.A, and executing it on a system with the same glibc 2.X, but compiled against kernel headers 3.Z with minimum supported kernel version 2.6.B)? I suspect they don't, but would like to be sure.
So many questions :) Thanks!
You cannot easily (for any definition of "easily") use newer kernel features with older versions of glibc. If you really need to, you can invoke system calls directly (using the syscall() library function) and dig whatever constant values and data structures you need out of the user-space kernel headers (the stuff which in newer kernels lives under include/uapi). On the other hand, kernel developers usually promise not to break legacy features in newer kernels, so older glibc versions keep working as expected (well, almost).
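To illustrate the syscall() route, here is a minimal sketch (my own example, not from the answer above) that calls getrandom(2) directly; the fallback syscall number is the x86-64 one and would normally be dug out of the uapi headers:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>       /* syscall() */
    #include <sys/syscall.h>  /* SYS_* numbers, if the installed headers know them */

    #ifndef SYS_getrandom
    #define SYS_getrandom 318 /* x86-64 value, taken from the kernel's uapi headers */
    #endif

    int main(void)
    {
        unsigned char buf[16];
        long n = syscall(SYS_getrandom, buf, sizeof buf, 0);
        if (n < 0) {
            perror("getrandom");  /* e.g. ENOSYS if the running kernel is too old */
            return 1;
        }
        printf("read %ld random bytes straight from the kernel\n", n);
        return 0;
    }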
Older programs still work with newer versions of glibc because glibc supports versioning of symbols (see here for some details: https://www.kernel.org/pub/software/libs/glibc/hjl/compat/). If your program is dynamically linked against a newer version of glibc without special provisions (as described in the link above), you will not be able to run it with an older version of the glibc libraries (the dynamic linker will complain about unresolved symbols, because the required symbol versions will not be available).
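One common form of those "special provisions" is pinning individual symbols to older versions with a .symver directive; a minimal sketch (the GLIBC_2.2.5 version string is an x86-64 assumption on my part; check what your target glibc actually exports, e.g. with objdump -T libc.so.6):

    #include <stdio.h>
    #include <string.h>

    /* Bind memcpy against the old GLIBC_2.2.5 version instead of the newest
       one the build machine offers (e.g. memcpy@GLIBC_2.14), so the binary
       still resolves on older glibc installations. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[6];
        memcpy(dst, "hello", sizeof dst);
        puts(dst);
        return 0;
    }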
I am a newbie at this, and yesterday I installed Windows 8.1 x64, so I would like to use the most suitable program for my C/C++ tasks.
Thanks
MinGW supports only 32-bit binaries; TDM supports 32- and 64-bit binaries (using MinGW's APIs).
If you need a good GCC for Windows and can live with 32-bit only, use MinGW.
If you want to build 64-bit binaries too, you can use TDM.
Both released GCC 4.8 at approximately the same time, so there's no real difference in how up to date they are.
My recommendation: use the third alternative, MinGW-w64, instead - it's an extended MinGW with support for 64-bit. See here for a short description of MinGW-w64.
Whatever choice you make, it's better to use the official developers' websites for downloading (not Orwell's) to get the most up-to-date version.
MinGW
MinGW-w64
TDM-GCC
Btw., you'll find a good overview there of what makes each one special compared to the others.
NB: the homepage of mingw-w64 used to be on sourceforge but is now at http://mingw-w64.org ; links have been updated accordingly.
For those interested in 32-bit binaries:
Note that the Code::Blocks IDE comes with MinGW, but the compiler is actually the 32-bit version of TDM-GCC. The TDM version links the runtime statically by default, which makes executables portable to systems without MinGW installed. The TDM-GCC project also seems to pick up the latest GCC releases faster than the other projects.
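With a plain MinGW GCC you can request the same static runtime linkage explicitly; a minimal sketch, with made-up file names (for C++ you would add -static-libstdc++ as well):

    /* Build e.g. with:
     *
     *   gcc -static-libgcc -o hello.exe hello.c
     *
     * so hello.exe does not depend on the compiler's runtime DLLs. */
    #include <stdio.h>

    int main(void)
    {
        puts("runs on machines without the MinGW runtime DLLs installed");
        return 0;
    }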
The MinGW distribution also doesn't use POSIX emulation to access threads on Windows (unlike MinGW64 or TDM64). There is a separate download providing headers with C++11-compliant thread functionality for MinGW.
I want to compile/link on a newer Solaris version (libc.so SUNW_1.22.6) for a system with an older Solaris (libc.so SUNW_1.22.4). How can I specify that the linker (on the new version) should build a binary that uses the older (1.22.4) libc.so?
In general, UNIX systems support backward compatibility (a program built on an older system continues to work on a newer system), but not the opposite: a program built on a newer system may not work on an older system.
For this reason, build your program on the oldest OS release you are going to support.
How can I specify that the linker (on the new version) should build a binary that uses the older (1.22.4) libc.so
You would need a "new Solaris -> old Solaris" cross-compiler for that. GCC can be built for such cross-compilation, but this is not trivial. Building on an older system is usually a much simpler approach.
Don't call any functions that aren't in SUNW_1.22.4. The linker records the minimum dependency based on the functions linked to.
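A minimal sketch of that idea (the check below is my own suggestion, and myprog/myprog.c are hypothetical names): if the program only calls interfaces that existed in SUNW_1.22.4 or earlier, only those version sets are recorded as requirements, which you can verify on Solaris with pvs:

    /* Build and inspect, e.g.:
     *
     *   cc -o myprog myprog.c
     *   pvs -r myprog      # lists the libc.so.1 version sets the binary requires
     */
    #include <stdio.h>

    int main(void)
    {
        /* printf is an old interface, so it does not pull in SUNW_1.22.6 */
        printf("loads against the older libc as well\n");
        return 0;
    }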
Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to incompatibility between the C libraries on the two platforms, with the problem being that the code was compiled against a newer version of glibc etc. than the one available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform in such a way that it will run on SuSE 10.2 as well.
I think the trick is to build on a flavour of linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, RedHat 3,4,5, SuSE 10 plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect by building on a nice new version of linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g. pre- and post-EGCS gcc versions; this might be your issue).
Packager issues (RPM vs. APT)
In your particular case, I'd have them do a "gcc -v" on their system and report to you the gcc version number. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use the Linux Application Checker tool ([1], [2], [3]) to solve compatibility problems of an application between Linux distributions. It will check your file formats and all dependent libraries. It supports almost all popular Linux distributions, including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting: find the oldest distro you can and build there. My own opinion is this is sort of a ridiculous idea. See point 3.
Distribute binaries, and statically link wherever you can. Especially for libstdc++, which appears to be your problem here. There are seemingly very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary and use things like LD_PRELOAD or LD_LIBRARY_PATH to make them link preferentially at runtime (a rough sketch follows after this list). Note that if you take this route you may have to comply with the LGPL etc., since you are now distributing other people's work alongside your project.
Of course, distributing your project in source form is always preferred on Linux. :-)
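As a rough sketch of option 3 (the flags, paths, and names below are my own illustration, not a prescription): link the GCC runtime statically where it helps, and make the binary look for bundled .so files next to itself:

    /* Possible build, assuming the bundled shared libraries are shipped in a
     * libs/ directory next to the executable (all names hypothetical):
     *
     *   gcc -o myapp myapp.c -Wl,-rpath,'$ORIGIN/libs'
     *
     * For C++ code, -static-libgcc -static-libstdc++ avoids the kind of
     * GLIBCXX version mismatch shown in the question.  Without relinking,
     * the same effect can be had at launch time with:
     *
     *   LD_LIBRARY_PATH=./libs ./myapp
     */
    #include <stdio.h>

    int main(void)
    {
        puts("resolving bundled shared libraries from ./libs");
        return 0;
    }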
If the message is file format not recognized then the problem is most likely one mentioned by elmarco in a comment -- namely, different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this -- they could cause link failures and runtime problems (latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge. So you may very well be right about library versions. Why don't you ask and see if you can get the source code and build it on your SuSE machine?
I know Microsoft recommends against linking to msvcrt.dll, so please spare me that warning. They do it all the time in their own software (like WinDbg), and they won't introduce breaking changes, since all VC6 apps link against msvcrt.dll.
Linking against msvcrt.dll has several benefits: a small executable and easy deployment, since msvcrt has been there since Win98 and I don't have to bundle a few MB of C runtime with my installer.
Now, is it possible to use gcc to link against the C library in msvcrt.dll instead of glibc? If yes, how?
Thanks!
AFAIK the MinGW port of gcc does link your program to msvcrt.dll.
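As a quick check (hello.c/hello.exe are made-up names), a plain C program built with MinGW's gcc imports its C runtime from msvcrt.dll, which you can confirm by looking at the import table:

    /* Build and inspect, e.g.:
     *
     *   gcc -o hello.exe hello.c
     *   objdump -p hello.exe      (look for "DLL Name: msvcrt.dll")
     */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from msvcrt's printf\n");  /* resolved from msvcrt.dll */
        return 0;
    }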