Compatibility of dynamic library (.so) versions - C

I have software compiled and running on CentOS 5. Now I am interested in running it on CentOS 6, unmodified, without recompiling it on the new machine.
Here is the challenge:
My process requires a certain library, say libcap.so.1, but CentOS 6 ships a newer version, say libcap.so.2.
I am able to make the old software work with either of the hacks below:
create a symlink (libcap.so.1) pointing to libcap.so.2
copy libcap.so.1 from the old machine to the new machine (CentOS 6)
Which of the two is recommended, and are there any known issues with either approach? Compiling on CentOS 6 would be my last option.
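For concreteness, the two hacks look roughly like this as commands (the remote host name and exact paths are placeholders):
# hack 1: point the old soname at the newer library already on CentOS 6
ln -s /lib64/libcap.so.2 /lib64/libcap.so.1 && ldconfig
# hack 2: copy the exact library the binary was built against from the CentOS 5 box
scp centos5-box:/lib64/libcap.so.1* /lib64/ && ldconfig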

Creating a symlink like that will sometimes work, depending on exactly what changes led to the new version: they would have to be minimal changes that, in particular, did not remove any routines or variables the old library exposed, and did not drastically change the signatures of the routines or variables that remain. If the library uses symbol versioning (as the GNU C library does), it's possible the .2 version still contains the full API the previous version provided, but I don't know off the top of my head whether libcap does that...
If the new library does not encapsulate the old API, your safest bet is to recompile. If your application is not mission-critical, and/or you can deal with crashes and the possibility of corrupted data, it shouldn't hurt (much) to just try the symlink route...
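If you want to check before committing to either hack, a quick inspection along these lines should tell you whether the newer library is versioned and still exports what the old binary imports (the binary name is a placeholder):
# does libcap.so.2 define versioned symbols at all?
readelf -V /lib64/libcap.so.2
# which libcap symbols does the old binary actually need?
objdump -T ./your_old_binary | grep 'UND.*cap_'
# and does the new library still export them?
objdump -T /lib64/libcap.so.2 | grep ' cap_'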

Related

Do later Ubuntu and gcc versions cover older versions?

My professor is using an automated scoring program for my programming assignment. It is a C program handling some file operations.
He asks students to use Ubuntu 18.xx and gcc 7.xx. I asked him if I could use later versions of those, namely 20.xx and 9.xx respectively, and he wasn't sure; he said it probably wouldn't be a problem, but to use the exact versions just in case.
I don't want to remove my current Ubuntu and gcc and reinstall the exact versions he mentioned, because it would take some time and I have to keep using this laptop for many more assignments from other classes. I want to use the later versions (though it seems like quite a big gap between 18.xx/7.xx and 20.xx/9.xx)!
Are there any potential problems with using my current versions?
It would also be possible to check the default C standard of the gcc version used by your teacher and pass the corresponding -std flag, "-std=c11" for example.
(I'm unfortunately in a rush and can't check it right now, but when I have more time I can try to look it up if you still haven't found the information.)
It may also be possible that he already uses a specific standard; in that case it's better to ask him.
Bugs might arise, but I think the risk is pretty negligible if you use the same standard.
Otherwise, if you are using a Debian-based distro, you can also install his gcc version alongside your current one and switch between them with "update-alternatives".
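A rough sketch of the update-alternatives route, assuming gcc-7 is still available in your release's repositories and gcc-9 is your current default:
sudo apt install gcc-7
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70
sudo update-alternatives --config gcc    # interactively choose which one /usr/bin/gcc points to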
If not, you can use a Docker container.
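For example, something along these lines compiles with the grader's toolchain without touching your installed system (assignment.c is a placeholder name, and gcc-7 is assumed to be available in the ubuntu:18.04 repositories):
docker run --rm -v "$PWD":/src -w /src ubuntu:18.04 sh -c \
  "apt-get update && apt-get install -y gcc-7 && gcc-7 -std=c11 -o assignment assignment.c"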
Something else to watch for is whether you are going to use OS-specific libraries or functions: for example, Windows is not POSIX compliant, so I had to use WSL, but Linux (and macOS as well, if I remember correctly) are.

Avoiding too specific dependencies

I am using a shared C library on Linux that is distributed in binary form. The problem is that the dependencies are set to require exactly the versions available on the development machine. For example, each release requires the (at the time) latest glibc and only the exact version of libreadline on their system.
I have contacted the developers and they don't know what to do about this. As far as I can tell, they are not consciously using the latest features, so the library should continue to work with older dependencies. I think they are using gcc on Linux, but they are also using a complex make system to control other compilers to build for Windows and Unix.
How, and to what extent, can you manage the build process so that a library requires only a sufficiently recent version of each dependency and will also accept later versions?
This was a related question.
Edit: To be clear, I want to know how to build programs so they will accept dependencies with a specific version number or later numbers. Whether the developers compile it or I do, I want to be able to distribute a binary that does not require exactly the versions of dependencies present in the build environment.
Edit 2: After rephrasing the question, I realized this has been covered many times before. Some of the best Q&A:
Deploying Yesod to Heroku, can't build statically
Compile with older libc
Linking against an old version of libc
How can I link to a specific glibc version?
It's not very confidence-inspiring. They should be building on a stable baseline release; it could just be an install in a virtual machine. Some Linux distributions build packages in a clean, copied build environment precisely so that packages aren't linked against updated library versions.
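As a quick check of what their current build actually demands, something like this (with libfoo.so standing in for their library) lists every glibc symbol version it references; the highest one is the minimum glibc it will run against:
objdump -T libfoo.so | grep -o 'GLIBC_[0-9.]*' | sort -Vu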
The openSUSE Build Service lets developers build binary packages for a wide variety of distributions: http://openbuildservice.org/about/
IIRC readline is GPL software, and checking at http://cnswww.cns.cwru.edu/php/chet/readline/rltop.html#Availability suggests it is GPL v3, so if they are using libreadline functions they may be in violation of the GPL and should provide you with the source to their library. I am not sure whether you mean rpm/apt package dependencies, or that their library is actually calling libreadline.
You can always extract files from rpm or deb packages if necessary, avoiding package-manager issues caused by poor packaging.
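Extracting by hand looks roughly like this (package file names are placeholders):
rpm2cpio some-package.rpm | cpio -idmv        # RPM-based systems
dpkg-deb -x some-package.deb extracted-dir    # Debian/Ubuntu packages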

How can I compile a Linux executable for a different machine?

I've written a Linux program in C, and I'm trying to get it to run on a server system. It looks like everything should work, but when I try it, I get this:
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by <program>)
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libdbi.so.1)
(Where <program> is my program's name.)
So far as I can tell, my program only requires that version of GLIBC because libdbi does. I've tried compiling libdbi from source, and it still attempts to link to that version of GLIBC.
I don't own the server system (it's a shared system I run a website on, and have SSH access to), so I can't make any changes to it -- that's why the library file is in the same directory, and I've set LD_LIBRARY_PATH=.. Unfortunately I also don't have access to a compiler on it -- when I try to run GCC, I'm told "permission denied". It's run by a big corporation, and I'm only one customer; the chances of them making any changes at my request are essentially zero.
Is there any way to compile the program on my system so that it will work on the server?
Before I asked, I found these similar questions:
Compile C program in Linux with different glibc library: the link in the answer goes to a 404 page, and from what I've been able to determine, apgcc isn't available on Debian distributions.
Relink a shared library to a different version of libc: seems to say that this problem doesn't exist, because "glibc tend to be backwards compatible" (except they apparently aren't in this case).
How to compile Linux C program to run on another Linux machine?: suggests a chroot or virtual machine, which I've done before elsewhere, but how can I tell it to use a libc without that old GLIBC version?
is binary executable file portable: suggests static linking, but libdbi dynamically loads its driver files, so apparently that can't be done -- I get several errors referring to missing functions like dlopen.
There are others, but they seem to be variations on those.
I'd be willing to use a non-free solution (like one that I saw in another answer I can't find now) if I turn this into a commercial product, but for a single use it seems like massive overkill, not to mention the expense.
Is there any way to simply tell libdbi to link against an older GLIBC version, maybe? If not, is there any solution I've overlooked?
Big corporation or not, if you are paying for the service in any way, or being paid to develop something that must run there, the least they owe you is a careful description of the runtime environment so you can duplicate it on a development machine.
Then you must set out to systematically duplicate this environment. Since you're using libdbi you should be thorough. Database connections can exercise big chunks of the system API, so you want to have exactly the same version of Linux, gcc (even if you can't run it, you need to know the version other parts of the system were compiled with), and other tools and libraries. If you don't, you won't be able to have much confidence that your development machine tests translate to good behavior on the target.
A virtual machine is a good way to create a specialized development environment without messing up your existing one.
You must compile it on a machine that has the same version of glibc as the target machine, or an older version; shared-library compatibility works in that direction only.
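Two quick checks can help pin that down (paths as in the question; GLIBC_2.14 is the version from the error message):
# on the server: glibc prints its own version when run directly
/lib64/libc.so.6
# on the build machine: see exactly which symbols pull in the GLIBC_2.14 requirement
objdump -T ./libdbi.so.1 | grep GLIBC_2.14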
Find out what version of Linux the server uses, get a copy of it, and install it in a VM.
VirtualBox is good for this.
You can use this environment for testing code as well as for this particular compilation problem.
You have the following options:
Compile your code on the server machine (which likely has gcc installed)
Compile your program with statically linked libraries (option -static for gcc)
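A rough sketch of the second option (main.c is a placeholder, and it assumes a static libdbi archive exists; note that fully static glibc binaries can still misbehave where NSS or dlopen-style plugins are involved):
gcc -static -o myprogram main.c -ldbi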

Shared libraries and binaries in C

I took over a fairly large C codebase. There are lots of legacy binaries that require old versions of shared libraries. The server has newer versions of those exact libraries. I could recompile, or set up symbolic links that map the older versions to the new ones. Setting up symbolic links will take some time - is there any standard or smart way to do this? I am new to this and would appreciate any tips. This is all C in a FreeBSD environment.
Thanks.
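As a first step rather than a recommendation (the answers below favour recompiling), you could at least inventory what is missing before deciding case by case; the library names here are hypothetical:
# list what each legacy binary fails to resolve
ldd ./legacy_binary | grep 'not found'
# only if you accept the risk, alias an old soname to the installed newer one
ln -s /usr/local/lib/libexample.so.8 /usr/local/lib/libexample.so.7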
In general when updating legacy code with new libraries, it is best to perform a check by recompiling the source code against the new libraries and their includes. This will allow you to use the compiler to check for inconsistencies between the old and new libraries in areas such as data types, function signatures, etc.
By recompiling you also are able to check that the new libraries provide all of the dependencies that you need.
Finally, doing a recompile will help you check that you are in fact able to recompile and link everything and have all of the necessary components.
I would feel uncomfortable trying to take a shortcut such as using symbolic links.
The shared-library version number is only supposed to be changed when the ABI changes. (Old versions of FreeBSD didn't quite get this right, and it's fixed in more recent versions, but only for system libraries!) So the only way to make those applications work properly is to either recompile them, or supply the exact version of the shared library they were linked against. For programs that only depend on old versions of the FreeBSD system libraries, you can install the compat[45678]x packages, which provide the versions of the libraries supplied with the specified version of the OS (a short example follows the pitfalls below) -- but there are significant pitfalls:
1) If some of the libraries your application depends on are linked against newer versions of the standard libraries than your application itself is, the dynamic linker will give you two incompatible copies of the standard library, and things are not likely to work.
2) If your application loads external modules or plug-ins using dlopen(), all bets are off, because these modules are not versioned.
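A hypothetical example of the compat route (the package name follows the compat[45678]x scheme above; on older FreeBSD releases the install command would be pkg_add -r instead):
pkg install compat6x-amd64
ldd ./legacy_binary    # confirm the old binary now resolves all of its libraries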
FreeBSD 8 and newer use symbol versioning for the C library and some other important system libraries, so those libraries should never change library version again and ABI compatibility will be preserved. Many third-party developers are not so careful, and will both break ABI without changing the library version, and change the library version without breaking the ABI, so you can't win. (Some developers don't read the documentation and think that the shared-library version number should be the same as the product's version number.)

Binary compatibility between Linux distributions

Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to an incompatibility between the C libraries on the two platforms, with the problem being that the code was compiled against newer versions of glibc and libstdc++ than the ones available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform in such a way that it will run on SuSE 10.2 as well.
I think the trick is to build on a flavour of Linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, and RedHat 3, 4, 5, SuSE 10, plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect that by building on a nice new version of Linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g. pre- and post-EGCS gcc versions; this might be your issue).
Packager issues (RPM vs. APT)
In your particular case, I'd have them do a "gcc -v" on their system and report to you the gcc version number. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use the Linux Application Checker tool ([1], [2], [3]) to troubleshoot compatibility problems of an application across Linux distributions. It will check your file formats and all dependent libraries. It supports almost all popular Linux distributions, including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting: find the oldest distro you can and build there. My own opinion is that this is sort of a ridiculous idea. See point 3.
Distribute binaries, and statically link wherever you can, especially for libstdc++, which appears to be your problem here. There are seemingly very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary, and use stuff like LD_PRELOAD or LD_LIBRARY_PATH to make them link preferentially at runtime. Note that if you take this route you may have to comply with the LGPL etc., since you are now distributing other people's work alongside your project.
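Rough sketches of both variants of that last option (binary and directory names are made up, and -static-libstdc++ needs a reasonably recent g++):
# variant A: statically link just the GNU runtime pieces behind the GLIBCXX errors
g++ -o myapp main.o -static-libstdc++ -static-libgcc
# variant B: ship the .so files next to the binary and start it through a small wrapper script
LD_LIBRARY_PATH="$(dirname "$0")/libs:$LD_LIBRARY_PATH" exec "$(dirname "$0")/myapp.bin" "$@"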
Of course, distributing your project in source form is always preferred on Linux. :-)
If the message is "file format not recognized" then the problem is most likely the one mentioned by elmarco in a comment -- namely, a different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this -- such incompatibilities could cause link failures and runtime problems (the latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge, so you may very well be right about library versions. Why don't you ask and see if you can get the source code and build it on your SuSE machine?

Resources