How can I compile a Linux executable for a different machine?

I've written a Linux program in C, and I'm trying to get it to run on a server system. It looks like everything should work, but when I try it, I get this:
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by <program>)
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libdbi.so.1)
(Where <program> is my program's name.)
So far as I can tell, my program only requires that version of GLIBC because libdbi does. I've tried compiling libdbi from source, and it still attempts to link to that version of GLIBC.
I don't own the server system (it's a shared system I run a website on, and have SSH access to), so I can't make any changes to it -- that's why the library file is in the same directory, and I've set LD_LIBRARY_PATH=.. Unfortunately I also don't have access to a compiler on it -- when I try to run GCC, I'm told "permission denied". It's run by a big corporation, and I'm only one customer; the chances of them making any changes at my request are essentially zero.
Is there any way to compile the program on my system so that it will work on the server?
Before I asked, I found these similar questions:
Compile C program in Linux with different glibc library: the link in the answer goes to a 404 page, and from what I've been able to determine, apgcc isn't available on Debian distributions.
Relink a shared library to a different version of libc: seems to say that this problem doesn't exist, because "glibc tend to be backwards compatible" (except they apparently aren't in this case).
How to compile Linux C program to run on another Linux machine?: suggests a chroot or virtual machine, which I've done before elsewhere, but how can I tell it to use a libc that doesn't have that newer GLIBC version?
is binary executable file portable: suggests static linking, but libdbi dynamically loads its driver files at runtime, so that apparently can't be done -- I get several errors referring to missing functions like dlopen (see the sketch after this list).
There are others, but they seem to be variations on those.
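For context on that last point, here is a minimal sketch of the kind of runtime plugin loading libdbi's drivers rely on (the driver filename below is hypothetical, purely for illustration). dlopen() depends on the dynamic loader, which is why a fully static build breaks:

/* build: gcc demo.c -ldl
   dlopen() needs the dynamic loader at runtime, so a -static build of
   a program (or library) that does this is problematic. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *drv = dlopen("./dbd_example.so", RTLD_NOW);  /* hypothetical driver */
    if (!drv) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    dlclose(drv);
    return 0;
}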
I'd be willing to use a non-free solution (like one I saw in another answer that I can't find now) if I turn this into a commercial product, but for a single use it seems like massive overkill, not to mention the expense.
Is there any way to simply tell libdbi to link against an earlier GLIBC version, maybe? If not, is there any solution I've overlooked?

Big corporation or not, if you are paying for the service in any way (or being paid to develop against a requirement), the least they owe you is a careful description of the runtime environment, so you can duplicate it on a development machine.
Then you must set out to systematically duplicate this environment. Since you're using libdbi, you should be thorough: database connections can exercise big chunks of the system API, so you want exactly the same version of Linux, of gcc (even if you can't run it on the server, you need to know which version the rest of the system was compiled with), and of the other tools and libraries. If you don't, you won't be able to have much confidence that tests on your development machine translate to good behavior on the target.
A virtual machine is a good way to create a specialized development environment without messing up your existing one.

You must compile it on a machine that has the same version of glibc as the target machine, or an older version. Shared library compatibility works in that direction only.
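If only one or two symbols drag in the newer version (with this exact error, memcpy is the usual suspect, since glibc 2.14 introduced a new memcpy symbol version), a known workaround is to bind those symbols to an older version with a .symver directive. A minimal sketch, assuming x86-64 (where GLIBC_2.2.5 is the base symbol version) and that memcpy is the only offending symbol -- check with objdump -T <program> | grep GLIBC_2.14:

/* Force memcpy to bind to the old GLIBC_2.2.5 version so the resulting
   binary no longer requires GLIBC_2.14. The directive must appear in
   every translation unit that calls memcpy. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <string.h>
#include <stdio.h>

int main(void)
{
    char dst[6];
    memcpy(dst, "hello", 6);  /* now resolves to memcpy@GLIBC_2.2.5 */
    puts(dst);
    return 0;
}

Note this only helps for code you compile yourself (including a rebuilt libdbi); it cannot retroactively change an already-linked .so.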

Find out what version of Linux the server uses, get a copy of it, and install it in a VM.
VirtualBox is good for this.
You can use this environment for testing code as well as for this particular compilation problem.

You have the following options:
Compile your code on the server machine (which likely has gcc installed)
Compile your program with statically linked libraries (option -static for gcc); see the sketch below.
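A minimal sketch of the static option (hedged: -static only works if a .a archive of every library you use exists, and, as the question above notes, libraries that load plugins at runtime, like libdbi, don't mix well with it):

/* hello.c -- build fully static and verify:
 *   gcc -static -o hello hello.c
 *   file hello    -> "statically linked"
 *   ldd hello     -> "not a dynamic executable"
 * A static binary carries no GLIBC_x.y version requirement at load time. */
#include <stdio.h>

int main(void)
{
    puts("hello from a statically linked binary");
    return 0;
}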

Related

GLIBC version not found when executing AppImage on different distro

I'm currently working on an application that I would like to publish to many distributions. So far, I've done all my testing on one distribution at a time (compile and run on the same distro). But when I take the AppImage produced by compiling on my main computer (Arch Linux) and try to run it in a VM (Ubuntu 20.04), it gives me the error below:
gabriel@gabriel-VirtualBox:~/Downloads$ ./Neptune.Installer-x86_64.AppImage
./Neptune.Installer-x86_64.AppImage: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./Neptune.Installer-x86_64.AppImage)
What possible solutions are there to this? I've considered statically linking the library, but I'm unsure if that might cause licensing issues, as my program is not open source. Apart from that, I might consider simply compiling my program on a very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros (for example, will my program still work on an old version of Fedora?)
This might be a complicated question but I just want to know what the best way to solve this issue is. Change libraries? Statically link? Compile on old distributions? Let me know what you think.
I've considered statically linking the library, but I'm unsure if that might cause licensing issues,
Yes.
very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros
It doesn't carry over in general (consider Alpine Linux, which uses musl rather than glibc). If you compile software, you have to run it with the set of libraries you compiled it against. Even if you compile on "very old distributions" there may be changes.
publish to many distributions
Choose the list of distributions and versions of those distributions you want to support, and tell users that you will support those distribution versions. (https://partner.steamgames.com/doc/store/application/platforms -> Steam officially supports only Ubuntu 12.04 LTS or newer.)
Compile against every distribution+version combination separately, and distribute your software separately for each one. For users' convenience, create and share package repositories for each distribution's package manager. On https://www.zabbix.com/download there are only so many combinations to choose from. Look into CI/CD in Docker-virtualized environments; I like GitLab.
Or, alternatively, distribute your application with all dependent shared libraries. Bundle it together with the operating system and distribute it in the form of a Docker image or a QEMU/VirtualBox virtual machine image. Or distribute just the shared library files with a wrapper around LD_PRELOAD, just like Steam does: install Steam on your system and see what happens in ~/.steam/steam/ubuntu12_64.
And hire a lawyer to solve the licensing issues.

Using Linux instead of UNIX to compile C code for a CS course

A CS course I'm taking online suggests students compile their source code and run tools like valgrind on UNIX. I'm completely new to UNIX, Linux, their tools, and coding in C. I've made some attempts at installing FreeBSD 8.1 on VMware Player 3.1.3, and even managed to get VMware Tools running. But the FreeBSD documentation has led me down many dead ends in accomplishing common tasks, e.g. mounting an NFS share or a USB device. It turns out that the packages I need to make this happen aren't installed or configured, and I don't see any straight answer on how to install them.
So, if I'm using UNIX only as a tool to run gcc, g++, and valgrind for this CS course, and these can be run on Linux instead, it seems like I can get the job done faster using Ubuntu Linux.
Will C code compiled and run on Linux behave identically to the same code compiled and run on UNIX? If not, what are the differences to look for?
Thanks
For a novice-level C programmer such as the OP, the difference in environment is negligible. Go ahead with Linux.
I think for the purposes of the course you could run your programs and tools on Linux, but I'd guess the reason your teacher wants you to use FreeBSD is so that you learn things beyond just coding up your problems.
The two should be effectively the same. The only major difference you might see would be due to different versions being used. I would check what versions of gcc, g++, and valgrind the teacher is having you use, and make sure that you have the same versions running on your install of Linux.
You can also use MinGW or Cygwin. You mentioned VMware, so I'm guessing you're trying to get an environment up and running under Windows. Both let you use the compiler and some of the tools without a full install of a Linux-based system, and in a CS course they would be more than enough.
The main differences to look for:
Compiled C / C++ is not machine-independent, so you'll need at least a small UNIX environment to build in anyway if you have to submit compiled programs to your professor.
C / C++ source is rather portable if you don't use anything that's non-portable. It's very hard to verify that you didn't use something that differs between the two machines, so you may wish to compile on UNIX to make sure you didn't let an unavailable library (or an OS-specific procedure, argument, behavior, bug, etc.) slip into your code.
The make implementation may differ between the two machines (e.g. GNU make on Linux vs. BSD make on FreeBSD). While the core of make will operate similarly, certain features might not be available in both. In reality you probably won't use most of make's extended features, but in a worst-case scenario you might opt to maintain multiple Makefiles or limit yourself to a common subset of features.
At the end of the day, it all boils down to what your professor will want. Odds are 95+% that you can do 100% of the work in Linux, but the prof's requirements or grading environment might be such that you will have to copy your code into a UNIX account to build the final "submission" executable. Considering that university UNIX accounts aren't nearly as portable as Linux on a laptop, the cost of the "final verification / porting" to the University computer is likely to be small compared to the convenience of working on your homework more hours than you can manage in a fixed lab.

How to create a working executable file (.exe) from C code

I have a C program created in Plato3. I want to create an .exe file so I can share it with others.
Can someone please tell me how this is possible?
I have tried sending the .exe file that is created when it's normally compiled, but it crashes every time it runs on computers other than mine ...
Please help,
Thanks :)
[EDIT]
Program run on Windows XP or Vista .. same error:
Compiler used : SilverFrost (Fortran/C/C++) Development Studio (Plato3)
This application has failed to start because salflibc.dll was not found, reinstalling the application may fix this problem
salflibc.dll is a library installed by the compiler on your development machine.
salf = Salford C Compiler, the obscure compiler included in Silverfrost
libc = C-language runtime support library, necessary for the basic functionality of any program
.dll = dynamically-linked library, i.e. a separate file from your .exe file
You might look for a compiler option that looks like "statically link runtime library;" this might eliminate the DLL dependency. However, if the compiler were capable of doing that, one would expect it to be the default, if not the only way.
However, I recall from the olden days of Classic Mac OS that DLL-style runtime libraries were sometimes used, the benefit being upgradability. Sometimes is the key word, though. (I suppose when the compiler vendor is the OS vendor, as with MSVC or Apple GCC, it is the norm.)
Another trick from that environment was to put the DLL in question in the application's directory and distribute it with the app. Typically runtime DLLs are licensed for free redistribution.
At the very least you have to make sure that the executable is running on the same architecture/operating system that it was compiled on.
Additionally, you need to make sure that any third-party or system libraries that are needed are available on the other systems too.
Update
Based on the new information and error message you provided, it looks like you need to redistribute salflibc.dll alongside your program.
I would agree with the other commenters and suggest a different development platform that is more mainstream and better supported.

Binary compatibility between Linux distributions

Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to incompatibility between the C libraries on the two platforms, the problem being that the code was compiled against a newer version of glibc, etc., than the one available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform in such a way that it will run on SuSE 10.2 as well.
I think the trick is to build on a flavour of Linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, and Red Hat 3, 4, 5, SuSE 10, plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect by building on a nice new version of linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g. pre- and post-EGCS gcc versions; this might be your issue)
Packager issues (RPM vs. APT)
In your particular case, I'd have them do a "gcc -v" on their system and report to you the gcc version number. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use the Linux Application Checker tool ([1], [2], [3]) to diagnose compatibility problems of an application between Linux distributions. It checks your file formats and all dependent libraries, and supports almost all popular Linux distributions, including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting: find the oldest distro you can and build there. My own opinion is that this is a somewhat ridiculous idea; see point 3.
Distribute binaries, and statically link wherever you can. Especially for libstdc++, which appears to be your problem here: there are very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary and use stuff like LD_PRELOAD or LD_LIBRARY_PATH to make them take precedence at runtime (see the sketch after this list). Note that if you take this route you may have to comply with the LGPL etc., since you are now distributing other people's work alongside your project.
Of course, distributing your project in source form is always preferred on Linux. :-)
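If you take the "*.so files alongside your binary" route from option 3, here is a small sketch for verifying at runtime which copy the loader actually picked (dladdr() is a glibc/GNU extension; malloc stands in for any symbol from a library you bundle):

/* check.c -- build: gcc check.c -ldl
   run:   LD_LIBRARY_PATH=./lib ./a.out */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    Dl_info info;
    /* ask the dynamic loader which object file provided this symbol */
    if (dladdr((void *)&malloc, &info) && info.dli_fname)
        printf("malloc resolved from: %s\n", info.dli_fname);
    return 0;
}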
If the message is "file format not recognized" then the problem is most likely the one mentioned by elmarco in a comment -- namely, a different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this; it could cause link failures and runtime problems (the latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge, so you may very well be right about library versions. Why don't you ask whether you can get the source code and build it on your SuSE machine?

How do I cross-compile C code on Windows for a binary to also be run on Unix (Solaris/HPUX/Linux)?

I've been looking into Cygwin/MinGW/lcc, and I'd like to be able to compile Perl native C extensions on my Windows machine (preferably under Cygwin) and then run them on Solaris and HP-UX without any further fuss. Is this possible?
This all stems from my original perl cross-platform question here.
(This is a very old question, but missing some useful info --
I've personally done this for Solaris (SPARC & x86), AIX, HP-UX and Linux (x86, x64).)
Getting C++ cross-compiled is much harder than straight C.
HP-UX 32-bit PA-RISC is not supported because it uses SOM format instead of ELF and binutils doesn't (and likely won't ever) support SOM. In other words, you can only cross-compile 64-bit PA-RISC. (Requires PA-RISC 2.0 chip.)
I would go with MinGW instead of Cygwin if you can. Cygwin introduces a lot of file permission headaches and cygwin1.dll dependencies that can be troublesome. If possible, however, build on Linux. Everything will be much faster, because all the tools and scripts you're running are designed for an environment where exec and stat are fast operations; Windows + NTFS is not that environment.
Start with the crosstools script, but be prepared to spend a lot of time on this.
Try the very latest gcc/binutils first, but if you can't overcome problems, try dropping back to older packages. E.g. for POWER3 (AIX), the gcc 4.x series cross-compiler generates bad code; 3.x is fine.
When copying native libs and headers make sure you are copying from the oldest machine you're likely to run on. Copying a new libc means your code won't run on any machine with an older libc.
When copying native libs and headers, you probably want 'tar -h' to turn symlinks into actual files. Also watch out that on Solaris some requisite crt object files are buried in a cc directory, not under /usr/lib.
Cross-compilers are very hard to set up and get working correctly.
Consider that (the people at) NetBSD have to put in a huge amount of work to get cross-compiling to work, and they're running the same OS, just different architectures.
You'd have to, at least, copy all the headers from the other OSs to Windows, and get a cross-compiler, linker etc for the target OS/architecture.
Also, that may well not be possible -- Perl and its shared libraries may be compiled with a native/non-gcc compiler which won't be available on Windows at all.
I agree with Douglas that getting a cross-compiler up and working is very hard to do; it is generally your choice of last resort. If you are bootstrapping, or making a binary for an embedded device, then cross-compiling is often your only option. You should be comfortable compiling your own gcc under Cygwin before considering cross-compiling. To cross-compile, you need to build a gcc that runs under Windows but creates binaries for your execution platform. Sample instructions for doing this can be found here.
Perhaps you want to cross-compile because you don't have root and/or can't compile on your target platform. For example, I had a hosting provider that ran Red Hat Linux. I could run Perl CGI scripts and associated modules, but I could not compile on the target machine, and any libraries I built had to exist in my own directory.
To solve this, I could have attempted to cross-compile for my target platform, but instead I decided to set up a similar host inside a VM on Windows. From within Cygwin, you can create a script which ssh's into your VM, copies your source over, and does a full configure/build. The last step was to deploy the binary artifact onto my hosted system.
I've successfully had both Solaris 10 and OpenSolaris running within a VM on Windows. Unfortunately, you might have a harder time running HP-UX under a VM.
Why don't you read up on the "Grand Unified Builder" (http://lilypond.org/gub/ and http://valentin.villenave.info/The-LilyPond-Report-11 (section #4))?
I don't know how it works, but GUB allows the LilyPond developers to compile for about 11 platforms on a Linux box.
Compile on Windows, then use Wine to run the binaries on any *nix. It works well most of the time.
No, this isn't possible at the binary level. There are too many differences at the binary level between the various OSes and CPUs.
But what you can do is make your C extensions source-compatible so that they can be compiled for different platforms. C was designed as a "portable assembly language": as long as you stick with routines that are cross-platform, they will usually work the same, as in the sketch below. You'll still need to test, because there could be bugs that exist only on a particular platform.
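A small illustration of "source compatible" (hedged: standard C and POSIX interfaces only; swap in something platform-specific, such as Linux-only epoll, and the source stops being portable):

/* The same file compiles unchanged on Solaris, HP-UX, and Linux because
   it sticks to standard C and POSIX interfaces. */
#include <stdio.h>
#include <unistd.h>  /* POSIX: available on all the platforms named above */

int main(void)
{
    printf("pid %ld on this platform\n", (long)getpid());
    return 0;
}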
This can't be done ... but is it really that much of a hassle to recompile the code under Solaris or HP-UX?
