I've downloaded a Linux C SDK that comes with a bunch of static and dynamic libraries. The Readme has this to say:
This SDK was compiled with gcc version 4.5.1.
You SHOULD NOT mix this SDK's binaries with those built by other gcc versions, because
your application will end up loading two different libcs, which
results in two different heaps. Mixing heaps will lead to application
crashes when trying to free memory that was allocated by the other
heap.
I've never heard of anything like this, and a search on the web hasn't turned up any confirmation for it. What I did find was something about the ABI, but as I understand it, that just means the libraries might be incompatible with my GCC version in the sense that they don't run at all. This has nothing to do with libc versions or the heap.
So, is it true what the Readme says? Or, in more general terms: should I never try to use libraries I downloaded off the internet with a GCC version other than the one they were compiled with?
What if I want to use several libraries that were compiled with different GCC versions?
Thanks everyone,
Moritz
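To make the README's claim concrete, here is a minimal sketch of the failure mode it describes; sdk_strdup is a made-up stand-in for an SDK function, not anything from the real SDK:

#include <stdlib.h>
#include <string.h>

/* Stands in for an SDK function; in the failing scenario this call
   allocates from the SDK's own libc heap, not the application's. */
static char *sdk_strdup(const char *s)
{
    return strdup(s);   /* imagine this strdup comes from a second libc */
}

int main(void)
{
    char *p = sdk_strdup("hello");
    /* If two different libcs are loaded, this free() belongs to the
       other heap's allocator: metadata mismatch, corruption or abort. */
    free(p);
    return 0;
}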
It might be due to the optimisations used to build the SDK. A highly optimised binary has a higher chance of crashing when mixed.
It seems I've got a real problem here due to my lack of any knowledge about Linux systems:
I have downloaded some open source code, which
is written in C
uses complex.h, so I assume it is C99
comes with makefiles designed for compilation under Linux systems
provides interfaces to IDL, MATLAB, Python etc.
I am familiar with compiling C/MEX files under Windows-based MATLAB environments, but in this case I don't even know where to start. The project is distributed across several folders and consists of dozens of source and header files. And, to begin with, the Visual Studio 2010 compiler I've used to compile MEX files until now does not comply with the C99 standard, i.e. it does not recognize the complex.h header.
Any help towards getting this project compiled would be highly appreciated. In particular, I have the following questions:
1) Is there any way to automatically extract the compilation information from the MEX files and carry it over to a Windows build?
2) Is there any free compiler that can handle C99 and is also easy to use with MATLAB?
I have done this (moved in-house legacy code, including MEX files, to Win64). I can't recommend the experience.
You will have to recompile, no way around it.
The supported compilers for mex depend on your MATLAB version.
This File Exchange entry for using Pelles C may be a starting point (if it works with your version of MATLAB).
I am guessing that there is a main makefile which then works through the makefiles in the subdirectories. Have a read through the instructions for compiling under Linux; they will give you some idea of what's going on and may also discuss what to do if you want to change compiler. Once you've found a compatible compiler, the next stage is to understand what the makefiles are doing and edit them accordingly (change paths, the compiler, compiler flags, and so on).
Then, from memory (it was a while ago), you get to enjoy a magical mystery tour through increasingly obscure compiler errors. Document everything because if you do get it working, you won't be in a mood to do this twice.
MATLAB R2016b on Windows now supports the MinGW compiler. I'm successfully using this to compile code written primarily for Linux/gcc. I installed this from the Add-On menu in MATLAB (search MinGW).
For my case, I'm building with the legacy code tool. The only thing I needed to do differently from normal was to tell the compiler to support C99 via a compiler flag. This does the trick:
legacy_code('compile', def, {'CFLAGS=-std=c99'})
I had trouble getting the flag command just right (I had some extra quotes that apparently broke things), and asked The MathWorks, so credit is due to their support team for this.
If you are using mex, I would expect to do something very similar.
I would guess that the makefiles are irrelevant for your application; you will need to tell the mex or legacy_code function about all of the files necessary to build the whole application or link against pre-built libraries (which it sounds like you don't have).
I hope this helps!
I've written a Linux program in C, and I'm trying to get it to run on a server system. It looks like everything should work, but when I try it, I get this:
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by <program>)
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libdbi.so.1)
(Where <program> is my program's name.)
So far as I can tell, my program only requires that version of GLIBC because libdbi does. I've tried compiling libdbi from source, and it still attempts to link to that version of GLIBC.
I don't own the server system (it's a shared system I run a website on, with SSH access), so I can't make any changes to it; that's why the library file is in the same directory, and I've set LD_LIBRARY_PATH=. to match. Unfortunately I also don't have access to a compiler on it: when I try to run GCC, I'm told "permission denied". It's run by a big corporation, and I'm only one customer; the chances of them making any changes at my request are essentially zero.
Is there any way to compile the program on my system so that it will work on the server?
Before I asked, I found these similar questions:
Compile C program in Linux with different glibc library: the link in the answer goes to a 404 page, and from what I've been able to determine, apgcc isn't available on Debian distributions.
Relink a shared library to a different version of libc: seems to say that this problem doesn't exist, because "glibc tend to be backwards compatible" (except they apparently aren't in this case).
How to compile Linux C program to run on another Linux machine?: suggests a chroot or virtual machine, which I've done before elsewhere, but how can I tell it to use a libc without that old GLIBC version?
is binary executable file portable: suggests static linking, but libdbi dynamically loads its driver files, so that apparently can't be done -- I get several errors referring to missing functions like dlopen (see the sketch after this post).
There are others, but they seem to be variations on those.
I'd be willing to use a non-free solution (like one that I saw in another answer I can't find now) if I turn this into a commercial product, but for a single use it seems like massive overkill, not to mention the expense.
Is there any way to simply tell libdbi to link to a later GLIBC version, maybe? If not, is there any solution I've overlooked?
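For reference, libdbi-style driver loading looks roughly like the sketch below; dbd_mysql.so and dbd_query are illustrative names, not the real libdbi API. Because the drivers come in through dlopen at runtime, fully static linking of the main binary does not sidestep them, which is why the -static attempt fails.

/* Hedged sketch of runtime plugin loading (build with -ldl). */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *drv = dlopen("./dbd_mysql.so", RTLD_NOW);   /* loaded at runtime */
    if (!drv) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    /* look up a driver entry point by name */
    int (*query)(const char *) = (int (*)(const char *))dlsym(drv, "dbd_query");
    if (query)
        query("SELECT 1");
    dlclose(drv);
    return 0;
}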
Big corporation or not, if you are paying for the service in any way, or being paid to develop against it, the least they owe you is a careful description of the runtime environment so you can duplicate it on a development machine.
Then you must set out to systematically duplicate this environment. Since you're using libdbi you should be thorough. Database connections can exercise big chunks of the system API, so you want to have exactly the same version of Linux, gcc (even if you can't run it, you need to know the version other parts of the system were compiled with), and other tools and libraries. If you don't, you won't be able to have much confidence that your development machine tests translate to good behavior on the target.
A virtual machine is a good way to create a specialized development environment without messing up your existing one.
You must compile it on a machine that has the same version of glibc as the target machine, or an older version. Shared library compatibility works in that direction only.
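As an aside, if only a handful of symbols pull in GLIBC_2.14 (memcpy is the classic culprit, since memcpy@GLIBC_2.14 is exactly where that version requirement usually comes from), a fragile but sometimes workable trick is to pin those symbols to an older version with a .symver directive when you compile. A minimal sketch, assuming memcpy is the offender:

/* Ask for the pre-2.14 memcpy so the binary does not require
   GLIBC_2.14 at runtime (x86-64 glibc symbol version shown). */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <string.h>

int main(void)
{
    char dst[6];
    memcpy(dst, "hello", 6);   /* resolves to memcpy@GLIBC_2.2.5 */
    return 0;
}

Every object file that references the symbol needs the directive, which is why building in a VM with an older glibc (as suggested below) is usually the saner route.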
Find out what version of Linux the server uses, get a copy of it, and install it in a VM.
VirtualBox is good for this.
You can use this environment for testing code as well as for this particular compilation problem.
You have the following options:
Compile your code on the server machine (which likely has gcc installed)
Compile your program with statically linked libraries (option -static for gcc)
I'm an avid Python user and it seems that I require MinGW to be installed on my Windows machine to compile some libraries. I'm a little confused about MinGW and GCC. Here's my question (from a real dummy point of view):
So Python is a language which is both interpreted and compiled. There are Linux and Windows implementations of Python which one simply installs and then uses the binary to execute one's code. They come bundled with a bunch of built-in libraries that you can use. It's the same with Ruby, from what I've read.
Now, I've done a tiny bit of C, and I know that one has to compile it. It has its built-in libraries, which seem to be called header files, that you can use. Back in the school days, writing C meant typing code into a vi-like IDE called Turbo C and then hitting F9 to compile it. That's pretty much where my C education ends.
What is MinGW and what is GCC? I've been mainly working on Windows systems and have even recently begun using Cygwin. Aren't they the same?
A simple explanation hitting these areas would be helpful.
(My apologies if this post sounds silly/stupid. I thought I'd ask here. Ignoring these core bits never made anyone a better programmer.)
Thanks everyone.
MinGW is a complete GCC toolchain (including half a dozen frontends, such as C, C++, Ada, Go, and whatnot) for the Windows platform which compiles for, and links to, the Windows OS component C runtime library in msvcrt.dll. Rather than being comprehensive, it tries to be minimal (hence the name).
This means, unlike Cygwin, MinGW does not attempt to offer a complete POSIX layer on top of Windows, but on the other hand it does not require you to link with a special compatibility library.
It therefore also does not have any GPL licensing implications for the programs you write (notable exception: profiling libraries, but you will not normally distribute those, so that does not matter).
The newer MinGW-w64 comes with a roughly 99% complete Windows API binding (excluding ATL and such) including x64 support and experimental ARM implementations. You may occasionally find some exotic constant undefined, but for what 99% of the people use 99% of the time, it just works perfectly well.
You can also use the better part of what's in POSIX, as long as it is implemented in some form under Windows. The one major POSIX thing that does not work with MinGW is fork, simply because there is no such thing under Windows (Cygwin goes through a lot of pain to implement it).
There are a few other minor things, but all in all, most things kind of work anyway.
So, in a very very simplified sentence: MinGW(-w64) is a "no-frills compiler thingie" that lets you write native binary executables for Windows, not only in C and C++, but also other languages.
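To illustrate the fork gap: POSIX code that forks will not build under MinGW, so portable code usually branches on _WIN32 and uses the Windows process API instead. A hedged sketch (spawn_child and the child program name are made up for the example):

/* Run a child program and wait for it, portably. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
#include <string.h>
static void spawn_child(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char cmd[] = "child.exe";          /* CreateProcess may modify this */
    memset(&si, 0, sizeof si);
    si.cb = sizeof si;
    memset(&pi, 0, sizeof pi);
    if (CreateProcess(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
}
#else
#include <unistd.h>
#include <sys/wait.h>
static void spawn_child(void)
{
    pid_t pid = fork();                 /* no MinGW equivalent exists */
    if (pid == 0) {
        execl("./child", "child", (char *)0);
        _exit(127);
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);
}
#endif

int main(void)
{
    spawn_child();
    return 0;
}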
To compile a C program you need a C implementation for your specific computer.
C implementations consist, basically, of a compiler (with its preprocessor and headers) and a library (the ready-made executable code).
On a computer with Windows installed, the library that contains most of the ready-made executable code is not compatible with the gcc compiler, so to use this compiler on Windows you need a different library: that's where MinGW comes in. MinGW provides, among other things, the library(ies) needed for making a C implementation together with gcc.
The Windows library and MSVC together make a different implementation.
MinGW is a suite of development tools that contains GCC (among others), and GCC is a C compiler within that suite.
MinGW is an implementation of most of the GNU build utilities, like gcc and make, on Windows, while gcc is only the compiler. Cygwin is a much bigger and more sophisticated package, which installs a lot more than MinGW.
The only reason for the existence of MinGW is to provide a Linux-like environment for developers who cannot use the native Windows tools. It is inferior in almost every respect to the Microsoft toolchains on Win32/Win64 platforms, BUT it provides an environment where a Linux developer does not have to learn anything new AND can compile Linux code almost without modification. It is a questionable approach, but many people find that convenience more important than other aspects of development.
It has nothing to do with C or C++ as was indicated in earlier answers; it has everything to do with the environment the developer wants. The argument about GNU toolchains on Windows and their necessity is just that: an argument.
GCC - the Unix/Linux compiler,
MinGW - an approximation of GCC for the Windows environment,
Microsoft compiler and Intel compiler - more of the same, as the names suggest (both produce much, much better programs on Windows than MinGW, btw).
Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to incompatibility between the C/C++ libraries on the two platforms, the problem being that the code was compiled against newer versions of glibc and libstdc++ than the ones available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform so that it will run on SuSE 10.2 as well.
I think the trick is to build on a flavour of linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, RedHat 3,4,5, SuSE 10 plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect by building on a nice new version of linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g. pre- and post-EGCS gcc versions; this might be your issue)
Packager issues (RPM vs. APT)
In your particular case, I'd have them do a "gcc -v" on their system and report to you the gcc version number. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use the Linux Application Checker tool ([1], [2], [3]) to solve compatibility problems of an application between Linux distributions. It will check your file formats and all dependent libraries. It supports almost all popular Linux distributions, including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting and find the oldest distro you can find and build there. My own opinion is this is sort of a ridiculous idea. See point 3.
Distribute binaries, and statically link wherever you can. Especially for libstdc++, which appears to be your problem here. There are seemingly very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary, and use stuff like LD_PRELOAD or LD_LIBRARY_PATH to make the dynamic linker prefer them at runtime. Note that if you take this route you may have to comply with the LGPL etc., since you are now distributing other people's work alongside your project.
Of course, distributing your project in source form is always preferred on Linux. :-)
If the message is file format not recognized then the problem is most likely one mentioned by elmarco in a comment -- namely, different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this -- they could cause link failures and runtime problems (latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge. So you may very well be right about library versions. Why don't you ask and see if you can get the source code and build it on your SuSE machine?
The product group I work for currently uses gcc 3.4.6 (we know it is ancient) for a large low-level C code base, and wants to upgrade to a later version. We have seen performance benefits testing different versions of gcc 4.x on all the hardware platforms we tried. We are, however, very scared of C compiler bugs (for good reason historically), and wonder if anyone has insight into which version we should upgrade to.
Are people using 4.3.2 for large code-bases and feel that it works fine?
The best quality control for gcc is the Linux kernel. GCC is the compiler of choice for basically all major open source C/C++ programs. A released GCC, especially one like 4.3.x that ships in major Linux distros, should be pretty good.
GCC 4.3 also has better support for optimizations on newer CPUs.
When I migrated a project from GCC 3 to GCC 4, I ran several tests to ensure that behavior was the same before and after. Can you just run a set of (hopefully automated) tests to confirm the correct behavior? After all, you want the "correct" behavior, not necessarily the GCC 3 behavior.
I don't have a specific version for you, but why not have both a 4.x and 3.4.6 installed? Then you could try to keep the code compiling on both versions, and if you run across a show-stopping bug in 4, you have an exit strategy.
Use the latest one, but hunt down and understand each and every warning -Wall gives. For extra fun, there are more warning flags to frob (-Wextra, for instance). You do have an extensive suite of regression (and other) tests; run them all and check them.
GCC (particularly for C++, but also for C) has changed quite a bit. It does much better code analysis and optimization, and it handles code that turns out to invoke undefined behaviour differently. So code that "worked fine" but really relied on some particular interpretation of invalid constructions will probably break. Hopefully the compiler will emit a warning or error, but there is no guarantee of such luck.
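A classic instance of the kind of breakage meant here: signed integer overflow is undefined in C, and GCC 4.x exploits that during optimization where GCC 3.x typically did not. A minimal example:

/* With optimization, GCC 4.x may assume signed overflow cannot happen
   and fold this whole function to "return 1;", so callers that relied
   on wraparound at INT_MAX silently change behaviour. */
int will_not_overflow(int x)
{
    return x + 1 > x;   /* undefined when x == INT_MAX */
}

The -fwrapv flag restores wraparound semantics if you need the old behaviour while you clean such code up.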
If you are interested in OpenMP then you will need to move to gcc 4.2 or greater. We are using 4.2.2 on a code base of around 5M lines and are not having any problems with it.
I can't say anything about 4.3.2, but my laptop is a Gentoo Linux system built with GCC 4.3.{0,1} (depending on when each package was built), and I haven't seen any problems. This is mostly just standard desktop use, though. If you have any weird code, your mileage may vary.