I have a problem with a legacy 32-bit C application, built on Mac OS 10.4 with Xcode 2.2.0. It runs correctly on later (64-bit) systems (10.5 and later), but if I build it with later Xcode versions (2.2.1 - 3.2.6) on 10.5 and later, its behavior changes, even though I set the 10.4 SDK and GCC 4.0 everywhere (in "project settings" and "active target settings").
Details on the changed behavior: the application gets function addresses through CFBundleGetFunctionPointerForName() and then calls them; some of these functions return pointers. Then:
If the application is compiled with Xcode 2.2.0 on Mac OS 10.4 and run on 10.5+, the returned pointers never exceed LONG_MAX, so a signed long is sufficient for storing the call result.
If the application is compiled with Xcode 2.2.1+ on 10.5+ and run on 10.5+, the pointers can fall in the range LONG_MAX..ULONG_MAX.
Interpreting the result as signed long is essential, because functions that return a signed long (not a pointer) are called in exactly the same way. Changing this would be a kludge, so please don't suggest splitting the calls into functions that return pointers and functions that return an immediate result - that would be done only if I cannot get a correct build.
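To make the pattern concrete, here is a minimal sketch of the calling convention described above (the helper name and the error handling are mine, not from the original code):

    #include <CoreFoundation/CoreFoundation.h>

    /* Every looked-up function is called through the same signature; the result
       is kept in a signed long whether the callee returns a long or a pointer. */
    typedef long (*generic_fn)(void);

    long call_by_name(CFBundleRef bundle, CFStringRef name)
    {
        generic_fn fn = (generic_fn)CFBundleGetFunctionPointerForName(bundle, name);
        if (fn == NULL)
            return 0; /* symbol not found */
        return fn(); /* a pointer result above LONG_MAX shows up as a negative long */
    }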
I can explain this difference between the builds' behavior only by the newer build calling other bundles at runtime than the older build would call - and that is possible if 1) the CFBundle library, which calls the functions, is not the same as on a real 10.4 system, or 2) the system correctly determines the SDK of the older build and applies the right backward compatibility, but something in the newer project settings is wrong, so a different bundle is called at runtime.
So I wonder: are there any "hidden" options in Xcode or its backends that make a difference between a real 10.4 build and a build against the 10.4 SDK on a later OS?
Or how can I search for such differences?
Is there a way to force GCC and/or Clang to use the LP64 data model when targeting Windows (ignoring that Windows uses the LLP64 data model)?
No, because the requested capability would not work
You are "targeting Windows", presumably meaning you want the compiler to produce code that will run under Windows in the usual way. In order to do that, the program must invoke functions in the Windows API. There are effectively three versions of the Windows API: win16, win32, and win64. Since you want 64-bit pointers (the "P64" in "LP64"), the only possible target is win64.
In order to call a win64 function, you must include windows.h. That header file uses long. If there were a compiler switch to insist that long be treated as a 64-bit integer (LP64) rather than 32-bit (LLP64), then the compiler's understanding of how to call functions and lay out data structures that use long would be wrong; the resulting program would not run correctly.
The same problem applies to the standard C and C++ libraries. If you link to an existing compiled library (as is typical), the calls into it won't work (since it will use LLP64). If you were to build one from source using a hypothetical switch to force LP64, its calls into the Windows API would fail.
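To see the difference concretely, here is a small C program (mine, not part of the original answer) that prints the relevant sizes; a win64 (LLP64) compiler reports a 4-byte long alongside 8-byte pointers, while an LP64 compiler reports 8 bytes for both:

    #include <stdio.h>

    int main(void)
    {
        /* LLP64 (win64):            int = 4, long = 4, long long = 8, pointer = 8
           LP64 (64-bit Linux/macOS): int = 4, long = 8, long long = 8, pointer = 8 */
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        printf("pointer:   %zu\n", sizeof(void *));
        return 0;
    }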
But you can try Cygwin
Cygwin uses LP64 and produces binaries that run on Windows. That is possible, despite what I wrote above, because the Cygwin DLL acts as a bridge between the Cygwin LP64 environment and the native win64 LLP64 environment. Assuming you have code originally written for win32 that you now want to take advantage of a 64-bit address space with no or minimal code changes, I suspect this is the easiest path. But I should acknowledge that I've never used Cygwin in quite this way so there might be problems I am not aware of.
I am seeing undefined symbols when trying to link shared libraries with a program on Redhat Linux.
We are running Linux kernel 3.10.0, gcc 4.8.2 with libc-2.17.so, and libblkid 2.23.2
When I build the application I am writing, I get two undefined symbols from libblkid: memcpy@GLIBC_2.14 and secure_getenv@GLIBC_2.17. (A very similar build works on other machines, ostensibly using the same versions of everything.)
Note that for secure_getenv, libblkid wants the same version as the libc library itself.
Looking at the symbols defined in libc-2.17.so I find memcpy@@GLIBC_2.14, memcpy@GLIBC_2.2.5, secure_getenv, and secure_getenv@GLIBC_2.2.5. As I understand it, the double @ in the first memcpy version simply marks it as the default version. And, for some reason, even in this libc with versioned symbols, the first secure_getenv appears to be unversioned.
So, why does a requirement for memcpy@GLIBC_2.14 not match the defaulted memcpy@@GLIBC_2.14?
And logically I would expect the base version of secure_getenv in libc-2.17 to match a requirement for version 2.17.
So, what is going on here? What is making it fail on my development machine and not others? How do I fix this? (As the make works on other machines this appears to be something specific to my build environment, but what?)
You probably have compat-glibc installed, as indicated by the -L/usr/lib/x86_64-redhat-linux6E/lib64 argument. compat-glibc on Red Hat Enterprise Linux 7 provides glibc 2.12 only, so it cannot be used to link against system libraries.
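As an aside, if you ever need to bind a call to a particular symbol version deliberately (rather than fixing the toolchain environment, which is the real cure here), glibc's versioning can be steered from C with the .symver assembler directive. A minimal sketch, assuming an x86-64 glibc where memcpy@GLIBC_2.2.5 exists:

    #include <string.h>

    /* Request the pre-2.14 memcpy explicitly so the resulting object does not
       depend on memcpy@GLIBC_2.14 (the available versions differ per platform). */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    void copy_block(char *dst, const char *src, size_t n)
    {
        memcpy(dst, src, n); /* resolved against memcpy@GLIBC_2.2.5 at link time */
    }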
I'm building a library against the 5.0 SDK's GCC and running the code on a 4.2.x device.
I'm not using any 5.0-specific Objective-C calls in this layer, and the project is compatible to ship on 4.0+.
I'm seeing some really odd behavior in my library with my if blocks.
Typically this should work:
    struct {
        BYTE byteVal : 1; /* bit-fields must live inside a struct */
    } flags;

    flags.byteVal = FALSE;
    if (flags.byteVal)
        /* ALWAYS RUNS */
The problem is that the code in the if block always executes.
This is causing me problems with zlib gzip functionality. Is the 4.2.x OS using some offset or register alignment that isn't standard when building with the newer GCC?
I'm at a loss as to what is going on here and why this fails always on 4.2.x devices.
Any thoughts?
Use
    if (byteVal == 1)
There is some problem with single-bit-wide member variables where if (byteVal) evaluates as true even when the field should be false.
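For what it's worth, single-bit bit-fields are also sensitive to the signedness of the underlying type, which can make the == 1 comparison misbehave in the opposite direction: if the field's type is signed, a 1-bit field can hold only 0 and -1, so it compares unequal to 1 even when set. A self-contained illustration (the struct and names are mine; char bit-fields are a common compiler extension, and the exact behavior is implementation-defined):

    #include <stdio.h>

    struct flags {
        signed char s : 1;   /* can hold only 0 and -1 */
        unsigned char u : 1; /* can hold 0 and 1 */
    };

    int main(void)
    {
        struct flags f;
        f.s = 1; /* 1 does not fit; GCC stores the bit pattern, read back as -1 */
        f.u = 1;
        printf("f.s == 1 ? %d\n", f.s == 1); /* typically 0: -1 != 1, yet f.s is truthy */
        printf("f.u == 1 ? %d\n", f.u == 1); /* 1 */
        return 0;
    }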
Is there any compiler option to make time_t 64-bit in Solaris 5.8 with the Forte compiler? I need to develop the library as 32-bit, and I cannot change it to 64-bit as that affects existing client applications.
Sun does not (yet) provide any compiler option for this, other than compiling for 64-bit. If you simply need to be able to handle post-2038 dates in your 32-bit application (e.g. for a 30-year mortgage calculation) and do not need such dates in the Solaris kernel (e.g. current time, file timestamps), then there are packages that you can use within your application to handle such dates. For example y2038 is a simple package that provides a 64-bit time_t-like type and the corresponding replacements for localtime(), gmtime(), ctime(), etc. If you are not tied to the POSIX interfaces you could instead use something like libtai, which also handles leap seconds.
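To illustrate why the 32-bit time_t is the problem in the first place, here is a small standalone C program (assuming C99 fixed-width types, not any particular replacement package) showing the wrap-around one second past 2038-01-19 03:14:07 UTC:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t wide = (int64_t)INT32_MAX + 1; /* one second past the 32-bit limit */
        int32_t narrow = (int32_t)wide;        /* what a signed 32-bit time_t stores */
        printf("64-bit seconds: %lld\n", (long long)wide); /* 2147483648 */
        printf("32-bit time_t:  %ld\n", (long)narrow);     /* typically -2147483648 */
        return 0;
    }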
Short answer: no, there is no compiler option to make time_t a 64-bit value in a 32-bit application. It was extended to a 64-bit value for 64-bit applications, as this seemed like a good change to make, but for compliance with the various standards it has to be kept as a signed 32-bit value in 32-bit applications.
If you want to use a 64-bit value to represent time internally, then you will have to make sure that any values returned to the existing client applications do not overflow on the way out. If they can overflow, you need a way of reporting that to the client application, and an agreement on how clients deal with such values - all of which is part of the library's API.
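One way to enforce that boundary, as a hedged sketch (the helper name and error convention are mine, assuming C99 headers are available):

    #include <errno.h>
    #include <stdint.h>
    #include <time.h>

    /* Convert the library's internal 64-bit seconds count to the 32-bit time_t
       that existing clients expect, reporting overflow instead of wrapping. */
    int narrow_time(int64_t wide, time_t *out)
    {
        if (wide > INT32_MAX || wide < INT32_MIN) {
            errno = EOVERFLOW; /* post-2038 (or pre-1901) value */
            return -1;
        }
        *out = (time_t)wide;
        return 0;
    }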
Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to an incompatibility between the C libraries on the two platforms, the problem being that the code was compiled against a newer version of glibc etc. than the one available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform so that it will also run on SuSE 10.2.
I think the trick is to build on a flavour of linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, RedHat 3,4,5, SuSE 10 plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect that by building on a nice new version of Linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g. pre- and post-EGCS gcc versions; this might be your issue).
Packager issues (RPM vs. APT)
In your particular case, I'd have them run "gcc -v" on their system and report the gcc version number to you. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use the Linux Application Checker tool ([1], [2], [3]) to solve compatibility problems of an application between Linux distributions. It will check your file formats and all dependent libraries. It supports almost all popular Linux distributions, including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting and find the oldest distro you can find and build there. My own opinion is this is sort of a ridiculous idea. See point 3.
Distribute binaries, and statically link wherever you can. Especially for libstdc++, which appears to be your problem here. There are seemingly very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary, and use tricks like LD_PRELOAD or LD_LIBRARY_PATH to make them take precedence at runtime. Note that if you take this route you may have to comply with the LGPL etc., since you are now distributing other people's work alongside your project.
Of course, distributing your project in source form is always preferred on Linux. :-)
If the message is "file format not recognized", then the problem is most likely the one mentioned by elmarco in a comment - namely, a different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this - it could cause link failures and runtime problems (the latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge. So you may very well be right about library versions. Why don't you ask and see if you can get the source code and build it on your SuSE machine?