Is the -mx32 GCC flag implemented (correctly)?

I am trying to build a program, running on a Linux-based x86_64 machine (the host), that communicates with a 32-bit embedded system. In the host program I have a structure containing a few pointers that mirrors an identical structure on the embedded system.
The problem is that on the host, pointers are natively 64 bits, so the offsets of the structure members are not the same as on the embedded system. Thus, when copying the structure over (with memcpy), the contents end up in the wrong places in the host copy.
struct {
    float a;
    float b;
    float *p;
    float *q;
} mailbox;
// sizeof(mailbox) is 4*4 = 16 on the embedded system, but 2*4 + 2*8 = 24 on the host
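A quick way to see the mismatch on the host (a small sketch; the struct is given a tag here so offsetof can be used):

#include <stddef.h>
#include <stdio.h>

struct mailbox {
    float a;
    float b;
    float *p;
    float *q;
};

int main(void)
{
    /* On a typical LP64 Linux/x86_64 host this prints p=8 q=16 size=24,
     * while on the 32-bit target p is at 8, q at 12, and the size is 16. */
    printf("p=%zu q=%zu size=%zu\n",
           offsetof(struct mailbox, p),
           offsetof(struct mailbox, q),
           sizeof(struct mailbox));
    return 0;
}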
Luckily, I found out here that gcc has an option -mx32 for generating 32-bit pointers on x86_64 machines. But, when trying to use this, I get an error saying:
$ gcc -mx32 test.c -o test.e
cc1: error: unrecognized command line option "-mx32"
This is for gcc versions 4.4.3 and 4.7.0 20120120 (experimental).
Why doesn't this option work? Is there a way around this?
EDIT: According to the v4.4.7 manual, there was no -mx32 option available, and this is true up to v4.6.3. OTOH, v4.7.0 does show that option, so it may be that the Jan-20 snapshot I am using is not the final one?!

Don't do this. First, x32 is a separate architecture, not merely a compiler switch. You need an x32 version of every library you link against to make this work. Linux distros aren't yet producing x32 versions, so that means you'll either be linking statically or rolling your own library environment.
More broadly: that's just asking for trouble. If your structure contains pointers, they should be pointers. If it contains "32-bit addresses", they should be a 32-bit integer type, as in the sketch below.
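A minimal sketch of that suggestion (the mailbox_wire name and comments are illustrative, and it assumes 4-byte floats and identical padding on both sides):

#include <stdint.h>

/* Host-side mirror of the embedded mailbox: the target's 32-bit
 * pointers are carried as 32-bit integers, so the member offsets
 * and the total size (16 bytes) match the embedded layout. */
struct mailbox_wire {
    float    a;
    float    b;
    uint32_t p; /* address in the embedded system's 32-bit space */
    uint32_t q; /* address in the embedded system's 32-bit space */
};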

You might need a newer version of binutils, though I think GCC 4.8 is recommended.
But in general you need a kernel and multilib toolchain built with x32 support: https://unix.stackexchange.com/questions/121424/linux-and-x32-abi-how-to-use

Related

Solaris 10 and 11 headers incompatible! What to do?

We have a product we ship on Solaris amd64 (and x86, SPARC); we have a single pkg that installs on both Solaris 10 and 11.
We call some functions from /usr/include/bsm/audit.h, specifically getaudit_addr, and between Solaris 10 and 11 the ABI changed dramatically, for a start reordering the struct fields and changing their lengths:
struct auditinfo_addr {
    au_mask_t     ai_mask;
    au_id_t       ai_auid;
    au_asid_t     ai_asid;
    au_tid_addr_t ai_termid;
}; /* Sol 11 version */

struct auditinfo_addr {
    au_id_t       ai_auid;
    au_mask_t     ai_mask;
    au_tid_addr_t ai_termid;
    au_asid_t     ai_asid;
}; /* Sol 10 version */
So, our code uses dlopen/dlsym to get a handle to getaudit_addr, which unsurprisingly bails out in a horrible way if you compile on Sol10 and run on Sol11 (because we're using a completely mangled structure). This is not good.
Question
Would we be OK if we linked with -lbsm rather than used dlopen? If so, how, because I can't find any version of the Sol10 ABI symbols inside Sol11's libbsm.so using nm (and both Sol10 and Sol11's version of libbsm have the following version symbols: SUNW_0.7, SUNW_0.8, SUNW_1.1, SUNW_1.2). Update: No, linking with -lbsm on Solaris 10 doesn't make the code run correctly on Solaris 11. It's just a disgusting breaking ABI change they made. Grr.
If Solaris does have symbol versioning that works, can we do it dynamically?
I'm inclined to copy the structure definitions and do a run-time switch between the old and new ones rather than use the headers. Is there another fall-back solution?
Do the headers depend on the architecture? That is, is audit.h identical on SPARC, amd64, and x86? Obviously the size of typedef'd types may change, but will I need to hunt down a Solaris 11 SPARC machine to copy its header and check whether it matches the x86 one?
Really, if the ABI is incompatible, you need to treat this the same way as an incompatible processor type. That is: build two versions, install or build the correct version from your installer, check at runtime that you are running the correct version, and quit if you are not.
Alternatively, if this is literally the only structure that changes, you could get away with a typedef'd auditinfo_addr_v10 vs auditinfo_addr_v11, and ship either two versions of every function that uses the struct, or a conversion function that is used on V10 to convert everything to the latest structure. I.e. supply your own getaudit_addr_wrapper which takes a v11 structure but converts to the correct call on v10, as sketched below.
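A rough sketch of that wrapper (assumptions: you keep private copies of both layouts, you have already detected the OS version at startup, e.g. via uname, and the au_* member types are compatible enough to copy across; since au_tid_addr_t itself also changed, that member may need its own field-by-field conversion):

#include <dlfcn.h>

/* Private copies of both layouts, so we do not depend on whichever
 * audit.h we happened to compile against. */
struct auditinfo_addr_v10 {
    au_id_t       ai_auid;
    au_mask_t     ai_mask;
    au_tid_addr_t ai_termid;
    au_asid_t     ai_asid;
};

struct auditinfo_addr_v11 {
    au_mask_t     ai_mask;
    au_id_t       ai_auid;
    au_asid_t     ai_asid;
    au_tid_addr_t ai_termid;
};

/* Always hand the caller the v11 layout, converting when running
 * on Solaris 10. */
int getaudit_addr_wrapper(struct auditinfo_addr_v11 *out, int on_sol10)
{
    typedef int (*getaudit_fn)(void *, int);
    getaudit_fn fn = (getaudit_fn)dlsym(RTLD_DEFAULT, "getaudit_addr");
    if (fn == NULL)
        return -1;

    if (!on_sol10)
        return fn(out, (int)sizeof *out);

    struct auditinfo_addr_v10 v10;
    int rc = fn(&v10, (int)sizeof v10);
    if (rc == 0) {
        out->ai_auid   = v10.ai_auid;
        out->ai_mask   = v10.ai_mask;
        out->ai_asid   = v10.ai_asid;
        out->ai_termid = v10.ai_termid; /* may need deeper conversion */
    }
    return rc;
}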

Can an executable file generated by compiling C code be copied and run on a different OS (UNIX)?

I am a Java programmer, but I have a few things to be done in C. So, I started with a simple example as below. If I have compiled it and generated an executable file (hello), can I run the executable file (hello) on any Unix platform without the original file (hello.c)? And also, is there a way to read the data from the executable file, meaning, decompile the executable file back to the original file (hello.c)?
[oracle@oracleapps test]$ cat hello.c
#include <stdio.h>
int main() {
    int i, data = 0;
    for (i = 1; i <= 64; i += 1) {
        data = i * 2;
        printf("data=%d\n", data);
    }
    return 0;
}
To compile
gcc -Wall -W -Werror hello.c -o hello
You can run the resulting executable on platforms that are ABI-compatible with the one you compiled the executable for. ABI compatibility basically means that the same physical processor architecture and OS interfaces (plus calling convention) are used on two (possibly different) OSes. For example, you can run binaries compiled for Linux on a FreeBSD system (with the same processor type), because FreeBSD includes Linux ABI compatibility. However, it may not be possible to run a binary on all other types of Unices unless some hackery is done. For example, you can't run Mac OS X applications on Linux; however, this guy has a solution with which it's possible to use some OS X command line tools (including the GCC compiler itself) on Linux.
Reverse engineering: there are indeed decompilers that aim to generate C code from machine code, but they're not (yet) very powerful. The reason for this is that they're by nature extremely hard to write. Machine code patterns have to be recognized, and even then you can't recover all the original information. For example, types of loops, comments, non-static local variable names, and most of the types are all gone during the compilation process. For example, if you have a C source file like this:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i;
    for (i = 0; i < 10; i++)
    {
        printf("I is: %d\n", i); /* Write the value of I */
    }
    return 0;
}
a C decompiler may be able to reconstruct the following code:
int main(int _var1, void *_var2)
{
    int _var3 = 0;
    while (_var3 < 10)
    {
        printf("I is: %d\n", _var3);
        _var3 = _var3 + 1;
    }
    return 0;
}
But this would be a rather advanced decompiler, such as this one.
You can't run the executable on any platform.
You can run the executable on other machines (or this one) without the .c file, provided it is the same OS/distro running on the same hardware.
You can use a decompiler or disassembler to read the file and view it as assembly or C, but the result won't look much like the original .c file.
The compiled file is pure machine code (plus some metadata), so it is self-sufficient in that it does not require the source files to be present. The downside? Machine code is both OS- and platform-specific. By platform, we usually mean roughly the CPU's instruction set, i.e. "x86" or "PowerPC", but some code compiled with certain compiler flags may require specific instruction set extensions. The OS dependence is caused not only by different formats for executable files (e.g. ELF as opposed to PE), but also by the use of OS-specific services, or common OS services in an OS-specific manner (e.g. system calls). In addition to that, almost all nontrivial code depends on some libraries (a C runtime library at least), so you probably won't be able to run an executable without having the right libraries in compatible versions. So no, your executable likely won't run on a 10-year-old proprietary UNIX, and may not run on different Linux distributions (though with your program there's a good chance it does, because it likely only depends on glibc).
While machine code can be easily disassembled, the result is very low-level and useless to many people. Decompilation to C is almost always much harder, though there are attempts. The algorithms can be recovered, simply because they have to be encoded in the machine code somehow. Assuming you didn't compile for debugging, it will never recover comments, formatting, variable names, etc. so even a "perfect" decompiler would yield a different C file from the one you put in.
No ... each platform may have different executable format requirements, different hardware architectures, different executable memory layouts determined by the linker, etc. A compiled executable is "native" to the platform it was compiled for, not other platforms. You can cross-compile for another architecture on your current machine though.
For instance, even though they may have many similarities, a compiled executable on Linux x86 is not guaranteed to run under BSD, depending on its flavor (i.e., you could probably run it under FreeBSD but typically not under OS X's Darwin version of BSD, even though both machines may have the same underlying hardware architecture). You also couldn't compile something on an SGI MIPS machine running IRIX and run it on a Sun SPARC running Solaris.
With C programs, the program is tied to the environment it was compiled for (which is usually the same as the platform it was compiled on, unless you are cross-compiling). You could copy something built for one version of Linux (and a particular hardware architecture) to another machine with the same architecture running the same version of Linux, and you'll be fine. You can often get away with running it on a related version of Linux. But you won't get x86/64 code to run on an IA32 machine, nor on a PPC machine, nor on a SPARC machine. You can likely get IA32 code to run on an x86/64 machine, if the basic O/S is sufficiently similar. And you may or may not be able to get something compiled for Debian to run under RedHat or vice versa; it depends on which libraries your program uses.
Java avoids this by having a platform-neutral byte code program that is compiled, and a platform-specific JVM (JRE) to run it on each platform. This "write once, run anywhere" (WORA) behaviour was a key selling point for Java.
Yes, you can run it on any Unix that QEMU runs on. This is pretty comparable to Java programs, which you can run on any Unix the JVM runs on...

Atomic Operations in C on Linux

I am trying to port some code I wrote from Mac OS X to Linux and am struggling to find a suitable replacement for the OS X-only OSAtomic.h. I found the gcc __sync* family, but I am not sure it will be compatible with the older compiler/kernel I have. I need the code to run on GCC v4.1.2 and kernel 2.6.18.
The particular operations I need are:
Increment
Decrement
Compare and Swap
What is weird is that running locate stdatomic.h on the Linux machine finds the header file (in a C++ directory), whereas running the same command on my OS X machine (gcc v4.6.3) returns nothing. What do I have to install to get the stdatomic library, and will it work with gcc v4.1.2?
As a side note, I can't use any third party libraries.
Well, nothing stops you from using the OSAtomic operations on other platforms. The sources for the OSAtomic operations for ARM, x86 and PPC are part of Apple's libc, which is open source. Just make sure you are not using OSSpinLock, as that is specific to Mac OS X, but it can easily be replaced by Linux futexes.
See these:
http://opensource.apple.com/source/Libc/Libc-594.1.4/i386/sys/OSAtomic.s
http://opensource.apple.com/source/Libc/Libc-594.1.4/ppc/sys/OSAtomic.s
http://opensource.apple.com/source/Libc/Libc-594.1.4/arm/sys/OSAtomic.s
Alternatively, you can use the __sync* family, which I believe should work on most platforms; they are described here: http://gcc.gnu.org/wiki/Atomic
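A minimal sketch of the three operations you need, using those builtins (they should be available in GCC 4.1.2):

#include <stdio.h>

static int counter = 0;

int main(void)
{
    __sync_fetch_and_add(&counter, 1); /* atomic increment */
    __sync_fetch_and_sub(&counter, 1); /* atomic decrement */

    /* compare and swap: store 42 only if counter is still 0 */
    int swapped = __sync_bool_compare_and_swap(&counter, 0, 42);

    printf("counter=%d swapped=%d\n", counter, swapped);
    return 0;
}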
The OpenPA project provides a portable library of atomic operations under an MIT-style license. This is one I have used before and it is pretty straightforward. The code for your operations would look like
#include "opa_primitives.h"
OPA_int_t my_atomic_int = OPA_INT_T_INITIALIZER(0);
/* increment */
OPA_incr_int(&my_atomic_int);
/* decrement */
OPA_decr_int(&my_atomic_int);
/* compare and swap */
old = OPA_cas_int(&my_atomic_int, expected, new);
It also contains fine-grained memory barriers (i.e. read, write, and read/write) instead of just a full memory fence.
The main header file has a comment showing the operations that are available in the library.
GCC atomic intrinsics (the __sync* builtins) have been available since GCC 4.1.
There is nothing stopping you from building GCC 4.7 or Clang with GCC 4.1.2 and then getting all the newer features, such as C11 atomics.
There are many places where you can find BSD-licensed assembler implementations of atomics as a last resort.

Use 32bit shared library from 64bit application?

I have created a simple Linux 32-bit shared library (.so) for my rendering wrappers, but I've hit a wall when I figured out that I can only use it from 32-bit applications...
This is how my code looks:
RendIFace.h:
//Basic renderer interface
struct Renderer
{
    int type;
    /* ...other things */
};
GLRend.c:
#include <stdlib.h>
#include "RendIFace.h"

struct Renderer* GLRendererCreate(int width, int height, int bytesPerPixel)
{
    struct Renderer* rend = (struct Renderer*)malloc(sizeof(struct Renderer));
    rend->type = GLR;
    /* ...other things */
    return rend;
}
SDLRend.c:
#include <stdlib.h>
#include "RendIFace.h"

struct Renderer* SDLRendererCreate(int width, int height, int bytesPerPixel)
{
    struct Renderer* rend = (struct Renderer*)malloc(sizeof(struct Renderer));
    rend->type = SDLR;
    /* ...other things */
    return rend;
}
I compile both as 32-bit shared libraries (.so) and load them from the main application.
But now there is a big problem: my libraries are all 32-bit and return 32-bit pointers, which means I can't use them from a 64-bit application without rebuilding the whole library code base(!!!).
So I would like to ask more experienced people: how do I handle this issue? Is it possible to use just a single shared library for both architectures???
You must be consistent. A 64-bit application can only use 64-bit libraries and a 32-bit application can only use 32-bit libraries. Both work; either choice is fine, and it's possible to compile the same code for both systems.
If you go for 'all 32-bit', use:
gcc -m32
If you go for 'all 64-bit', use:
gcc -m64
Sometimes, I'll tell make that the C compiler is gcc -m32 (or -m64) rather than just gcc to ensure the right value is used everywhere.
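For example, assuming a Makefile that respects the conventional CC variable:

make CC="gcc -m32"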
You can't do what you're asking: you must compile both the final executable and any libraries (both static and shared) for the same architecture.
With GCC, this can be done easily by passing the command-line argument -m32, either directly on the command line or by adding it to CFLAGS in your Makefile.
While it is possible to run x86 code on an x86_64 operating system (you just need to have all the right libraries and their respective recursive dependencies), you cannot, in one executable or in one address space, combine x86 and x86_64 binaries.
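A quick way to spot such a mismatch is file(1); the output below is illustrative and the file names are hypothetical:

$ file ./app libGLRend.so
./app:        ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked
libGLRend.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked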

Skipping incompatible error when linking

I am compiling on a 64-bit architecture with the Intel C compiler. The same code built fine on a different 64-bit Intel architecture.
Now when I try to build the binaries, I get a message "Skipping incompatible ../../libtime.a" or some such thing, indicating that the libtime.a I archived (from some object files I compiled) is not compatible. I googled, and it seems this is usually the result of a 32-to-64-bit changeover or something like that, but the Intel C compiler doesn't seem to support a -64 or some other memory option at compile time. How do I troubleshoot and fix this error?
You cannot mix 64-bit and 32-bit compiled code. Config instructions for Linux are here.
You need to determine the target processor of both the library and the new code you are building. This can be done in a few ways but the easiest is:
$ objdump -f ../../libtime.a otherfile.o
For libtime this will probably print out bunches of things, but they should all have the same target processor. Make sure that otherfile.o (for which you should substitute one of your object files) also has the same architecture.
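Typical output looks something like this (illustrative; the archive member names will differ):

$ objdump -f ../../libtime.a

In archive ../../libtime.a:

time.o:     file format elf32-i386
architecture: i386, flags 0x00000011:
HAS_RELOC, HAS_SYMS
start address 0x00000000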
gcc has the -m32 and -m64 flags for switching from the default target to a similar processor with the different register and memory width (commonly x86 and x86_64), which the Intel C compiler may also have.
If this has not been helpful then you should include the commands (with all flags) used to compile everything and also information about the systems that each command was being run on.
