How can I cross-compile C code for a Cyrix Cx486DX?

The question says it all. I need to cross-compile for a Cyrix CPU. The system the compiler (which doesn't have to be gcc) needs to run on is a 64-bit Kubuntu with an i5 processor. I couldn't find anything useful by googling, except for a piece of information saying that "Cx486DX is software-compatible with i486". So I ran
gcc -m32 -march=i486 helloworld.c -o helloworld486.bin
but executing helloworld486.bin on the Cyrix machine gives me a floating point exception. My knowledge about CPUs is rather limited and I'm out of ideas now; any help would be really appreciated.

Unfortunately you need more than just a compiler that generates instructions for the 486. The compiler's support libraries, as well as any libraries that are linked in statically, need to be suitable too. The GCC version included in most current Linux distributions can generate 486-only object files (I think), but its libraries and stub objects (e.g. crtbegin.o) have been pre-built for 686 CPUs.
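One quick way to check whether that is what is biting you is to disassemble the binary and the startup objects and grep for instructions the 486 lacks (cmov, for example, only appeared with the Pentium Pro). A rough sketch, assuming a multilib setup where the 32-bit crt1.o lives under /usr/lib32 (the path varies by distribution):
objdump -d helloworld486.bin | grep -E 'cmov|fcomi|nopl'
objdump -d /usr/lib32/crt1.o | grep -E 'cmov|fcomi|nopl'
Any hit means 686-only code got linked in, no matter which -march flag you compiled with.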
There are two main alternatives here:
Use a Linux build system that is compiled for 486 itself, either in a VM or in a chroot jail. Unfortunately getting a modern Linux distribution for the 486 is a bit of an issue - every single major distribution has moved on. Perhaps a (much) older Linux distribution would be of help?
Create a full cross-compiler toolchain for the 486. You can then cross-compile separate versions of all needed libraries and have your build scripts use them. Quite honestly, ensuring that nothing from the (usually 686-based) build host slips through to the build result is not very easy. It often amounts to cross-compiling a whole Linux system from scratch, à la CLFS.
An automated cross-compiler toolchain build script, such as crosstool-ng, might be of help.
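For reference, a crosstool-ng session might look roughly like this (the menu entries differ between versions; the i486 architecture level is chosen inside menuconfig):
ct-ng menuconfig    # select Target architecture: x86 and an i486 architecture level
ct-ng build         # builds binutils, gcc and a C library (into ~/x-tools by default)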
Could you add more details about your target system? Is it an embedded system or just an old PC? What OS is it using? Would it be possible to just run your compile in a VM with a version of the target OS?

Related

How to run executable file a.out created in my laptop gcc environment in other laptops?

I have written a program in C, compiled and executed with gcc. I want to share the executable file of the program without sharing the actual source code. Is there any way to share my program without revealing the actual source code, so that the executable file could run on other computers with gcc compilers?
Is there any way to share my program without revealing the actual source code, so that the executable file could run on other computers with gcc compilers?
TL;DR: yes, provided the other machines are similar in more respects than just having GCC. One simply copies the binary file and any needed auxiliary files to a compatible system and runs it.
In more detail
It is quite common to distribute compiled binaries without source code, for execution on machines other than the ones on which those binaries were built. This mode of distribution does present potential compatibility issues (as described below), but so does source distribution. In broad terms, you simply install (copy) the binaries and any needed supporting files to suitable locations on a compatible system and execute them. This is the manner of distribution for most commercial software.
Architecture dependence
Compiled binaries are certainly specific to a particular hardware architecture, or in certain special cases to a small, predetermined set of two or more architectures (e.g. old Mac universal binaries). You will not be able to run a binary on hardware too different from what it was built for, but "architecture" is quite a different thing from CPU model.
For example, there is a very wide range of CPUs that implement the x86_64 architecture. Most programs targeting that architecture will run on any such CPU. Indeed, the x86 architecture is similar enough to x86_64 that most programs built for x86 will also run on x86_64 (but not vice versa). It is possible to introduce finer-grained hardware dependency, but you do not generally get that by default.
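As a small illustration of that finer-grained dependency (flag behaviour as documented in the GCC manual):
gcc -O2 prog.c -o prog                  # default target: runs on any CPU of the build's architecture
gcc -O2 -march=native prog.c -o prog    # may use SSE4/AVX etc.; only safe on CPUs like the build host
The second binary can die with an illegal-instruction error on older CPUs.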
Operating system dependence
Furthermore, most binaries are built to run in the context of a host operating system. You will not be able to run a binary on an operating system too different from the one it was built for.
For example, Linux binaries do not run (directly) on Windows. Windows binaries do not run (directly) on OS X. Etc.
Library dependence
Additionally, a program built against shared libraries requires a compatible version of each required shared library to be available in the runtime environment. That does not necessarily have to be exactly the same version against which it was built; that depends on the library, on which of its functions and data are used, and on whether and how those changed over time.
You can sidestep this issue by linking every needed library statically, up to and including the C standard library, or by distributing shared libraries along with your binary. It's fairly common to just live with this issue, however, and therefore to support only a subset of all possible environments with your binary distribution(s).
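A minimal sketch of the fully static approach with GCC:
gcc -static prog.c -o prog    # folds libc and every other needed library into the binary
ldd prog                      # should report "not a dynamic executable"
Be aware that static binaries are larger, and that glibc still wants to load some pieces (e.g. NSS modules) dynamically, so test the result on a target-like system.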
Other
There is a veritable universe of other potential compatibility issues, but it's unlikely that any of them would catch you by surprise with respect to a program that you wrote yourself and want to distribute. For example, if you use nVidia CUDA in your program then it might require an nVidia GPU, but such a requirement would surely be well known to you.
Executables are often specific to the environment/machine they were created on. Even if the same processor/hardware is involved, there may be dependencies on libraries that prevent executables from simply running on other machines.
A program that uses only "standard libraries" and that links all libraries statically does not need any other dependency (in the sense that all the code it needs is in the binary itself or in OS libraries that, being part of the system itself, are already on the system).
You have to link the standard library statically. Otherwise it will only work if the version of the standard library for your compiler is installed in your OS by default (which you can't rely on, in general).

Cross Toolchain for ARM U-Boot Build Questions

I'm trying to build my own toolchain for a Raspberry Pi.
I know there are plenty of prebuilt toolchains; this work is for educational reasons.
I'm following the Embedded ARM Linux From Scratch book
and have succeeded in building gcc and uClibc so far.
I'm building for the target arm-unknown-linux-eabi.
Now that it comes to preparing a bootable filesystem, I'm wondering about the bootloader build.
The part about the bootloader for this system seems to be incomplete.
Now I'm wondering how to build a U-Boot for this system with my arm-unknown-linux-eabi toolchain.
Do I need to build a toolchain which doesn't depend on Linux kernel calls?
My first research led me to the conclusion that there are two separate kinds of toolchains: the OS-dependent ones (Linux kernel sys-calls etc.) and the ones which don't need a kernel underneath, sometimes referred to as "bare-metal" or "standalone" toolchains.
Some sources mention that it would be possible to build a U-Boot with the Linux toolchain.
If this is true, why and how does this work?
And if I have to build a second, "bare-metal" toolchain, where can I find information about the differences between the two? Do I need another libstdc?
You can build U-Boot with the same cross-toolchain used to build the kernel, and most probably the rest of the user-space of the system.
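A sketch of a typical invocation (the defconfig name is illustrative; pick the one matching your board, and note that very old U-Boot releases used a different configuration mechanism):
make CROSS_COMPILE=arm-unknown-linux-eabi- rpi_defconfig
make CROSS_COMPILE=arm-unknown-linux-eabi- -j4
CROSS_COMPILE is simply the common prefix of your toolchain's binaries (arm-unknown-linux-eabi-gcc and friends).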
A bootloader is, by definition, self-contained: it doesn't care about your choice of C runtime library because it doesn't use one. Therefore the issue of sys-calls doesn't come into it.
A toolchain is always going to need to be hosted by a fully functioning development system, invariably not your target system. Whatever references you see to a 'bare-metal toolchain' are not referring to the compiler's own use of sys-calls (the compiler itself relies heavily on the host operating system for I/O). What is important when building bootloaders and kernels is that the compiler and linker are configured to produce statically linked code that can run at a specific memory address.
In almost all possible ways, there is no difference between the embedded and the Linux toolchain. But there is one exception.
That exception is __clear_cache - a function that can be generated by the compiler and in a "Linux"-toolchain includes a system call to synchronize instruction and data caches. (See http://blogs.arm.com/software-enablement/141-caches-and-self-modifying-code/ for more information about that bit.)
Now, unless you explicitly add a call to that function, the only way I know for it to be invoked is by writing nested functions in C (a GCC extension that should be avoided).
But it is a difference.
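For illustration, this is the kind of (discouraged) code that gets you there: taking the address of a nested function makes GCC build a trampoline on the stack, and keeping the instruction cache coherent with that freshly written trampoline is what __clear_cache is for on architectures like ARM. A sketch of the GNU C extension:
#include <stdio.h>

static int apply(int (*f)(int), int x) { return f(x); }

int main(void) {
    int base = 10;
    /* nested function: a GNU C extension, not ISO C */
    int add_base(int v) { return v + base; }
    printf("%d\n", apply(add_base, 5));  /* prints 15 */
    return 0;
}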

Cross Compiling On Windows?

I have the GNU GCC compiler for Windows. I read that it is able to function as a cross compiler.
How do I do this? What command option(s) will produce a shared library that can be used on macOS/Linux platforms?
You need to build your own cross-compiler, i.e. you need to get the GCC sources and compile them with a desired target-architecture. Then you have to cross-compile all the libraries.
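The configure step for GCC itself looks roughly like this (triplet, paths and options are illustrative only):
../gcc-src/configure --target=powerpc-linux-gnu --prefix="$HOME/cross" --enable-languages=c
make && make install
In practice you also need binutils built for the same --target first, plus the target's headers and C library, which is why higher-level tools exist for this.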
The process is fairly involved and lengthy. The usual GNU build system is pretty good at supporting this (through the --build, --host and --target configure options), but if possible you should leave it to a higher-level abstraction. crosstool is one such tool that comes to mind, but I don't know if it's available for Windows.
It's possible that you'll be able to find pre-built Windows binaries of GCC on the internet that target a particular architecture.

General questions about GCC and cross compiling

Recently I've been playing around with cross-compiling using GCC and discovered what seems to be a complicated area: toolchains.
I don't quite understand this, as I was under the impression GCC can create binary machine code for most of the common architectures, and all else that really matters is what libraries you link with and what type of executable is created.
Can GCC not do all these things itself? With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine?
If not can someone explain how you would achieve this?
With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine? If not can someone explain how you would achieve this?
No. A single build of GCC produces object code for one target architecture. You would need a build targeting Intel x86, a build targeting MIPS, and a build targeting PowerPC. However, the compiler is not the only tool you need, despite the fact that you can build source code into an executable with a single invocation of GCC. Under the hood, it makes use of the assembler (as) and linker (ld) as well, and those need to be built for the target architecture and platform. Usually GCC uses the versions of these tools from the GNU binutils package, so you'd need to build that for the target platform too.
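You can check which target a given GCC build was configured for, and watch it drive the assembler and linker, with something like:
gcc -dumpmachine          # prints the configured target triplet, e.g. x86_64-linux-gnu
gcc -v hello.c -o hello   # shows the cc1, as and collect2/ld invocations under the hood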
You can read more about building a cross-compiling toolchain here.
I don't quite understand this as I was under the impression GCC can create binary machine code for most of the common architectures
This is true in the sense that the source code of GCC itself can be built into compilers that target various architectures, but you still require separate builds.
Regarding -march, this does not allow the same build of GCC to switch between platforms. Rather it's used to select the allowable instructions to use for the same family of processors. For example, some of the instructions supported by modern x86 processors weren't supported by the earliest x86 processors because they were introduced later on (such as extension instruction sets like MMX and SSE). When you pass -march, GCC enables all opcodes supported on that processor and its predecessors. To quote the GCC manual:
While picking a specific cpu-type will schedule things appropriately for that particular chip, the compiler will not generate any code that does not run on the i386 without the -march=cpu-type option being used.
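In other words (an illustrative pair of invocations, behaving as the manual describes):
gcc -mtune=pentium4 prog.c -o prog    # scheduled for a Pentium 4, but still runs on an i386
gcc -march=pentium4 prog.c -o prog    # may emit SSE2 etc.; requires a Pentium 4 or later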
If you want to try cross-compiling, and don't want to build the toolchain yourself, I'd recommend looking at CodeSourcery. They have a GNU-based toolchain, and their free "Lite" version supports quite a few architectures. I've used it for Linux/ARM and Android/ARM.

How do I cross-compile C code on Windows for a binary to also be run on Unix (Solaris/HPUX/Linux)?

I've been looking into Cygwin/MinGW/lcc, and I'd like to be able to compile Perl native C extensions on my Windows machine (preferably under Cygwin) and then run them on Solaris and HP-UX without any further fuss. Is this possible?
This all stems from my original perl cross-platform question here.
(This is a very old question, but missing some useful info -- I've personally done this for Solaris (SPARC & x86), AIX, HP-UX and Linux (x86, x64).)
Getting C++ cross-compiled is much harder than straight C.
HP-UX 32-bit PA-RISC is not supported because it uses SOM format instead of ELF and binutils doesn't (and likely won't ever) support SOM. In other words, you can only cross-compile 64-bit PA-RISC. (Requires PA-RISC 2.0 chip.)
I would go with MinGW instead of Cygwin, if you can. Cygwin introduces a lot of file-permission headaches and cygwin1.dll dependencies that can be troublesome. If possible, however, build on Linux. Everything will be much faster, because all the tools and scripts you're running are designed for an environment where exec and stat are fast operations. Windows + NTFS is not that environment.
Start with the crosstools script, but be prepared to spend a lot of time on this.
Try with the very latest gcc/binutils first, but if you can't overcome problems, try dropping back to older packages. E.g. for POWER3 (AIX), the gcc 4.x series cross-compiler generates bad code; 3.x is fine.
When copying native libs and headers make sure you are copying from the oldest machine you're likely to run on. Copying a new libc means your code won't run on any machine with an older libc.
When copying native libs and headers you probably want 'tar -h' to turn symlinks into actual files; also watch out that on Solaris some requisite crt object files are buried in a cc directory, not under /usr/lib.
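Those last two tips combine into something like the following, run on the oldest target box you support (the paths are typical, not universal):
tar -chf sysroot.tar /usr/include /usr/lib    # -h turns symlinks into real files
and back on the build host:
mkdir -p "$HOME/cross/sysroot" && tar -xf sysroot.tar -C "$HOME/cross/sysroot"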
Cross-compilers are very hard to set up and get working correctly.
Consider that (the people at) NetBSD have to put in a huge amount of work to get cross-compiling to work, and they're running the same OS, just different architectures.
You'd have to, at least, copy all the headers from the other OSs to Windows, and get a cross-compiler, linker etc for the target OS/architecture.
Also, that may well not be possible: Perl and its shared libraries may have been compiled with a native/non-gcc compiler which won't be available on Windows at all.
I agree with Douglas that getting a cross-compiler up and working is very hard to do. This is generally your choice of last resort. If you are bootstrapping, or making a binary for an embedded device, then cross-compiling is often your only option. You should be comfortable compiling your own gcc under Cygwin before considering cross-compiling. To cross-compile, you need to build a gcc that runs under Windows but creates binaries for your execution platform. Sample instructions for doing this can be found here.
Perhaps you want to cross-compile because you don't have root and/or can't compile on your target platform. For example, I had a hosting provider which ran Red Hat Linux. I could run Perl CGI scripts and associated modules, but I could not compile on the target machine, and any libraries I built had to exist in my own directory.
To solve this, I could have attempted to cross-compile for my target platform, but instead I decided to set up a similar host inside a VM on Windows. From within Cygwin, you can create a script which ssh's into your VM, copies your source, and does a full configure/build. The last step was to deploy the binary artifact onto my hosted system.
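A minimal sketch of such a script (hostname, user and paths are hypothetical):
#!/bin/sh
# Push the sources to the build VM, build there, and pull the artifact back.
ssh build@solaris-vm 'mkdir -p /tmp/build'
scp -r src build@solaris-vm:/tmp/build/
ssh build@solaris-vm 'cd /tmp/build/src && perl Makefile.PL && make'
mkdir -p artifacts
scp build@solaris-vm:/tmp/build/src/blib/arch/auto/MyExt/MyExt.so ./artifacts/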
I've successfully had both Solaris 10 and OpenSolaris running within a VM on Windows. Unfortunately, you might have a harder time running HP-UX in a VM.
Why don't you read up on the "Grand Unified Builder" (http://lilypond.org/gub/ and http://valentin.villenave.info/The-LilyPond-Report-11, section #4)?
I don't know how it works, but GUB allows the LilyPond developers to compile for about 11 platforms on a Linux box.
Compile on Windows, then use Wine to run them on any *nix. It works well most of the time.
No, this isn't possible at the binary level. There are too many differences at the binary level between the various OSes and CPUs.
But what you can do is make your C extensions source-compatible so that they can be compiled for different platforms. C was designed as a "portable assembly language". As long as you stick to routines that are cross-platform, they will usually work the same. You'll still need to test, because there could be bugs that exist only on a particular platform.
This can't be done ... but is it that much of a hassle to recompile the code under Solaris or HP-UX?
