tl;dr
I can't compile glibc for powerpc/mips/armeb/sparc. How can I test big-endian code without emulating a whole system?
Problem Description
I am currently trying to test some code to ensure that it works on a big-endian system (it doesn't) and fix any big-endian errors. Currently I am using qemu-system-ppc with a PowerPC Debian image, compiling within that image, running gdb there, etc. However, this is extremely inefficient, and I know there is a better way.
As such, I followed this tutorial to create a cross-compiler so I could compile my source for a big-endian system, then run the user-space qemu-??? to test it without the overhead of running the entire OS in emulated space (in particular, with the -g switch I could run gdb on the host, which is much faster).
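Concretely, the workflow I'm after looks roughly like this (a sketch - the port number is arbitrary, and I'm linking statically to avoid pointing qemu at a target sysroot):

powerpc-unknown-linux-gnu-gcc -g -static -o test test.c   # cross-compile on the little-endian host
qemu-ppc -g 1234 ./test                                   # run under user-mode qemu, waiting for gdb on port 1234
gdb-multiarch ./test -ex 'target remote :1234'            # in another terminal, attach a target-aware gdb

(gdb-multiarch is the Debian/Ubuntu package name; any gdb built for the target architecture should work.)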
Attempt #1:
First try was powerpc. At first I just ran configure with --target=powerpc, but realized I should use a full triplet and went with powerpc-unknown-linux-gnu. Unfortunately, when I get to the glibc portion, I run into the problem that my gcc does not support IBM 128-bit long doubles (it only supports IEEE). I googled around and there doesn't seem to be a fix (aside from updating gcc past v4.1, and I'm already at 6.2).
Attempt #2:
Let's try mips then; I have more experience with it anyway (from university). This makes it to the glibc portion again, but this time it complains that it can't determine the ABI. I tried all of -linux-gnu, -gnueabi, -eabi, and -none, but it couldn't determine the ABI for any of them.
Attempt #3:
Alright, time to try armeb. This time it gets past determining the ABI, then dumps a message saying that the target is not yet supported by glibc. Same story with sparc.
Question
Given that all of the above failed: how can I build a cross-compiler on my little-endian system, targeting a big-endian system, that generates executables I can run in user space with qemu?
I have the NetBSD 5.1 source and have compiled the kernel and userland from it. When I natively compile a sample C program that calls pthread_create() on ARM NetBSD 5.1, it crashes. The same program runs successfully on my Linux PC. I want to know: are pthreads supported on an ARM machine running NetBSD 5.1?
Note: other sample C programs natively compiled on the ARM machine run successfully.
It should work, I think. (I don't have an ARM system running 5.1 at the moment -- mine are running a pre-7.0 -current.)
If you can show some more details about the crash, e.g. stack backtraces from the debugger, then perhaps I or someone else can provide more help.
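For example, if the crash leaves a core dump (NetBSD normally names it after the program), getting a backtrace is straightforward (mythreadtest is a stand-in for your program's name):

gdb ./mythreadtest mythreadtest.core
(gdb) bt

It's also worth double-checking that the program was linked with -lpthread, i.e. something like cc -o mythreadtest mythreadtest.c -lpthread; calling pthread_create() from a binary that wasn't linked against libpthread is a classic source of odd failures.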
The question says it all. I need to cross-compile for a Cyrix CPU. The system the compiler (it doesn't have to be gcc) needs to run on is a 64-bit Kubuntu with an i5 processor. I couldn't find anything useful by googling, except for a piece of information saying that "Cx486DX is software-compatible with i486". So I ran
gcc -m32 -march=i486 helloworld.c -o helloworld486.bin
but executing helloworld486.bin on the Cyrix machine gives me a floating point exception. My knowledge of CPUs is rather limited and I'm out of ideas now; any help would be really appreciated.
Unfortunately you need more than just a compiler that generates instructions for the 486. The compiler libraries, as well as any libraries that are linked in statically, need to be suitable as well. The GCC version included in most current Linux distributions is able to generate 486-only object files (I think), but its libraries and stub objects (e.g. crtbegin.o) have been pre-built for 686 CPUs.
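As a quick check, you can disassemble the resulting binary and look for post-486 instructions; cmov, for example, was introduced with the 686:

objdump -d helloworld486.bin | grep cmov

If hits show up even though you compiled with -march=i486, they are coming from the pre-built startup files and libraries rather than from your own code.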
There are two main alternatives here:
Use a Linux build system that is itself compiled for the 486, either in a VM or in a chroot jail. Unfortunately, getting a modern Linux distribution for the 486 is a bit of an issue - every single major distribution has moved on. Perhaps a (much) older Linux distribution would be of help?
Create a full cross-compiler toolchain for the 486. You can then cross-compile separate versions of all needed libraries and have your build scripts use them. Quite honestly, ensuring that nothing from the (usually 686-based) build host slips through to the build result is not very easy. It often amounts to cross-compiling a whole Linux system from scratch, à la CLFS.
An automated cross-compiler toolchain build script, such as crosstool-ng, might be of help.
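For the record, the basic crosstool-ng flow looks like this (treat it as a sketch; the preconfigured sample names vary between versions):

ct-ng list-samples     # list the preconfigured example targets
ct-ng menuconfig       # adjust the configuration, e.g. set the target CPU to i486
ct-ng build            # build the complete toolchain, including libraries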
Could you add more details about your target system? Is it an embedded system or just an old PC? What OS is it using? Would it be possible to just run your compile in a VM with a version of the target OS?
I have the following setup: I need to program an embedded device whose spec says it runs Linux (although when you turn on the device, the small display clearly doesn't show anything Linux-related).
The embedded device has its own SDK.
Now, I thought of using Valgrind to check memory management/allocation.
Can I use Valgrind to check a program written for my device?
The problem I see is that the program might contain some device-specific SDK calls, so it might not run on the ordinary Fedora Linux I use on my desktop, for example.
What are my options?
Running Valgrind on embedded devices can be quite challenging, if not impossible.
What you can do is create unit tests and execute them under Valgrind on the host platform. That way you can at least check part of the code for memory problems.
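For example, assuming the tests build into an ordinary host executable (run_tests is a hypothetical name):

valgrind --leak-check=full --error-exitcode=1 ./run_tests

The --error-exitcode option makes Valgrind errors fail the run visibly, which is useful when the tests are driven by a script or CI job.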
Another option is to use platform emulation and run the programs in an emulator (again on the host system). QEMU is a well-known open-source emulator.
Perhaps.
Make sure you really run Linux, of course.
Figure out the hardware platform; Valgrind supports quite a few platforms but not all.
Consider whether your platform has resources (memory and CPU speed) to spare; running Valgrind is quite costly.
If all of those check out OK, then you should be able to run Valgrind, assuming you can get it onto the target machine. You might need to build and install it yourself, of course.
I assume you have some form of terminal/console access, e.g. over a serial port, telnet, or something similar that you can use to run programs on the target.
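Should you need to build it yourself, Valgrind uses autoconf, so a cross build looks roughly like this (the triplet and paths are assumptions; substitute your toolchain's):

./configure --host=arm-linux-gnueabi --prefix=/usr
make
make install DESTDIR=/tmp/valgrind-staging   # then copy the staged tree onto the target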
UPDATE: Based on feedback in comments, I'm starting to doubt the possibility for you to run Valgrind on your particular device.
I don't quite understand how the Linux kernel gets compiled when I install a Linux system on my machine.
Here are some things that confuse me:
The kernel is written in C, so how did the kernel get compiled without a compiler already installed?
If a C compiler was installed on my machine before the kernel was compiled, how did the compiler itself get compiled without a compiler installed?
This confused me for a couple of days, so thanks for any responses.
The first round of binaries for your Linux box were built on some other Linux box (probably).
The binaries for the first Linux system were built on some other platform.
The binaries for that computer can trace their roots back to an original system that was built on yet another platform.
...
Push this far enough, and you find compilers built with more primitive tools, which were in turn built on machines other than their host.
...
Keep pushing and you find computers built so that their instructions could be entered by setting switches on the front panel of the machine.
Very cool stuff.
The rule is "build the tools to build the tools to build the tools...". Very much like the tools which run our physical environment. Also known as "pulling yourself up by the bootstraps".
I think you should distinguish between:
compile, v: To use a compiler to process source code and produce executable code [1].
and
install, v: To connect, set up or prepare something for use [2].
Compilation produces binary executables from source code. Installation merely puts those binary executables in the right place so they can be run later. So installation and use do not require compilation if the binaries are already available. Think of “compile” and “install” as “cook” and “serve”, respectively.
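In command terms (a trivial illustration; hello.c is a stand-in):

gcc -o hello hello.c                     # "cook": compile source code into a binary executable
install -m 755 hello /usr/local/bin/     # "serve": put the existing binary where it can be run

The second step works on any machine that has the binary, whether or not a compiler is present.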
Now, your questions:
The kernel is written in C, so how did the kernel get compiled without a compiler already installed?
The kernel cannot be compiled without a compiler, but it can be installed from a compiled binary.
Usually, when you install an operating system, you install a pre-compiled kernel (a binary executable). It was compiled by someone else. Only if you want to compile the kernel yourself do you need the source, the compiler, and all the other tools.
Even in “source-based” distributions like Gentoo, you start by running a compiled binary.
So, you can live your entire life without compiling kernels, because you have them compiled by someone else.
If a C compiler was installed on my machine before the kernel was compiled, how did the compiler itself get compiled without a compiler installed?
A compiler cannot run if there is no kernel (OS). So one has to install a compiled kernel in order to run the compiler, but one need not compile that kernel oneself.
Again, the most common practice is to install compiled binaries of the compiler and use them to compile everything else (including the compiler itself and the kernel).
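GCC even makes this explicit in its own build process: the compiler already on your system builds a new one, which then rebuilds itself. Roughly:

./configure
make bootstrap   # stage 1 is built by the system compiler; stages 2 and 3 by the freshly built gcc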
Now, the chicken-and-egg problem: the first binary was compiled by someone else... See the excellent answer by dmckee.
The term describing this phenomenon is bootstrapping; it's an interesting concept to read up on. If you think about embedded development, it becomes clear that a lot of devices that require software, say alarm clocks, microwaves, or remote controls, aren't powerful enough to compile their own software. In fact, these sorts of devices typically don't have enough resources to run anything remotely as complicated as a compiler.
Their software is developed on a desktop machine and then copied once it's been compiled.
If this sort of thing interests you, an article that comes to mind off the top of my head is Reflections on Trusting Trust (pdf); it's a classic and a fun read.
The kernel doesn't compile itself -- it's compiled by a C compiler in userspace. In most CPU architectures, the CPU has a number of bits in special registers that represent what privileges the code currently running has. In x86, these are the current privilege level bits (CPL) in the code segment (CS) register. If the CPL bits are 00, the code is said to be running in security ring 0, also known as kernel mode. If the CPL bits are 11, the code is said to be running in security ring 3, also known as user mode. The other two combinations, 01 and 10 (security rings 1 and 2 respectively) are seldom used.
The rules about what code can and can't do in user mode versus kernel mode are rather complicated, but suffice to say, user mode has severely reduced privileges.
Now, when people talk about the kernel of an operating system, they're referring to the portions of the OS's code that get to run in kernel mode with elevated privileges. Generally, the kernel authors try to keep the kernel as small as possible for security reasons, so that code which doesn't need extra privileges doesn't have them.
The C compiler is one example of such a program -- it doesn't need the extra privileges offered by kernel mode, so it runs in user mode, like most other programs.
In the case of Linux, the kernel consists of two parts: the source code of the kernel, and the compiled executable of the kernel. Any machine with a C compiler can compile the kernel from the source code into the binary image. The question, then, is what to do with that binary image.
When you install Linux on a new system, you're installing a precompiled binary image, usually from either physical media (such as a CD or DVD) or from the network. The BIOS will load the (binary image of the) kernel and its bootloader from the media or network, and the installer will then copy the (binary image of the) kernel onto your hard disk. Then, when you reboot, the BIOS loads the kernel's bootloader from your hard disk, the bootloader loads the kernel into memory, and you're off and running.
If you want to recompile your own kernel, that's a little trickier, but it can be done.
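The broad strokes look like this (a sketch - details such as bootloader updates vary by distribution):

make menuconfig         # configure the kernel
make                    # compile it, using the user-mode C compiler
make modules_install    # install the compiled kernel modules
make install            # install the new kernel image for the bootloader to find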
Which came first: the chicken or the egg?
Eggs have been around since the time of the dinosaurs... some confuse everything by saying chickens are actually descendants of the great beasts. Long story short: the technology (the egg) existed prior to the current product (the chicken).
You need a kernel to build a kernel, i.e. you build one with the other.
The first kernel can be anything you want (preferably something sensible that can create your desired end product ^__^)
This tutorial from Bran's Kernel Development teaches you to develop and build a smallish kernel which you can then test with a Virtual Machine of your choice.
Meaning: you write and compile a kernel someplace, then load it into an empty (no OS) virtual machine.
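For example, if the tutorial kernel is multiboot-compliant, QEMU can load it directly, with no OS installed in the VM at all (kernel.bin being whatever your build produces):

qemu-system-i386 -kernel kernel.bin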
What happens with those Linux installs follows the same idea with added complexity.
It's not turtles all the way down. Just like you say, you can't compile an operating system that has never been compiled before on a system that's running that operating system. Similarly, at least the very first build of a compiler must be done on another compiler (and usually some subsequent builds too, if that first build turns out not to be able to compile its own source code just yet).
I think the very first Linux kernels were compiled on a Minix box, though I'm not certain about that. GCC was available at the time. One of the very early goals of many operating systems is to run a compiler well enough to compile their own source code. Going further, the first compiler was almost certainly written in assembly language. The first assemblers were written by those poor folks who had to write in raw machine code.
You may want to check out the Linux From Scratch project. You actually build two systems in the book: a "temporary system" that is built on a system you didn't build yourself, and then the "LFS system" that is built on your temporary system. The way the book is currently written, you actually build the temporary system on another Linux box, but in theory you could adapt it to build the temporary system on a completely different OS.
If I am understanding your question correctly, the kernel isn't "compiling itself" these days. Most Linux distributions today provide system installation through a Linux live CD. The kernel is loaded from the CD into memory and operates as it normally would if it were installed to disk. With a Linux environment up and running on your system, it is then easy to commit the necessary files to your disk.
If you were talking about the bootstrapping issue, dmckee summed it up pretty nicely.
Just offering another possibility...