As part of learning MIPS assembly, I want to cross compile some C source files with LCC and then disassemble them. I found a MIPS disassembler that runs on Windows, but it says in the description:
Disassembles pure memory dumps (raw code) and GCC object files
I know that for x86 there are multiple executable / object file formats depending on the target OS. Is this the case for MIPS? Do you think this will work? Or am I going to be stuck having to install a linux distro so that I can use one of the precompiled gcc MIPS toolchains like CodeSourcery?
It looks like lcc supports the -S compiler option, which emits the assembly output. Perhaps that would save you some effort?
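For example, something along these lines (assuming lcc is on your PATH and configured for a MIPS target; the file name is just a placeholder):
$ lcc -S program.c
That should leave the generated MIPS assembly in program.s, which you can read directly instead of disassembling object code.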
Executable formats are determined by the operating system and the compiler (which obviously must work hand in hand). If you decide you need to use gcc, http://faculty.cs.tamu.edu/bettati/Courses/410/2006C/Projects/gxemulcygwin.html appears to discuss a means of installing one under Win32 through Cygwin.
To answer the original question, Linux on MIPS uses ELF format files for executables and shared objects. Bare metal MIPS systems would likely use some form of memory dump.
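If you're not sure which kind of file you're looking at, the file utility will usually tell you (the output below is typical for a MIPS ELF object; the exact wording varies by version):
$ file module.o
module.o: ELF 32-bit MSB relocatable, MIPS, MIPS32 version 1 (SYSV), not stripped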
The objdump utility from the GNU CodeSourcery toolchain will disassemble ELF files and GCC .o files.
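For instance, with the CodeSourcery MIPS GNU/Linux toolchain installed, disassembly looks something like this (the mips-linux-gnu- prefix is the one that toolchain uses; adjust it to match yours):
$ mips-linux-gnu-objdump -d program.o
The -d flag disassembles only the sections expected to contain code; -D disassembles everything.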
Related
Considering that C is a systems programming language, how can I compile C code into raw x86 machine code that could be invoked without the presence of an operating system? (i.e., you can assume I have a boot sector that loads the raw machine code from disk into memory and then jumps directly to the first instruction.)
And now, for bonus points: Ideally, I'd like to compile using Visual Studio 2010's compiler because I've already got it. Failing that, what's the best way to accomplish the task, without having to install a bunch of dependencies or having to make large sweeping configuration changes across my entire system? I'd be compiling on Windows 7.
Usually, you don't. Instead, you compile your code normally, and then (either with the linker or some other tool) extract a raw binary from the object file.
For example, on Linux, you can use the objcopy tool to copy an object file to a raw binary file.
$ objcopy -O binary object.elf object.binary
First off, you don't use any library functions that require a system call (printf, fopen, read, etc.). Then you compile the C files normally. The major difference is the linker step: if you are used to letting the C compiler call the linker (or letting some GUI do it), you will likely need to take over that step manually in some form. The specific solution depends on your tools. You will need some bootstrap code (the small amount of assembly needed to cover the assumptions of C compilers and programmers and to launch the entry point of your C program), and a linker script or the right command-line options for the linker to control the address space for the binary as well as to link the objects together. Then, depending on the output format of the linker, you might have to convert the result to some other binary format (Intel hex, S-record, exe, com, COFF, ELF, raw binary, etc.) to be compatible with wherever it is going to be loaded or run.
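As a very rough sketch of that flow with a GNU toolchain (the file names, the 0x7C00 load address and the boot.s bootstrap file are placeholders; a real project would normally use a proper linker script instead of -Ttext):
$ gcc -m32 -ffreestanding -fno-pic -c main.c -o main.o
$ as --32 boot.s -o boot.o
$ ld -m elf_i386 -Ttext 0x7C00 --oformat binary boot.o main.o -o image.bin
If your linker build doesn't support --oformat binary, link to ELF as usual and then use objcopy -O binary as shown above.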
I am trying to use SSE4.2 intrinsics with clang/LLVM, but it's not compiling: I get a "cannot select intrinsic" error from LLVM. On the other hand, the same code compiles flawlessly with gcc. So I thought maybe I could compile that function with gcc, so as to have an object or library file, and then call that library function from my code, which is compiled by clang/LLVM. Would that work?
It's possible to compile an object file with GCC on Linux and convert it to work in Visual Studio. I did this recently, running Linux in VirtualBox on Windows (see converting-c-object-file-from-linux-o-to-windows-obj), so this should be possible with Clang on Linux or Windows as well.
So not only can this be done cross-compiler, it can be done cross-platform.
You need to get the calling conventions and the object file format right (and, for C++, the name mangling as well). With GCC you can tell the compiler which calling convention/ABI to use via -mabi. Then, if going from Linux to Windows, you need an object file converter to convert from e.g. ELF on Linux to COFF on Windows. Of course, there are cases where this probably won't work (e.g. if the module relies on a system call that exists on only one platform). See the link above for more details.
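As a sketch of that flow, using Agner Fog's objconv as the converter (I'm recalling the flag for 64-bit COFF output from memory, so double-check it against objconv's own documentation):
$ gcc -c -mabi=ms mymodule.c -o mymodule.o
$ objconv -fcoff64 mymodule.o mymodule.obj
The resulting mymodule.obj can then be added to a Visual Studio project like any other object file.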
For any more-or-less complicated C++ code, e.g. code that compiles to vtables, the answer is a resounding NO. The two are NOT compatible.
To illustrate the point, try compiling the Crypto++ library with g++ (which gains about a 40% speedup for AES/GCM) and then linking your clang++-compiled code with it.
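Concretely, the combination that fails looks something like this (the makefile target and paths may differ between Crypto++ versions):
$ make CXX=g++ static
$ clang++ -c app.cpp -o app.o
$ clang++ app.o -L. -lcryptopp -o app
The final link step is where the incompatibilities surface.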
It may or may not work. Some elements of the ABI can be expected to be the same; for example, I believe both g++ and clang use the Itanium ABI name-mangling scheme. Other elements cannot. So it depends on how complex the code you're compiling is.
Also, I would suggest opening an LLVM bug for the intrinsic that could not be selected. Clang and LLVM have a very active community, and it's possible someone will pick the bug up quickly.
I have an x86 development machine and am developing a kernel module for MIPS. I wanted to disassemble a routine to find problems with the module.
So my question is
"Can I disassemble it on x86 machine or I will have to get a MIPS development machine ?"
I tried it, but it disassembles in x86 instruction set.
You basically need some form of cross compilation. A cross compiler allows you to compile on a host machine (x86 in your case) for a target machine (MIPS in your case), so you would be able to generate MIPS binaries from your x86 machine. Moreover, you would also get all the other tools associated with the compiler, such as objdump. Here is a guide on how to build a GCC cross compiler.
Assuming you are using objdump to disassemble a binary, you may not need to build a full cross compiler. objdump belongs to binutils, and it may be possible to just build binutils with MIPS as the target (I have never tried to create a cross build of binutils, so I am not 100% sure).
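If you want to try the binutils-only route, the standard GNU configure flow would look roughly like this (the install prefix and target triplet are just examples):
$ cd binutils-src
$ ./configure --target=mips-linux-gnu --prefix=$HOME/cross
$ make && make install
$ $HOME/cross/bin/mips-linux-gnu-objdump -d module.ko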
EDIT: I just read the title again, and realized that you are using gdb. In that case I believe you would need to create a full cross compiler, and create a cross-platform version of gdb.
I have the GNU GCC compiler for Windows. I read that it is able to function as a cross compiler.
How do I do this? What command option(s) will produce a shared library that can be used by MacOS/Linux platforms?
You need to build your own cross-compiler, i.e. you need to get the GCC sources and compile them for the desired target architecture. Then you have to cross-compile all the libraries.
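In outline, the build looks something like this (heavily simplified; a real cross-compiler also needs the target platform's headers and C library, which typically forces the build to be done in several passes):
$ ../binutils-src/configure --target=x86_64-linux-gnu --prefix=/opt/cross
$ make && make install
$ ../gcc-src/configure --target=x86_64-linux-gnu --prefix=/opt/cross --enable-languages=c,c++
$ make && make install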
The process is fairly involved and lengthy. The usual GNU build machinery is pretty good at supporting this (through the build, host and target settings), but if possible you should leave this to a higher-level abstraction. crosstool is one such tool that comes to mind, but I don't know if it's available for Windows.
It's possible that you'll be able to find pre-built Windows binaries of GCC on the internet that target a particular architecture.
Recently I've been playing around with cross compiling using GCC and discovered what seems to be a complicated area: toolchains.
I don't quite understand this, as I was under the impression that GCC can create binary machine code for most of the common architectures, and that all that really matters beyond that is which libraries you link with and what type of executable is created.
Can GCC not do all these things itself? With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine?
If not can someone explain how you would achieve this?
With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine? If not can someone explain how you would achieve this?
No. A single build of GCC produces object code for one target architecture. You would need a build targeting Intel x86, a build targeting MIPS, and a build targeting PowerPC. However, the compiler is not the only tool you need, despite the fact that you can build source code into an executable with a single invocation of GCC. Under the hood, it makes use of the assembler (as) and linker (ld) as well, and those need to be built for the target architecture and platform. Usually GCC uses the versions of these tools from the GNU binutils package, so you'd need to build that for the target platform too.
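In practice you end up with several toolchains installed side by side, each under its own target prefix, and you invoke whichever one matches the target (the prefixes below are typical examples and vary between toolchain vendors):
$ i686-w64-mingw32-gcc hello.c -o hello.exe
$ mips-linux-gnu-gcc hello.c -o hello
$ powerpc-apple-darwin9-gcc hello.c -o hello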
You can read more about building a cross-compiling toolchain here.
I don't quite understand this as I was under the impression GCC can create binary machine code for most of the common architectures
This is true in the sense that the source code of GCC itself can be built into compilers that target various architectures, but you still require separate builds.
Regarding -march, this does not allow the same build of GCC to switch between platforms. Rather it's used to select the allowable instructions to use for the same family of processors. For example, some of the instructions supported by modern x86 processors weren't supported by the earliest x86 processors because they were introduced later on (such as extension instruction sets like MMX and SSE). When you pass -march, GCC enables all opcodes supported on that processor and its predecessors. To quote the GCC manual:
While picking a specific cpu-type will schedule things appropriately for that particular chip, the compiler will not generate any code that does not run on the i386 without the -march=cpu-type option being used.
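For example, with one and the same x86 build of GCC (only the instruction selection changes, not the target object format):
$ gcc -c foo.c
$ gcc -march=pentium4 -c foo.c
The first command produces conservative code that runs on any i386-class CPU; the second may emit SSE2 instructions and won't run on processors older than the Pentium 4.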
If you want to try cross-compiling, and don't want to build the toolchain yourself, I'd recommend looking at CodeSourcery. They have a GNU-based toolchain, and their free "Lite" version supports quite a few architectures. I've used it for Linux/ARM and Android/ARM.