What is armel and how does armel relate to ARM?

At http://talk.maemo.org/showthread.php?t=9081 I found that interpreting armel as little-endian ARM is wrong. But what, in that case, is armel?

In the context of Maemo and Debian architecture names, it refers to a binary-incompatible change in the ABI (the function-calling and return-value conventions) which necessitated a completely new port of Debian.
https://wiki.debian.org/ArmEabiPort will tell you far more about the differences than you ever wanted to know. The bottom line is that *_arm.deb and *_armel.deb are two incompatible ports, and *_armel.deb is 11 times faster when doing floating point. It also allows you to compile your own applications with hardware floating point (precisely, -mfloat-abi=softfp) and link them with the soft-float libraries in your generic distro to gain a further 3 to 7 times speed increase.
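As a concrete illustration of those float-ABI options, here is a minimal sketch (my own example; the cross-compiler names and the -mfpu value are assumptions that depend on your toolchain):

```c
/* mul.c - trivial floating-point workload to illustrate the float-ABI options.
 * Example compile commands (toolchain names and -mfpu value are assumptions):
 *   arm-linux-gnueabi-gcc   -mfloat-abi=soft              mul.c -o mul-soft
 *       # pure software floating point
 *   arm-linux-gnueabi-gcc   -mfloat-abi=softfp -mfpu=vfp  mul.c -o mul-softfp
 *       # uses the FPU for the math but keeps the soft-float calling
 *       # convention, so it still links against soft-float libraries
 *   arm-linux-gnueabihf-gcc -mfloat-abi=hard   -mfpu=vfp  mul.c -o mul-hard
 *       # passes FP arguments in FPU registers; needs a hard-float libc
 */
#include <stdio.h>

double scale(double x, double factor)
{
    return x * factor;   /* compiled to a VFP instruction with softfp/hard */
}

int main(void)
{
    printf("%f\n", scale(3.5, 2.0));
    return 0;
}
```

The softfp variant is the trick the answer above describes: the math itself runs on the FPU while the binary remains link-compatible with a soft-float distribution.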

It's ARM running in little-endian mode.

Related

Compiled on 32-bit and 64-bit but checksum is different

Two binaries were compiled: one on a 32-bit Windows 7 machine and the other on a 64-bit Windows 10 machine. All the source files and dependencies are the same; however, after compilation the checksums were compared and they are different. Could someone explain why?
If you're comparing a 32-bit build and a 64-bit build, keep in mind that the compiled code will be almost completely different. 64-bit x86_64 and 32-bit x86 machine code differ considerably, and even if they did not, pointers are twice the size in 64-bit code, so a lot of the code will be structured differently and addresses will be expressed as wider pointers.
The technical reason is that the machine code is not the same between different architectures. Intel x86 and x86_64 differ considerably in how the opcodes are encoded and in how programs are structured internally. If you want to read up on the differences, there is plenty of published material, such as Intel's own reference manuals.
As others have pointed out, building twice on the same machine may not even produce a byte-for-byte matching binary. There will be slight differences, especially if there's a code-signing step.
In short, you can't do this and expect them to match.
All the source files and dependencies are the same; however, after compilation the checksums were compared and they are different.
The checksum is calculated from the binary code that the compiler produces, not from the source code. If anything in that binary is different, you get a different checksum. Changes in the source code change the binary, and hence the checksum, but using a different compiler will also produce a different binary, as will using the same compiler with different options. Compiling the same program with the same compiler and the same options on a different machine might produce a different binary. Compiling the same program for different processor architectures will definitely produce a different binary.
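To make that last point tangible, here is a minimal sketch (my own example, not from the question) of a build that can never produce matching checksums, even with the same compiler on the same machine:

```c
/* stamp.c - demonstrates one common source of non-reproducible builds:
 * __DATE__ and __TIME__ are expanded by the preprocessor at compile time,
 * so two builds of this exact file produce different binaries and
 * therefore different checksums. */
#include <stdio.h>

int main(void)
{
    printf("built on %s at %s\n", __DATE__, __TIME__);
    return 0;
}
```

Build it twice a minute apart, hash the two executables, and the checksums differ even though nothing in the source changed; switching from a 32-bit to a 64-bit target changes far more than that.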

How are traps generated for floating point exceptions?

I want to know which code and files in the glibc library are responsible for generating traps for floating point exceptions when traps are enabled.
Currently, GCC for RISC-V does not trap floating point exceptions. I am interested in adding this feature. So, I was looking at how this functionality is implemented in GCC for x86.
I am aware that we can trap signals as described in this question (Trapping floating-point overflow in C), but I want to know more details about how it works.
I went through the files in glibc/math which, as far as I can tell, are in some way responsible for generating traps, such as
fenv.h
feenablxcpt.c
fegetexcept.c
feupdateenv.c
and many other files starting with fe.
All these files are also present in glibc for RISC-V, yet I am not able to figure out how glibc for x86 is able to generate traps.
These traps are usually generated by the hardware itself, at the instruction set architecture (ISA) level, in particular on x86-64.
I want to know which code and files in the glibc library are responsible for generating traps for floating point exceptions when traps are enabled.
So there is no such file. However, the operating system kernel (notably via signal(7) on Linux) translates those hardware traps into something else, typically a signal delivered to the process.
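To see that translation from userland, here is a minimal sketch (my own example, assuming Linux with glibc, where feenableexcept() declared in fenv.h is a GNU extension): it unmasks the FPU exception bits so the hardware trap reaches the process as SIGFPE.

```c
#define _GNU_SOURCE            /* feenableexcept() is a GNU extension */
#include <fenv.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_fpe(int sig)
{
    (void)sig;
    /* The FPU raised the exception; the kernel delivered it as SIGFPE. */
    write(STDERR_FILENO, "caught SIGFPE\n", 14);
    _exit(1);
}

int main(void)
{
    signal(SIGFPE, on_fpe);
    /* Unmask these exceptions in the FPU control word so they trap
       instead of silently setting status flags. */
    feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);

    volatile double zero = 0.0;
    double x = 1.0 / zero;     /* traps here instead of producing +Inf */
    printf("never reached: %f\n", x);
    return 0;
}
```

Compile with something like gcc fpe.c -o fpe -lm (older glibc versions keep feenableexcept in libm). Without the feenableexcept call, the division quietly produces infinity and only sets a status flag.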
Please read Operating Systems: Three Easy Pieces for more, and study the x86-64 instruction set in detail.
A more familiar example is integer division by zero. On most hardware, that produces a machine trap (or machine exception), handled by the kernel. On some hardware (IIRC, PowerPC), it gives -1 as a result and sets some bit in a status register. Further machine code could test that bit. I believe that the GCC compiler would, in some cases and with some optimizations disabled, generate such a test after every division, but it is not required to do that.
The C language (read n1570, which is practically the C11 standard) defines the notion of undefined behavior so that such situations can be handled as quickly and simply as possible. Read Lattner's blog post What Every C Programmer Should Know About Undefined Behavior.
Since you mention RISC-V, read about the RISC philosophy of the previous century, and be aware that designing out-of-order and superscalar processors requires a lot of engineering effort. My guess is that if you invested as much R&D (that means tens of billions of US$ or €) into a RISC-V chip as Intel (or, to a lesser extent, AMD) did on x86-64, you could get performance comparable to current x86-64 processors. Notice that SPARC and PowerPC (and perhaps ARM) chips are RISC-like, and their best processors are nearly comparable in performance to Intel chips, yet probably received ten times less R&D investment than what Intel put into its microprocessors.

Banana Pi & Buildroot

I'm trying to create a customized FS, and eventually a kernel, for my Banana Pi using Buildroot.
The thing is, I'm new to it, and the Banana Pi isn't part of the pre-made configurations.
My main problem is that I can't find the specific hardware specs I'm searching for.
My CPU is an Allwinner A20 SoC, which has an ARM architecture. But is it big- or little-endian?
1. What is the "Target ABI" ?
2. What is its "Floating point strategy" ?
Thanks for your answers!
For both, you can choose whatever you want, since Buildroot will only show you what's compatible with the ARM core you've selected (in the case of the Allwinner A20, the core is a Cortex-A7).
For the target ABI, you have the choice between EABI and EABIhf. Unless you need to use some pre-built binaries that were built with the EABI ABI, I would suggest using EABIhf, since it allows floating-point arguments to be passed directly in floating-point registers (rather than through integer registers), which makes calling floating-point functions slightly more efficient.
For the floating point strategy, use the highest VFPvX available for your ARM core.
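If you later compile something outside Buildroot and want it to match those choices, the flags look roughly like this (a sketch; the cross-compiler name is an assumption, and VFPv4/NEON is what a Cortex-A7 provides):

```c
/* abi_check.c - quick check that floating-point arguments really travel in
 * VFP registers under EABIhf. Example flags for the A20's Cortex-A7
 * (the cross-compiler name depends on your Buildroot toolchain):
 *
 *   arm-linux-gnueabihf-gcc -mcpu=cortex-a7 -mfloat-abi=hard \
 *       -mfpu=neon-vfpv4 -O2 -S abi_check.c
 *
 * In the generated abi_check.s the argument arrives in d0 (a VFP register);
 * with -mfloat-abi=softfp it would arrive in r0/r1 instead. */
double half(double x)
{
    return x * 0.5;
}
```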

Reverse engineering a firmware - what's up with every fourth byte?

So I decided to grab my tools and analyze a router firmware. It went pretty okay up to the point where I had to find segments manually. I won't bother you with that, and I really don't want to ask about hacking anything or ask anyone to do me a favor. But there is a pattern I'm sure someone could explain to me.
Looking at the hexdump, all I see is this:
There are strings that break the pattern but it goes all the way down almost to the end of the file.
What on earth can cause this pattern?
(if anyone's willing to help but needs more info: VxWorks 5.5.1 / probably ARM-9E CPU)
It is ARM. Go look at the ARM documentation and you will see that for the 32-bit (non-Thumb) ARM instructions, the top four bits are the condition code. The code 0b1110 means "always"; most of the time you don't use conditional execution, so most ARM instructions start with 0xE. That makes it very easy to pick out an ARM binary. The 16-bit Thumb instructions also have a similar pattern, but for different reasons, and if you add in Thumb-2 that changes somewhat...
That's just due to how ARM's opcodes are encoded, and it actually helps me "eyeball" a dump to see if it's ARM code.
I would suggest you go through part of the ARM Architecture Reference Manual to see how opcodes are encoded, particularly the conditionals. The 0xE appears when you want something to happen unconditionally.
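A quick way to apply that observation is to count how many 32-bit words carry condition code 0xE. Here is a minimal sketch (my own example; the default file name is made up), assuming a little-endian dump, where the condition code ends up in every fourth byte:

```c
#include <stdio.h>
#include <stdint.h>

/* Heuristic from the answer above: in 32-bit ARM (non-Thumb) code the top
 * four bits of each instruction are the condition code, and 0b1110 ("always")
 * is by far the most common, so most instruction words start with 0xE.
 * Counts how many 32-bit little-endian words in a file have that top nibble. */
int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "firmware.bin";  /* example name */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    uint8_t word[4];
    unsigned long total = 0, always = 0;
    while (fread(word, 1, 4, f) == 4) {
        total++;
        if ((word[3] & 0xF0) == 0xE0)   /* top nibble, i.e. every fourth byte */
            always++;
    }
    fclose(f);

    if (total)
        printf("%lu of %lu words (%.0f%%) have condition code 0xE\n",
               always, total, 100.0 * always / total);
    return 0;
}
```

If the ratio is high over a region, you are most likely looking at 32-bit ARM code, which is exactly the repeating-byte pattern visible in the hexdump.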

Transforming executables, objects or binaries between different architectures

Beforehand: this is just a nasty idea I had last night :-)
Think about the following scenario:
You have some arm-elf executable and for some reason you want to run it on your amd64 box without emulation.
To simplify the scenario, let's say we just want to deal with simple console applications which are only linked against libc and have no additional architecture-specific requirements.
If you want to transform binaries between different architectures you have to consider the following points:
Endianness of the architectures
bit-width of registers
functionality of different registers
Endianness should be one of the lesser problems.
If the bit-width of the destination registers is smaller than that of the source architecture, one could insert additional instructions to reproduce the same behaviour. The same applies to the functionality of the registers.
Finally (and before you bash this idea), have a look at the following simple code snippet and the corresponding disassembly of the objects.
C Code
Corresponding ARM Disassembly
Corresponding AMD64 Disassembly
In my opinion it should be possible to convert those objects between different architectures. Even function calls (like printf) could be mapped or wrapped to the destination architecture's libc.
And now my questions:
Has anyone already thought about realising this?
Is it actually possible?
Are there already some projects dealing with this issue?
Thanks in advance!
