Detecting Thumb-2 instruction and location of PC offset - arm

I'm kinda new to ARM and I am trying to understand how instructions are interpreted/executed:
From what I know, on ARM it's quite simple, since every instruction takes up 4 bytes and everything is aligned on 4-byte boundaries.
The problem comes with Thumb-2, where instructions can be either 16 or 32 bits long. I've read that to determine whether the current instruction is 16 or 32 bits long, the processor reads a word (32 bits) and evaluates bits [15:11] of the first halfword. If those bits are 0b11101/0b11110/0b11111, then that halfword is the first halfword of a 32-bit instruction; otherwise it's a 16-bit instruction (I don't quite get why those specific bits determine that). So an example should be:
0x4000 16-bit
0x4002 32-bit
0x4006 16-bit
0x4008 16-bit
0x400a 32-bit
Then the processor should grab from 0x4000 to 0x4004, evaluate the first halfword (0x4000 to 0x4002), and if the instruction is 16-bit it just moves to the next halfword and repeats the process, but if the halfword indicates a 32-bit instruction it skips the next halfword and executes the whole 32-bit instruction?
Also, I'm confused about where the PC points in Thumb-2. Is it still two instructions ahead?

Most of us don't/won't know exactly how it is implemented in the logic (and there are various cores, so each could be different). But what used to be undefined instructions became Thumb-2 extensions: a couple dozen in ARMv6-M, then something like 150 new ones in ARMv7-M.
Think of the processor fetching 16-bit instructions, and sometimes it runs across a variable-length one. Just like other variable-length processors: x86 will look at the one-byte opcode, and based on that it may or may not need to look at the next byte, and so on until it has resolved the whole instruction. Same here: the processor looks at a halfword and determines whether it has everything it needs; if not, it grabs the next halfword for the rest of the information.
0x4000 16-bit
0x4002 32-bit
0x4006 16-bit
0x4008 16-bit
0x400a 32-bit
The processor grabs 0x4000, sees it has what it needs, executes. The processor grabs 0x4002, sees it needs another halfword, grabs 0x4004, executes. The processor grabs 0x4006, has what it needs, executes. Grabs 0x4008, has what it needs, executes. Grabs 0x400A, sees it needs another halfword, grabs 0x400C, executes.
Those bit patterns were formerly undefined instructions; now they are part of the definition of a variable-length instruction. Just like instructions that start with 0b010000 are data-processing instructions, and to determine whether it is an ADD or an XOR you have to look at other bits. These bit patterns mark Thumb-2 extensions, and other bits in those two halfwords define what the full instruction is.
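A minimal sketch of that length check in C (just the bit test the architecture manual describes, not how any particular core implements it; the function name is mine):
#include <stdint.h>
/* Returns 1 if 'hw' is the first halfword of a 32-bit Thumb-2 encoding,
   0 if it is a complete 16-bit instruction. A halfword whose bits
   [15:11] are 0b11101, 0b11110 or 0b11111 starts a 32-bit encoding. */
int is_32bit_thumb(uint16_t hw)
{
    uint16_t top5 = hw >> 11;              /* bits [15:11] */
    return top5 == 0x1D || top5 == 0x1E || top5 == 0x1F;
}
A decoder walking memory just grabs a halfword and, when this returns 1, grabs one more halfword before decoding.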
Why these bit patterns? You can think of it as arbitrary if you want; for every instruction set, someone (or some group) sat down and decided what bit patterns were going to mean what, and it's no different here. There was room in the instruction-set space with certain patterns, so those were used. It is not uncommon to add instructions later in the life of a processor family; take x86 for example. Plus many others: for an instruction set with byte-sized opcodes like x86, or an 8-bitter like the 6502, you can either consume an unused byte/opcode as your next new instruction, or you take that formerly unused byte/opcode and expand it into many more, for example that byte now means "look at the next byte", and the next byte could select up to 256 new instructions, or it could simply supplement the first byte, specifying registers or operations, etc. No different here: down the road ARM extended the Thumb instruction set, some percentage of the instruction is consumed indicating that this is a variable-length instruction, but of those 32 bits there still remain quite a few to allow for a larger instruction with more options (at the cost of losing the one-to-one relationship between Thumb and ARM instructions: all Thumb instructions, excluding the Thumb-2 extensions, map directly onto a full-sized ARM instruction).
Each core is different; they don't all fetch a word at a time. Thumb-2 extensions don't have to be word-aligned, so a whole Thumb-2 instruction won't necessarily fit in an aligned word fetch on the processors that do word fetches. Think of the (pre)fetcher and decoder as two separate things, since they are; functionally the decoder takes 16 bits at a time in Thumb mode. How is it specifically implemented? I don't know. Do they wait for two halfwords to be ready before decoding the first? I don't know. Is every implementation the same? I don't know; I would expect not. As far as fetching goes they are not all the same, as you can see in the ARM documentation, and for at least one core, if not more, the chip vendor can choose the fetch width at (chip) compile time.
If you are coming from, say, a MIPS-based textbook and trying to understand other processors, this can be confusing. Understand that those textbooks and terms are there for understanding and vocabulary; real pipelines are not that depth in general, and you don't fetch whole instructions one at a time in general (x86 does not fetch one byte at a time, it fetches MANY instructions' worth at a time). RISC-V has an even worse version of this problem than ARM and MIPS: you can have 16-bit compressed instructions and 32-bit instructions (with the encoding leaving room for even longer ones), and the 32-bit instructions do not have to be aligned on RISC-V, so fetching 32 bits at a time doesn't guarantee you a whole instruction. The fetcher is separate from the decoder; once enough has arrived, the decoder can complete.
I want to say that in Thumb the PC is two halfwords ahead (whether or not the instruction is a Thumb-2 extension), so PC+4. Should be easy to check though:
Disassembly of section .text:
00000000 <hello-0xe>:
0: e005 b.n e <hello>
2: bf00 nop
4: bf00 nop
6: f000 b802 b.w e <hello>
a: bf00 nop
c: bf00 nop
0000000e <hello>:
e: bf00 nop
Yes, so two Thumb-sized halfwords ahead (PC+4) in both cases. It would be significantly more complicated if it were two instructions ahead, which is how it used to be described to make it easy to remember. If it were two instructions ahead, then it would sometimes be PC+4, sometimes PC+6, and sometimes PC+8, and the logic would have to decode two instructions in order to know how the PC was offset for the first of the two. So sticking with PC+4, as it has always been for Thumb mode, is the sane way to do it.
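Working the first branch above by hand (a sketch of the 16-bit unconditional branch encoding, assuming I have the field layout right): 0xE005 carries an 11-bit immediate of 0x005, which is sign-extended, doubled, and added to the PC, where the PC reads as the instruction's address plus 4.
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    uint32_t addr  = 0x0;                   /* address of the b.n above      */
    uint16_t insn  = 0xE005;                /* b.n <hello>                   */
    int32_t  imm11 = insn & 0x7FF;          /* low 11 bits of the encoding   */
    int32_t  off   = (imm11 << 21) >> 20;   /* sign-extend and multiply by 2 */
    printf("target = 0x%x\n", addr + 4 + off);  /* prints 0xe, matching objdump */
    return 0;
}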

Related

Disassembly of a mixed ARM/Thumb2 ELF file

I'm trying to disassemble an ELF executable which I compiled using arm-linux-gnueabihf to target Thumb-2. However, the ARM instruction encoding is confusing me while debugging my disassembler. Let's consider the following instruction:
mov.w fp, #0
I disassembled it using objdump and Hopper as a Thumb-2 instruction. The instruction appears in memory as 4ff0000b, which means that it's actually 0b00f04f (little endian). Therefore, the binary encoding of the instruction is:
0000 1011 0000 0000 1111 0000 0100 1111
According to the ARM Architecture Reference Manual, it seems like ALL 32-bit Thumb-2 instructions should start with 111[10|01|11]. Therefore, the above encoding doesn't correspond to any Thumb-2 instruction. Further, it doesn't match any of the encodings found in section A8.8.102 (page 484).
Am I missing something?
I think you're missing the subtle distinction that wide Thumb-2 encodings are not 32-bit words like ARM encodings, they are a pair of 16-bit halfwords (note the bit numbering above the ARM ARM encoding diagram). Thus whilst the halfwords themselves are little-endian, they are still stored in 'normal' order relative to each other. If the bytes in memory are 4ff0000b, then the actual instruction encoded is f04f 0b00.
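A little-endian reader that respects that layout might look like this sketch (a hypothetical helper, just to illustrate the halfword-pair ordering):
#include <stdint.h>
/* Read one Thumb/Thumb-2 instruction starting at 'p'. Each halfword is
   little-endian, but the first halfword of a 32-bit encoding still comes
   first in memory. For the bytes 4f f0 00 0b this yields 0xF04F then
   0x0B00, i.e. f04f 0b00 (mov.w fp, #0), not the word 0x0B00F04F. */
uint32_t read_thumb(const uint8_t *p, int *is32)
{
    uint16_t hw1 = p[0] | (p[1] << 8);       /* first halfword  */
    if ((hw1 >> 11) < 0x1D) {                /* 16-bit encoding */
        *is32 = 0;
        return hw1;
    }
    uint16_t hw2 = p[2] | (p[3] << 8);       /* second halfword */
    *is32 = 1;
    return ((uint32_t)hw1 << 16) | hw2;      /* hw1:hw2 form, as the ARM ARM prints it */
}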
Thumb-2 instructions are extensions to the Thumb instruction set: formerly undefined instructions, some of which are now defined. ARM is a completely different instruction set. If the toolchain has not left you clues as to which code is Thumb vs ARM, then the only way to figure it out is to start with an assumption at an entry point and disassemble in execution order from there, and even then you might not figure out some of the code.
You cannot distinguish ARM instructions from Thumb or Thumb+Thumb-2 extensions simply by bit pattern. Also remember that ARM instructions are aligned on 4-byte boundaries while Thumb instructions are aligned on 2-byte boundaries, and a Thumb-2 extension doesn't have to sit within the same 4-byte boundary as its leading halfword, making this all that much more fun. (Thumb+Thumb-2 is a variable-length instruction set made from multiples of 16-bit values.)
If all of your code is Thumb and there are no ARM instructions in there, then you still have the problem you would have with any variable-length instruction set, and to do it right you have to walk the code in execution order. For example, it would not be hard to embed a data value in .text that looks like the first half of a Thumb-2 extension and follow it with a real Thumb-2 extension, causing your disassembler to go off the rails. An elementary variable-word-length disassembly problem (and an elementary way to defeat simple disassemblers).
Take 16-bit words A, B, C, D.
If C + D are a Thumb-2 instruction (which is known by decoding C), A is, say, a Thumb instruction, and B is a data value which resembles the first half of a Thumb-2 extension, then linearly decoding through RAM: A is the Thumb instruction, B and C get decoded as a Thumb-2 extension, and D, which is actually the second half of a Thumb-2 extension, is now decoded as the first 16 bits of an instruction. All bets are off as to how that decodes, or whether it causes many of the following instructions to be decoded wrong as well.
So start off by looking to see whether the ELF tells you something. If not, then you have to make passes through the code in execution order (you have to make an assumption as to an entry point), following all the possible branches and the linear flow, marking 16-bit blocks as either the first or an additional halfword of an instruction (a sketch follows below). The unmarked blocks cannot necessarily be determined to be instruction vs data, and care must be taken.
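A bare-bones sketch of that marking pass in C (linear flow only; a real pass would also queue the target of every branch it decodes, stop at unconditional branches and returns, and deal with literal pools):
#include <stddef.h>
#include <stdint.h>
enum mark { UNKNOWN = 0, FIRST_HALF, SECOND_HALF };
/* Walk 'n' halfwords of Thumb code from halfword index 'entry', marking
   each halfword as the first or the additional half of an instruction.
   Whatever is still UNKNOWN after all passes may be data, or simply
   unreachable from the assumed entry point. */
void mark_execution_order(const uint16_t *code, size_t n,
                          size_t entry, uint8_t *marks)
{
    size_t i = entry;
    while (i < n) {
        marks[i] = FIRST_HALF;
        if ((code[i] >> 11) >= 0x1D) {       /* first half of a 32-bit encoding */
            if (i + 1 < n)
                marks[i + 1] = SECOND_HALF;
            i += 2;
        } else {
            i += 1;                          /* complete 16-bit instruction */
        }
    }
}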
And yes, it is possible to play other games to defeat disassemblers, such as intentionally branching into the second half of a Thumb-2 instruction which is hand-crafted to also be a valid Thumb instruction or the beginning of another Thumb-2.
Fixed-length instruction sets like ARM and MIPS you can decode linearly; some data decodes as strange or undefined instructions, but your disassembler doesn't go off the rails and fail to do its job. With variable-length instruction sets, disassembly at best is just a guess... the only way to truly decode is to follow the instructions the same way the processor would.

Why is PC still 32 bits in Thumb mode on ARM processor?

As Thumb mode can reference 16 bits of the general registers (r1-r14), why is the PC (r15) still 32-bit? Will b Label in this mode update all 32 bits of the PC, or just the lower 16 bits?
Despite the question being based on an entirely incorrect premise, there is a cheeky sort-of-answer to the title question taken literally - being in Thumb mode means you're on a v4T or later architecture which means you must have a full 32 bits of PC, rather than the pre-v3 architectures where it was only 26 bits.
I'm not sure where you got the idea that Thumb operates on 16 bits of a register - the restriction with most Thumb instructions is that they can only access the "low registers" r0-r7, due to limited encoding space. There are 3 different "32-bit"s at play in ARM - the register width, the address space (which wasn't always the case, as mentioned above), and the size of the fixed-width instruction encoding. Thumb only changes the third one. One result of this size reduction is that most Thumb encodings only have 3 bits per operand to encode register numbers - only a handful of instructions can spare the extra bits to encode the "high registers" r8-r15.
In operation, Thumb instructions are no different to the equivalent subset of ARM instructions - Thumb is just an alternative fetch/decode stage on the front of the same pipeline, after all.
Thumb (v1) instructions are 16 bits in size, but what does that have to do with the size of the registers? (Nothing.) The registers, including r15, are 32 bits all the time. Branches with immediate values are simply offsets, not absolute addresses.
This is all clearly documented in the ARM Architecture Reference Manuals at infocenter.arm.com

ARM Thumb/Thumb-2 performance

I am working on an ARM Cortex-M3 controller which has the Thumb-2 instruction set.
Thumb mode is used to compress instructions to a 16-bit size, so the size of the code is reduced.
But with normal Thumb mode, why is it said that performance is reduced?
In case of Thumb-2, it is said performance is improved as per these two links:
Wikipedia
Arm.com
Improve performance in cases where a single 16-bit instruction restricts functions available to the compiler.
A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory.
What exactly is this performance improvement? Can someone give a few examples related to it?
When compared against the ARM 32-bit instruction set, the Thumb 16-bit instruction set (not talking about Thumb-2 extensions yet) takes less space because the instructions are half the size, but there is a performance drop, in general, because it takes more instructions to do the same thing as on ARM. There are fewer features in the instruction set, and most instructions only operate on registers r0-r7. Apples to apples, more instructions to do the same thing is slower.
Now the Thumb-2 extensions take formerly undefined Thumb instructions and use them to create 32-bit Thumb instructions. Understand that there is more than one set of Thumb-2 extensions. ARMv6-M adds a couple dozen perhaps; ARMv7-M adds something like 150 instructions to the Thumb instruction set; I don't know what ARMv8 or the future holds. So assuming ARMv7-M, they have bridged the gap between what you can do in Thumb and what you can do in ARM. So Thumb-2 is a reduced ARM instruction set as Thumb is, but not as reduced. It might still take more instructions to do the same thing in Thumb-2 (plus Thumb) compared to ARM doing the same thing.
This gives a taste of the issue: a single instruction in ARM and its equivalent in Thumb.
ARM
and r8,r9,r10
THUMB
push {r0,r1}
mov r0,r8
mov r1,r9
and r0,r1
mov r1,r10
and r0,r1
mov r8,r0
pop {r0,r1}
Now, a compiler wouldn't do that; the compiler would know it is targeting Thumb and do things differently by choosing other registers. You still have fewer registers and fewer features per instruction:
mov r0,r1
and r0,r2
It still takes two instructions/execution cycles to AND two registers together, without modifying the operands, and put the result in a third register. Thumb-2 has a three-register AND, so you are back to a single instruction using the Thumb-2 extensions. And that Thumb-2 instruction allows r0-r15 for any of those three registers, where Thumb is limited to r0-r7.
Look at the ARMv5 Architectural Reference Manual; under each Thumb instruction it shows you the equivalent ARM instruction. Then go to that ARM instruction and compare what you can do with the ARM instruction that you can't do with the Thumb instruction. It is a one-way path: the Thumb instructions (not Thumb-2) have a one-to-one relationship with ARM instructions; every Thumb instruction has an equivalent ARM instruction, but not every ARM instruction has an equivalent Thumb instruction. You should be able to see from this exercise the limitation on the compilers when using the Thumb instruction set. Then get the ARMv7-M Architectural Reference Manual and look at the instruction set: compare the "all Thumb variants" encodings (the ones that include ARMv4T) with the ones that are limited to ARMv6 and/or ARMv7, and see the expansion of features between Thumb and Thumb-2, as well as the Thumb-2-only instructions that have no Thumb counterpart. This should clarify what the compilers have to work with between Thumb and Thumb-2. You can then go so far as to compare Thumb+Thumb-2 with the full-blown ARM instruction set (ARMv7-A/R) and see that Thumb-2 gets a lot closer to ARM, but you lose, for example, conditionals on every instruction, so conditional execution in Thumb becomes comparisons with branching over code, where in ARM you can sometimes have an if-then-else without branching...
Thumb-2 introduced variable-length instructions to the original Thumb; now instructions can be a mixture of 16-bit and 32-bit. That means you retain the size advantage of the original Thumb in everyday code, yet have access to almost the full ARM feature set in more complex code, without the ARM-interworking overhead previously incurred by Thumb.
Aside from the aforementioned access to the full register set from all register operations, Thumb-2 added back branchless conditional execution in the form of the IF-THEN (IT) block. The original Thumb removed the trademark ARM feature of conditional execution on nearly all instructions; in Thumb-2 this is achieved by preceding up to four instructions with an IT instruction that specifies their conditions.
In addition, the instruction set itself has been vastly expanded; for example, the Cortex-M4F implements the DSP extension as well as the FPv4-SP floating point extension. In fact, I believe even NEON can be encoded in Thumb2.
ARM 32bit
ARM is a 32-bit instruction set. All opcodes are 32 bits. The leading bits denote conditional execution, which is generally wasteful as 90+% of code executes unconditionally. ARM mode supports 16 nearly symmetric registers (with some special cases for PC, LR and SP).
Most instructions can include an 's' suffix to set condition codes.
Thumb 16bit
The original Thumb has 16-bit-only opcodes. It does not support conditional execution, and access is mainly restricted to the lower eight registers. All arithmetic instructions set condition codes. Some instructions can retrieve data from the higher registers. It can be looked at as a compression engine on the instruction decode.
For some algorithms and memory topologies, Thumb can be faster than ARM. However, it is fairly rare and needs slow (non-zero wait state) instruction memory for this to be the case.
As a practical example, some 'Game Boy Advance' code would mainly execute in Thumb mode, but would jump to zero-wait-state RAM and transition to ARM mode for a performance-critical routine.
Thumb2 mixed mode
Thumb2 extended the Thumb ISA to allow both 16-bit and 32-bit opcodes. Almost the entire original ARM instruction set functionality can be achieved with Thumb2. Since the instruction stream is denser, it is higher performance than the original ARM in almost every case due to lower instruction fetch overhead.
Thumb2 allows conditional execution of up to four instructions with 'if/else' conditions via the IT opcode. It allows use of all 16 registers, and unified (.syntax unified) code can be written to produce either ARM 32-bit or mixed Thumb2 code.
Unified code will always be faster when Thumb2 is selected. There are fairly rare ARM sequences that cannot be encoded directly in Thumb2; in those few cases the ARM snippets could be faster. But generally, for any large code base, Thumb2 is faster.
This mode can be confusing with loop unrolling and jump tables; these are things an x86 programmer would naturally think of. I.e., there are '.n'/narrow/16-bit and '.w'/wide/32-bit encodings of identical instructions, so if you treat code as an 'array' of tasks, the address computations can be more complex. You also have the possibility of transferring control into the middle of an instruction.
As an example of ARM code that cannot be encoded in Thumb2:
movlo r0,#1
moveq r0,#0
movhi r0,#-1
The above is only possible in ARM mode. However, such sequences are very rare and would only matter if you are porting assembler code from ARM to Thumb2. If you are choosing a compiler mode, Thumb2 should always produce better code (faster and smaller).
Summary
Each mode has variations on available opcodes depending on CPU model. However, the general concepts of each mode and performance are as stated.

Are all of the ARM opcodes 1 byte?

In the x86 architecture there are one-, two- and three-byte opcodes. What about ARM? For example, when I disassemble a binary, can I always take the first byte as the opcode?
No. There are now a few ARM instruction sets. In the primary ARM instruction set, all instructions are 32 bits and the decoding of the instruction is not isolated to one field within the instruction. The Thumb instruction set is based on 16-bit instructions; same answer though, depending on the instruction the decoding is in different places. Then there are the Thumb-2 extensions to the Thumb instruction set; same answer again. A Thumb-2 instruction starts with one of the formerly undefined instruction patterns from Thumb and then adds more bits to distinguish which instruction it is.
Get one of the ARM Architecture Reference Manuals from infocenter.arm.com to see all of this. There are a couple of nice diagrams which show how this all works: the msbits are used to divide the instructions into different types, then depending on the type the rest of the bits are decoded.
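As a rough illustration only (a sketch of the top-level A32 split, not a real decoder): every 32-bit ARM instruction carries a condition field in bits [31:28], and bits [27:25] steer the decode toward the major groups before the remaining bits are examined.
#include <stdint.h>
#include <stdio.h>
/* Crude top-level classification of a 32-bit ARM (A32) encoding.
   This only looks at the major group bits; real decode then examines
   further bits that differ per group. */
const char *arm_group(uint32_t insn)
{
    uint32_t cond = insn >> 28;        /* 0xE = always, 0xF = unconditional space */
    uint32_t op1  = (insn >> 25) & 7;  /* bits [27:25] */
    (void)cond;
    switch (op1) {
    case 0: case 1: return "data processing / misc";
    case 2: case 3: return "load/store";
    case 4:         return "block transfer (ldm/stm)";
    case 5:         return "branch";
    default:        return "coprocessor / supervisor / other";
    }
}
int main(void)
{
    printf("%s\n", arm_group(0xE009800A));  /* and r8, r9, r10 -> data processing */
    return 0;
}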
x86 comes from the 8-bit world, where designs tended to use one memory location/access (at the time that was a byte) for the more commonly used instructions, then have some opcodes multiplex into a second byte. Also, immediates were appended in their entirety. It was okay then, but is inefficient today. x86 should be the last instruction set you learn, if ever; it is not representative of a good instruction set, and you will need to unlearn a number of things to move forward.
Wikipedia documents a number of instruction sets on the wiki page itself; for others it often has a direct link. For IP vendors like ARM and MIPS, as well as many chip vendors, you can go right to their site and get the documentation.
ARM instructions don't really have a concept of 'opcodes'. I don't know the x86 instruction set very well, but what you describe sounds like 6502 machine code where some bytes identify which instruction to execute and others specify data that the instruction uses.
Ignoring Thumb, all ARM instructions are four bytes long. The operands used by the instruction are contained within those four bytes.

Why does ARM have 16 registers?

Why does ARM have only 16 registers? Is that the ideal number?
Does the distance across the register file, with more registers, also increase processing time/power?
As the number of general-purpose registers becomes smaller, you need to start using the stack for variables. Using the stack requires more instructions, so code size increases. Using the stack also increases the number of memory accesses, which hurts both performance and power usage. The trade-off is that to address more registers you need more bits in your instruction, and you need more room on the chip for the register file, which increases power requirements. You can see how differing register counts affect code size and the frequency of load/store instructions by compiling the same set of code with different numbers of registers. The result of that type of exercise can be seen in table 1 of this paper:
Extendable Instruction Set Computing
Register    Program    Load/Store
Count       Size       Frequency
   27       100.00      27.90%
   16       101.62      30.22%
    8       114.76      44.45%
(They used 27 as a base because that is the number of GPRs available on a MIPS processor)
As you can see, there is only a marginal penalty in both program size and the number of loads/stores required as you drop the register count down to 16. The real penalties don't kick in until you drop down to 8 registers. I suspect the ARM designers felt that 16 registers was a kind of sweet spot when you were looking for the best performance per watt.
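If you want to reproduce the flavor of that comparison yourself, one hypothetical experiment (exact numbers will vary with compiler and flags) is to compile the same C code with registers removed from the allocator, or simply for ARM vs Thumb, and compare sizes:
/* spill.c - enough simultaneously live values to pressure the register
   allocator. Possible comparisons (assuming a GNU ARM toolchain):
     arm-none-eabi-gcc -O2 -marm   -c spill.c -o spill_arm.o
     arm-none-eabi-gcc -O2 -mthumb -c spill.c -o spill_thumb.o
     arm-none-eabi-gcc -O2 -marm -ffixed-r8 -ffixed-r9 -ffixed-r10 -c spill.c -o spill_fewer.o
     arm-none-eabi-size spill_*.o
   Fewer usable registers generally means more spill loads/stores. */
int mix(const int *a, const int *b)
{
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0, s7 = 0;
    for (int i = 0; i < 64; i++) {
        s0 += a[i] * b[i];
        s1 += a[i] ^ b[i];
        s2 += a[i] & b[i];
        s3 += a[i] | b[i];
        s4 += a[i] - b[i];
        s5 += a[i] << (b[i] & 7);
        s6 += a[i] >> (b[i] & 3);
        s7 += a[i] + b[i];
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
}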
To choose one of 16 registers you need 4 bits, so it could be that this is the best match for the opcodes (machine instructions); otherwise you would have to introduce a more complex instruction set, which would lead to bigger code, which implies additional costs (execution time).
Wikipedia says it has a "Fixed instruction width of 32 bits to ease decoding and pipelining",
so it is a reasonable trade-off.
32-bit ARM has 16 registers because it only uses 4 bits for encoding the register, not because 16 is the ideal number. Likewise, x86 has only 8 registers because historically 3 bits were used to encode the register so that some instructions fit in a byte.
That's such a limited number that both x86 and ARM, when going to 64-bit, doubled the count to 16 and 32 registers respectively. The old ARM instruction encoding had no bits left over for the larger register numbers, so they had to make a trade-off: drop the ability to execute almost every instruction conditionally and use the 4-bit condition field for the new features (that's an oversimplification; in reality it's not exactly like that because the encoding is new, but you do need 3 more bits for the new register operands).
Back in the 80's (IIRC) an academic paper was published that examined a number of different workloads, comparing expected performance benefits of different numbers of registers. This was at a time when RISC processors were transitioning from academic ideas to mainstream hardware, and it was important to decide what was optimal. CPUs were already pulling ahead of memory in speed, and RISC was making this worse by limiting addressing modes and having separate load and store instructions. Having more registers meant you could "cache" more data for immediate access and therefore access main memory less.
Considering only powers of two, it was found that 32 registers was optimal, although 16 wasn't terribly far behind.
ARM is unique in that each instruction can carry a conditional execution code, avoiding tests & branches. Don't forget, many 32-register machines fix R0 to 0, so conditional tests are done by comparing against R0. I know from experience. 20 years ago I had to program a 'Mode 7' (from SNES terminology) floor. The CPUs were the SH2 for the 32X (or rather 2 of them), the MIPS R3000 (PlayStation) and the 3DO (ARM); the inner loops were 19, 15 & 11 instructions respectively. If the 3DO had been running at the same speed as the other 2, it would have been twice as fast. As it was, it was just a bit slower.

Resources