Does the ARM ITE instruction have any use in this scenario?

My C compiler (GCC) is producing this code which I don't think is optimal
8000500: 2b1c cmp r3, #28
8000502: bfd4 ite le
8000504: f841 3c70 strle.w r3, [r1, #-112]
8000508: f841 0c70 strgt.w r0, [r1, #-112]
It seems to me that the compiler could happily omit the ITE LE instruction, as the two stores following it use the LE and GT flags from the CMP instruction, so only one will actually be performed. The ITE instruction means that only one of the STRs will be tested and performed, so the time should be equal, but it is using an extra word of instruction memory.
Any opinions on this?

In Thumb mode, the instruction opcodes (other than branch instructions) don't have any space for conditional execution. In Thumb1, this meant that one simply had to use branches to skip instructions if necessary.
In Thumb2 mode, the IT instruction was added, which adds the conditional execution capability, without embedding it into the instruction opcodes themselves. In your case, the le condition part of the strle.w instruction is not embedded in the opcode f841 3c70, but is actually inferred from the preceding ite le instruction by the disassembler. If you use a hex editor to change the ite le instruction to something else, the strle.w and strgt.w will both suddenly disassemble into plain str.w.
See the other linked answer, https://stackoverflow.com/a/26001101, for more details.

The unified assembler syntax, which supports A32 and T32 targets, has added some confusion here. What is being shown in the disassembly is more verbose than what is encoded in the opcodes.
Your ITE instruction is very much a Thumb instruction-set placeholder: it defines an IT block which spans the following two instructions (and, being Thumb, those two instructions are not individually conditional). From a micro-architecture/timing point of view, only one instruction needs to be executed (but you shouldn't assume that this folding always takes place).
The strle/strgt syntax could be used on its own for an A32 target, where the IT block is not necessary since the instruction set has a dedicated condition-code field.
In order to write (or disassemble) code which can be used by both A32 and T32 assemblers, what you have here is both approaches to conditional execution written together. This has the advantage that the same assembly routine can be more portable (even if the resulting code is not identical; optimisations in the target CPU will also differ).
With T32, the combination of an IT and a single 16-bit instruction matches the instruction density of the equivalent A32 instruction; if more than one conditional instruction can be combined, there is an overall win.
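For reference, here is a minimal C sketch (mine, not from the question) of the kind of source that typically produces this cmp / ite / strle / strgt pattern when compiled with GCC at -O2 for a Thumb-2 target; the function name and the constant 28 are just illustrative:
/* Hypothetical example: a two-way conditional store like the one in
 * the question's disassembly. GCC for ARMv7-M tends to compile the
 * ternary below to cmp; ite le; strle; strgt. */
void store_clamped(int x, int fallback, int *out) {
    *out = (x <= 28) ? x : fallback;   /* one of two stores, chosen by flags */
}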

Related

Drawing a character in VGA memory with GNU C inline assembly

I'm learning to do some low-level VGA programming in DOS with C and inline assembly. Right now I'm trying to create a function that prints a character on the screen.
This is my code:
//The character bitmaps
uint8_t characters[464] = {
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x20,0x20,0x20,0x20,0x00,0x20,0x00,0x50,
0x50,0x00,0x00,0x00,0x00,0x00,0x50,0xf8,0x50,0x50,0xf8,0x50,0x00,0x20,0xf8,0xa0,
0xf8,0x28,0xf8,0x00,0xc8,0xd0,0x20,0x20,0x58,0x98,0x00,0x40,0xa0,0x40,0xa8,0x90,
0x68,0x00,0x20,0x40,0x00,0x00,0x00,0x00,0x00,0x20,0x40,0x40,0x40,0x40,0x20,0x00,
0x20,0x10,0x10,0x10,0x10,0x20,0x00,0x50,0x20,0xf8,0x20,0x50,0x00,0x00,0x20,0x20,
0xf8,0x20,0x20,0x00,0x00,0x00,0x00,0x00,0x60,0x20,0x40,0x00,0x00,0x00,0xf8,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x60,0x60,0x00,0x00,0x08,0x10,0x20,0x40,0x80,
0x00,0x70,0x88,0x98,0xa8,0xc8,0x70,0x00,0x20,0x60,0x20,0x20,0x20,0x70,0x00,0x70,
0x88,0x08,0x70,0x80,0xf8,0x00,0xf8,0x10,0x30,0x08,0x88,0x70,0x00,0x20,0x40,0x90,
0x90,0xf8,0x10,0x00,0xf8,0x80,0xf0,0x08,0x88,0x70,0x00,0x70,0x80,0xf0,0x88,0x88,
0x70,0x00,0xf8,0x08,0x10,0x20,0x20,0x20,0x00,0x70,0x88,0x70,0x88,0x88,0x70,0x00,
0x70,0x88,0x88,0x78,0x08,0x70,0x00,0x30,0x30,0x00,0x00,0x30,0x30,0x00,0x30,0x30,
0x00,0x30,0x10,0x20,0x00,0x00,0x10,0x20,0x40,0x20,0x10,0x00,0x00,0xf8,0x00,0xf8,
0x00,0x00,0x00,0x00,0x20,0x10,0x08,0x10,0x20,0x00,0x70,0x88,0x10,0x20,0x00,0x20,
0x00,0x70,0x90,0xa8,0xb8,0x80,0x70,0x00,0x70,0x88,0x88,0xf8,0x88,0x88,0x00,0xf0,
0x88,0xf0,0x88,0x88,0xf0,0x00,0x70,0x88,0x80,0x80,0x88,0x70,0x00,0xe0,0x90,0x88,
0x88,0x90,0xe0,0x00,0xf8,0x80,0xf0,0x80,0x80,0xf8,0x00,0xf8,0x80,0xf0,0x80,0x80,
0x80,0x00,0x70,0x88,0x80,0x98,0x88,0x70,0x00,0x88,0x88,0xf8,0x88,0x88,0x88,0x00,
0x70,0x20,0x20,0x20,0x20,0x70,0x00,0x10,0x10,0x10,0x10,0x90,0x60,0x00,0x90,0xa0,
0xc0,0xa0,0x90,0x88,0x00,0x80,0x80,0x80,0x80,0x80,0xf8,0x00,0x88,0xd8,0xa8,0x88,
0x88,0x88,0x00,0x88,0xc8,0xa8,0x98,0x88,0x88,0x00,0x70,0x88,0x88,0x88,0x88,0x70,
0x00,0xf0,0x88,0x88,0xf0,0x80,0x80,0x00,0x70,0x88,0x88,0xa8,0x98,0x70,0x00,0xf0,
0x88,0x88,0xf0,0x90,0x88,0x00,0x70,0x80,0x70,0x08,0x88,0x70,0x00,0xf8,0x20,0x20,
0x20,0x20,0x20,0x00,0x88,0x88,0x88,0x88,0x88,0x70,0x00,0x88,0x88,0x88,0x88,0x50,
0x20,0x00,0x88,0x88,0x88,0xa8,0xa8,0x50,0x00,0x88,0x50,0x20,0x20,0x50,0x88,0x00,
0x88,0x50,0x20,0x20,0x20,0x20,0x00,0xf8,0x10,0x20,0x40,0x80,0xf8,0x00,0x60,0x40,
0x40,0x40,0x40,0x60,0x00,0x00,0x80,0x40,0x20,0x10,0x08,0x00,0x30,0x10,0x10,0x10,
0x10,0x30,0x00,0x20,0x50,0x88,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xf8,
0x00,0xf8,0xf8,0xf8,0xf8,0xf8,0xf8};
/**************************************************************************
* put_char *
* Print char *
**************************************************************************/
void put_char(int x ,int y,int ascii_char ,byte color){
__asm__(
"push %si\n\t"
"push %di\n\t"
"push %cx\n\t"
"mov color,%dl\n\t" //test color
"mov ascii_char,%al\n\t" //test char
"sub $32,%al\n\t"
"mov $7,%ah\n\t"
"mul %ah\n\t"
"lea $characters,%si\n\t"
"add %ax,%si\n\t"
"mov $7,%cl\n\t"
"0:\n\t"
"segCS %lodsb\n\t"
"mov $6,%ch\n\t"
"1:\n\t"
"shl $1,%al\n\t"
"jnc 2f\n\t"
"mov %dl,%ES:(%di)\n\t"
"2:\n\t"
"inc %di\n\t"
"dec %ch\n\t"
"jnz 1b\n\t"
"add $320-6,%di\n\t"
"dec %cl\n\t"
"jnz 0b\n\t"
"pop %cx\n\t"
"pop %di\n\t"
"pop %si\n\t"
"retn"
);
}
I'm following this series of tutorials written in Pascal: http://www.joco.homeserver.hu/vgalessons/lesson8.html.
I changed the assembly syntax to suit the gcc compiler, but I'm still getting these errors:
Operand mismatch type for 'lea'
No such instruction 'segcs lodsb'
No such instruction 'retn'
EDIT:
I have been working on improving my code, and at least now I see something on the screen. Here's my updated code:
/**************************************************************************
* put_char *
* Print char *
**************************************************************************/
void put_char(int x,int y){
int char_offset;
int l,i,j,h,offset;
j,h,l,i=0;
offset = (y<<8) + (y<<6) + x;
__asm__(
"movl _VGA, %%ebx;" // VGA memory pointer
"addl %%ebx,%%edi;" //%di points to screen
"mov _ascii_char,%%al;"
"sub $32,%%al;"
"mov $7,%%ah;"
"mul %%ah;"
"lea _characters,%%si;"
"add %%ax,%%si;" //SI point to bitmap
"mov $7,%%cl;"
"0:;"
"lodsb %%cs:(%%si);" //load next byte of bitmap
"mov $6,%%ch;"
"1:;"
"shl $1,%%al;"
"jnc 2f;"
"movb %%dl,(%%edi);" //plot the pixel
"2:\n\t"
"incl %%edi;"
"dec %%ch;"
"jnz 1b;"
"addl $320-6,%%edi;"
"dec %%cl;"
"jnz 0b;"
: "=D" (offset)
: "d" (current_color)
);
}
I was trying to write the letter "S"; the result is the green pixels on the upper-left side of the screen. No matter what x and y I give the function, it always plots the pixels in that same spot.
Can anyone help me correct my code?
See below for an analysis of some things that are specifically wrong with your put_char function, and a version that might work. (I'm not sure about the %cs segment override, but other than that it should do what you intend).
Learning DOS and 16-bit asm isn't the best way to learn asm
First of all, DOS and 16-bit x86 are thoroughly obsolete, and are not easier to learn than normal 64-bit x86. Even 32-bit x86 is obsolete, but still in wide use in the Windows world.
32-bit and 64-bit code don't have to care about a lot of 16-bit limitations / complications like segments or limited register choice in addressing modes. Some modern systems do use segment overrides for thread-local storage, but learning how to use segments in 16-bit code is barely connected to that.
One of the major benefits to knowing asm is for debugging / profiling / optimizing real programs. If you want to understand how to write C or other high-level code that can (and actually does) compile to efficient asm, you'll probably be looking at compiler output. This will be 64-bit (or 32-bit). (e.g. see Matt Godbolt's CppCon2017 talk: “What Has My Compiler Done for Me Lately? Unbolting the Compiler's Lid” which has an excellent intro to reading x86 asm for total beginners, and to looking at compiler output).
Asm knowledge is useful when looking at performance-counter results annotating a disassembly of your binary (perf stat ./a.out && perf report -Mintel: see Chandler Carruth's CppCon2015 talk: "Tuning C++: Benchmarks, and CPUs, and Compilers! Oh My!"). Aggressive compiler optimizations mean that looking at cycle / cache-miss / stall counts per source line are much less informative than per instruction.
Also, for your program to actually do anything, it has to either talk to hardware directly, or make system calls. Learning DOS system calls for file access and user input is a complete waste of time (except for answering the steady stream of SO questions about how to read and print multi-digit numbers in 16-bit code). They're quite different from the APIs in the current major OSes. Developing new DOS applications is not useful, so you'd have to learn another API (as well as ABI) when you get to the stage of doing something with your asm knowledge.
Learning asm on an 8086 simulator is even more limiting: 186, 286, and 386 added many convenient instructions like imul ecx, 15, making ax less "special". Limiting yourself to only instructions that work on 8086 means you'll figure out "bad" ways to do things. Other big ones are movzx / movsx, shift by an immediate count (other than 1), and push immediate. Besides performance, it's also easier to write code when these are available, because you don't have to write a loop to shift by more than 1 bit.
Suggestions for better ways to teach yourself asm
I mostly learned asm from reading compiler output, then making small changes. I didn't try to write stuff in asm when I didn't really understand things, but if you're going to learn quickly (rather than just evolve an understanding while debugging / profiling C), you probably need to test your understanding by writing your own code. You do need to understand the basics: that there are 8 or 16 integer registers plus the flags and instruction pointer, and that every instruction makes a well-defined modification to the current architectural state of the machine. (See the Intel insn ref manual for complete descriptions of every instruction; links in the x86 wiki, along with much more good stuff.)
You might want to start with simple things like writing a single function in asm, as part of a bigger program. Understanding the kind of asm needed to make system calls is useful, but in real programs it's normally only useful to hand-write asm for inner loops that don't involve any system calls. It's time-consuming to write asm to read input and print results, so I'd suggest doing that part in C. Make sure you read the compiler output and understand what's going on, and the difference between an integer and a string, and what strtol and printf do, even if you don't write them yourself.
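As a hedged illustration of that workflow (my example, not from the answer): keep the asm to a single leaf function in its own .S file, and let C drive it and do the I/O. The file and function names are hypothetical.
/* sum3.S -- a leaf function in x86-64 AT&T syntax:
 *   .globl sum3
 *   sum3:                       # int sum3(int a, int b, int c)
 *       leal (%rdi,%rsi), %eax  # a + b (SysV args arrive in edi, esi, edx)
 *       addl %edx, %eax         # + c
 *       ret
 */
#include <stdio.h>

extern int sum3(int a, int b, int c);   /* implemented in sum3.S */

int main(void) {
    printf("%d\n", sum3(1, 2, 3));      /* C handles the input/output part */
    return 0;
}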
Once you think you understand enough of the basics, find a function in some program you're familiar with and/or interested in, and see if you can beat the compiler and save instructions (or use faster instructions). Or implement it yourself without using the compiler output as a starting point, whichever you find more interesting. This answer might be interesting, although the focus there was finding C source that got the compiler to produce the optimal ASM.
How to try to solve your own problems (before asking an SO question)
There are many SO questions from people asking "how do I do X in asm", and the answer is usually "the same as you would in C". Don't get so caught up in asm being unfamiliar that you forget how to program. Figure out what needs to happen to the data the function operates on, then figure out how to do that in asm. If you get stuck and have to ask a question, you should have most of a working implementation, with just one part that you don't know what instructions to use for one step.
You should do this with 32 or 64bit x86. I'd suggest 64bit, since the ABI is nicer, but 32bit functions will force you to make more use of the stack. So that might help you understand how a call instruction puts the return address on the stack, and where the args the caller pushed actually are after that. (This appears to be what you tried to avoid dealing with by using inline asm).
Programming hardware directly is neat, but not a generally useful skill
Learning how to do graphics by directly modifying video RAM is not useful, other than to satisfy curiosity about how computers used to work. You can't use that knowledge for anything. Modern graphics APIs exist to let multiple programs draw in their own regions of the screen, and to allow indirection (e.g. draw on a texture instead of the screen directly, so 3D window-flipping alt-tab can look fancy). There are too many reasons to list here for not drawing directly on video RAM.
Drawing on a pixmap buffer and then using a graphics API to copy it to the screen is possible. Still, doing bitmap graphics at all is more or less obsolete, unless you're generating images for PNG or JPEG or something (e.g. optimizing the conversion of histogram bins to a scatter plot in the back-end code for a web service). Modern graphics APIs abstract away the resolution, so your app can draw things at a reasonable size regardless of how big each pixel is (small but extremely high-res screen vs. big TV at low res).
It is kind of cool to write to memory and see something change on-screen. Or even better, hook up LEDs (with small resistors) to the data bits on a parallel port, and run an outb instruction to turn them on/off. I did this on my Linux system ages ago. I made a little wrapper program that used iopl(2) and inline asm, and ran it as root. You can probably do similar on Windows. You don't need DOS or 16bit code to get your feet wet talking to the hardware.
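A minimal sketch of that LED experiment on modern Linux, assuming an x86 machine that still has a legacy parallel port at 0x378 and a build run as root; the port address and bit pattern are illustrative:
#include <sys/io.h>    /* iopl(), outb() -- glibc, x86 Linux only */
#include <unistd.h>

int main(void) {
    if (iopl(3) != 0)      /* raise the I/O privilege level; needs root */
        return 1;
    outb(0xAA, 0x378);     /* alternating pattern on the LPT1 data pins */
    sleep(1);
    outb(0x00, 0x378);     /* all pins (LEDs) off */
    return 0;
}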
in/out instructions, normal loads/stores to memory-mapped IO, and DMA are how real drivers talk to hardware, including things far more complicated than parallel ports. It's fun to know how your hardware "really" works, but only spend time on it if you're actually interested, or want to write drivers. The Linux source tree includes drivers for boatloads of hardware, and is often well commented, so if you like reading code as much as writing code, that's another way to get a feel for what real drivers do when they talk to hardware.
It's generally good to have some idea how things work under the hood. If you want to learn about how graphics used to work ages ago (with VGA text mode and color / attribute bytes), then sure, go nuts. Just be aware that modern OSes don't use VGA text mode, so you aren't even learning what happens under the hood on modern computers.
Many people enjoy https://retrocomputing.stackexchange.com/, reliving a simpler time when computers were less complex and couldn't support as many layers of abstraction. Just be aware that's what you're doing. It might be a good stepping stone to learning to write drivers for modern hardware, if you're sure that's why you want to understand asm / hardware.
Inline asm
You are taking a totally incorrect approach to using inline ASM. You seem to want to write whole functions in asm, so you should just do that. e.g. put your code in asmfuncs.S or something. Use .S if you want to keep using GNU / AT&T syntax; or use .asm if you want to use Intel / NASM / YASM syntax (which I would recommend, since the official manuals all use Intel syntax. See the x86 wiki for guides and manuals.)
GNU inline asm is the hardest way to learn ASM. You have to understand everything that your asm does, and what the compiler needs to know about it. It's really hard to get everything right. For example, in your edit, that block of inline asm modifies many registers that you don't list as clobbered, including %ebx which is a call-preserved register (so this is broken even if that function isn't inlined). At least you took out the ret, so things won't break as spectacularly when the compiler inlines this function into the loop that calls it. If that sounds really complicated, that's because it is, and part of why you shouldn't use inline asm to learn asm.
This answer to a similar question from misusing inline asm while trying to learn asm in the first place has more links about inline asm and how to use it well.
Getting this mess working, maybe
This part could be a separate answer, but I'll leave it together.
Besides your whole approach being fundamentally a bad idea, there is at least one specific problem with your put_char function: you use offset as an output-only operand. gcc quite happily compiles your whole function to a single ret instruction, because the asm statement isn't volatile, and its output isn't used. (Only inline asm statements without output operands are implicitly volatile; yours has an output, so the whole statement can be optimized away when that output is dead.)
I put your function on godbolt, so I could look at what assembly the compiler generates surrounding it. That link is to the fixed maybe-working version, with correctly-declared clobbers, comments, cleanups, and optimizations. See below for the same code, if that external link ever breaks.
I used gcc 5.3 with the -m16 option, which is different from using a real 16bit compiler. It still does everything the 32bit way (using 32bit addresses, 32bit ints, and 32bit function args on the stack), but tells the assembler that the CPU will be in 16bit mode, so it will know when to emit operand-size and address-size prefixes.
Even if you compile your original version with -O0, the compiler computes offset = (y<<8) + (y<<6) + x; (i.e. y*320 + x, 320 bytes being the row stride of VGA mode 13h), but doesn't put it in %edi, because you didn't ask it to. Specifying it as another input operand would have worked. After the inline asm, it stores %edi into -12(%ebp), where offset lives.
Other stuff wrong with put_char:
You pass 2 things (ascii_char and current_color) into your function through globals, instead of function arguments. Yuck, that's disgusting. VGA and characters are constants, so loading them from globals doesn't look so bad. Writing in asm means you should ignore good coding practices only when it helps performance by a reasonable amount. Since the caller probably had to store those values into the globals, you're not saving anything compared to the caller storing them on the stack as function args. And for x86-64, you'd be losing perf because the caller could just pass them in registers.
Also:
j,h,l,i=0; // sets i=0, does nothing to j, h, or l.
// gcc warns: left-hand operand of comma expression has no effect
j;h;l;i=0; // equivalent to this
j=h=l=i=0; // This is probably what you meant
All the local variables are unused anyway, other than offset. Were you going to write it in C or something?
You use 16bit addresses for characters, but 32bit addressing modes for VGA memory. I assume this is intentional, but I have no idea if it's correct. Also, are you sure you should use a CS: override for the loads from characters? Does the .rodata section go into the code segment? Although you didn't declare uint8_t characters[464] as const, so it's probably just in the .data section anyway. I consider myself fortunate that I haven't actually written code for a segmented memory model, but that still looks suspicious.
If you're really using djgpp, then according to Michael Petch's comment, your code will run in 32bit mode. Using 16bit addresses is thus a bad idea.
Optimizations
You can avoid using %ebx entirely by doing this, instead of loading into ebx and then adding %ebx to %edi.
"add _VGA, %%edi\n\t" // load from _VGA, add to edi.
You don't need lea to get an address into a register. You can just use
"mov %%ax, %%si\n\t"
"add $_characters, %%si\n\t"
$_characters means the address as an immediate constant. We can save a lot of instructions by combining this with the previous calculation of the offset into the characters array of bitmaps. The immediate-operand form of imul lets us produce the result in %si in the first place:
"movzbw _ascii_char,%%si\n\t"
//"sub $32,%%ax\n\t" // AX = ascii_char - 32
"imul $7, %%si, %%si\n\t"
"add $(_characters - 32*7), %%si\n\t" // Do the -32 at the same time as adding the table address, after multiplying
// SI points to characters[(ascii_char-32)*7]
// i.e. the start of the bitmap for the current ascii character.
Since this form of imul only keeps the low 16b of the 16*16 -> 32b multiply, the 2- and 3-operand forms of imul can be used for signed or unsigned multiplies, which is why only imul (not mul) has those extra forms. For larger operand-size multiplies, 2- and 3-operand imul is faster, because it doesn't have to store the high half in %[er]dx.
You could simplify the inner loop a bit, but it would complicate the outer loop slightly: you could branch on the zero flag, as set by shl $1, %al, instead of using a counter. That would also make it unpredictable (like the jump over the store for non-foreground pixels), so the increased branch mispredictions might cost more than the do-nothing loop iterations they save. It would also mean you'd need to recalculate %edi in the outer loop each time, because the inner loop wouldn't run a constant number of times. But it could look like:
... same first part of the loop as before
// re-initialize %edi to first_pixel-1, based on outer-loop counter
"lea -1(%%edi), %%ebx\n"
".Lbit_loop:\n\t" // map the 1bpp bitmap to 8bpp VGA memory
"incl %%ebx\n\t" // inc before shift, to preserve flags
"shl $1,%%al\n\t"
"jnc .Lskip_store\n\t" // transparency: only store on foreground pixels
"movb %%dl,(%%ebx)\n" //plot the pixel
".Lskip_store:\n\t"
"jnz .Lbit_loop\n\t" // flags still set from shl
"addl $320,%%edi\n\t" // WITHOUT the -6
"dec %%cl\n\t"
"jnz .Lbyte_loop\n\t"
Note that the bits in your character bitmaps are going to map to bytes in VGA memory like {7 6 5 4 3 2 1 0}, because you're testing the bit shifted out by a left shift. So it starts with the MSB. Bits in a register are always "big endian". A left shift multiplies by two, even on a little-endian machine like x86. Little-endian only affects ordering of bytes in memory, not bits in a byte, and not even bytes inside registers.
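In plain C, the inner loop's MSB-first walk amounts to something like this (my sketch, not part of the fix; the helper name and the 6-pixel row width follow the code above):
#include <stdint.h>

/* Sketch only: row is one byte of the 1bpp bitmap; vga points at the
 * leftmost pixel of this bitmap row in 8bpp mode-13h memory. */
static void plot_row(uint8_t *vga, uint8_t row, uint8_t color) {
    for (int px = 0; px < 6; px++) {   /* 6 pixels per bitmap row */
        if (row & (0x80 >> px))        /* MSB first: bit 7 = leftmost pixel */
            vga[px] = color;           /* transparency: store foreground only */
    }
}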
A version of your function that might do what you intended.
This is the same as the godbolt link.
void put_char(int x,int y){
int offset = (y<<8) + (y<<6) + x;
__asm__ volatile ( // volatile is implicit for asm statements with no outputs, but better safe than sorry.
"add _VGA, %%edi\n\t" // edi points to VGA + offset.
"movzbw _ascii_char,%%si\n\t" // Better: use an input operand
//"sub $32,%%ax\n\t" // AX = ascii_char - 32
"imul $7, %%si, %%si\n\t" // can't fold the load into this because it's not zero-padded
"add $(_characters - 32*7), %%si\n\t" // Do the -32 at the same time as adding the table address, after multiplying
// SI points to characters[(ascii_char-32)*7]
// i.e. the start of the bitmap for the current ascii character.
"mov $7,%%cl\n"
".Lbyte_loop:\n\t"
"lodsb %%cs:(%%si)\n\t" //load next byte of bitmap
"mov $6,%%ch\n"
".Lbit_loop:\n\t" // map the 1bpp bitmap to 8bpp VGA memory
"shl $1,%%al\n\t"
"jnc .Lskip_store\n\t" // transparency: only store on foreground pixels
"movb %%dl,(%%edi)\n" //plot the pixel
".Lskip_store:\n\t"
"incl %%edi\n\t"
"dec %%ch\n\t"
"jnz .Lbit_loop\n\t"
"addl $320-6,%%edi\n\t"
"dec %%cl\n\t"
"jnz .Lbyte_loop\n\t"
: "+&D" (offset) // EDI modified by the asm, compiler needs to know that, so use a read-write "+" input. Early-clobber "&" because we read the other input after modifying this.
: "d" (current_color) // used read-only
: "%eax", "%ecx", "%esi", "memory"
// omit the memory clobber if your C never touches VGA memory, and your asm never loads/stores anywhere else.
// but that's not the case here: the asm loads from memory written by C
// without listing it as a memory operand (even a pointer in a register isn't sufficient)
// so gcc might optimize away "dead" stores to it, or reorder the asm with loads/stores to it.
);
}
Re: the "memory" clobber, see How can I indicate that the memory *pointed* to by an inline ASM argument may be used?
I didn't use dummy output operands to leave register allocation up to the compiler's discretion, but that's a good idea to reduce the overhead of getting data into the right places for inline asm (extra mov instructions). For example, here there was no need to force the compiler to put offset in %edi; it could have been any register we aren't already using.
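A hedged sketch of that dummy-output technique (hypothetical function, x86 AT&T syntax): declare a scratch variable as an early-clobber output so gcc picks the register, instead of hard-coding and clobbering a specific one.
/* Sketch, assuming GCC on x86: "scratch" is a dummy output whose value
 * is never used; it just reserves a compiler-chosen scratch register. */
static inline int add_shifted(int x, int y) {
    int result, scratch;
    __asm__ ("movl %2, %1\n\t"    /* scratch = x          */
             "sall $2, %1\n\t"    /* scratch <<= 2        */
             "leal (%1,%3), %0"   /* result = scratch + y */
             : "=r" (result), "=&r" (scratch)  /* "&": early-clobber */
             : "r" (x), "r" (y));
    return result;
}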

ARM Instruction Set - Changing the CPSR (S bit)

I was wondering why ARM instructions don't set the CPSR by default (like x86), and instead the S bit must be used in those cases. Do instructions that don't change the CPSR offer better performance? For example, does an ADD instruction offer better performance than ADDS? Or what is the real deal?
It is for performance, or perhaps was. If you always change the flags, then it is hard to use one flag result across multiple instructions without a branch, and branches mess with your pipeline.
if(a==0)
{
b=b+1;
c=0;
}
else
{
b=0;
c=c+1;
}
Traditionally you would have to implement that with branches (pseudocode, not real asm):
cmp a,0
bne notzero
add b,b,1
mov c,0
b waszero
notzero:
mov b,0
add c,c,1
waszero:
So you suffer a branch no matter what.
But with conditional execution:
cmp a,0
addeq b,b,1
moveq c,0
addne c,c,1
movne b,0
No branches; you simply rip through the code. The only way this can work is if 1) each instruction has an option to execute conditionally based on the flags, and 2) instructions that modify the flags have an option not to modify them.
Depending on the processor family/architecture, the add and maybe even the mov will modify the flags, so you have to have both conditional execution AND the option not to set flags. That is why ARM has both an ADDS and an ADD.
I think they got rid of all that with the 64-bit architecture, so perhaps, as interesting and cool as it was, it wasn't used enough to be worth it, or they just needed those four bits to keep all/some instructions at 32 bits.
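For comparison, a hedged sketch of what replaces per-instruction predication on AArch64: compilers select between values with CSEL/CSINC after a single compare. This is my example of the usual GCC -O2 output pattern, not a guaranteed code generation:
/* AArch64 has no per-instruction condition field; for code like this a
 * compiler generally emits cmp + csel/csinc rather than addeq/addne. */
int pick(int a, int b, int c) {
    return (a == 0) ? b + 1 : c + 1;
}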
I was wondering why ARM instructions don't set the CPSR by default (like x86), and instead the S bit must be used in these cases?
It is a choice, and it depends on context. The extra flexibility is only limited by a programmer's imagination.
When instructions don't change the CPSR, do they offer better performance? For example, does an ADD instruction offer better performance than ADDS?
Most likely never (see Note 1). I.e., an instruction that doesn't set the CPSR does not execute faster (fewer clocks) for the majority of ARM CPUs and instructions.
Or what is the real deal?
Consider some 'C' code,
int i, sum;
char *p = array; /* passed in */
for(i = 0, sum = 0; i < 10 ; i++)
sum += p[i];
return sum;
This can translate to,
mov r2, r0 ; get "array" to R2
mov r1, #10 ; counter (reverse direction)
mov r0, #0 ; sum = 0
1:
subs r1, #1 ; set conditions
ldrb r3, [r2], #1 ; load next byte (post-increment); does not affect conditions
add r0, r0, r3 ; accumulate; does not affect conditions
bne 1b
bx lr
In this case, the loop body is simple. However, if there are no conditionals within the loop, then a compiler (or assembly programmer) may schedule the loop decrement wherever they like and still have the conditions tested much later. This can be more important with more complex logic, and where the CPU may have stalls due to data dependencies. It can also be important with conditional execution.
The optional 'S' is more a feature of many instructions than a single instruction.
Note 1: Someone can always make an ARM CPU that does this; you would have to look at the data sheets. I don't know of any CPU that takes more time to set conditions.

what is the use of MSRNE instruction in ARM

I am not able to find any documentation on this instruction.
Is this a macro or an instruction? It is used mainly in context switching, but I am not able to understand its purpose.
This is an MSR instruction, conditionally executed as Not Equal (NE).
MSR is used to move a value from a general-purpose register into a status register (the CPSR or SPSR; writes to system co-processor registers, for things such as cache invalidation/flushing, use the separate MCR instruction). In a context switch, MSR is typically used to restore the saved program status of the task being resumed.
The NE part makes the instruction conditional on the Zero flag (Z) being clear, i.e. a previous flag-setting operation produced a non-zero ("not equal") result.
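A hedged sketch of the shape this takes in practice (A32 mode, GNU C inline asm; the function, register choices, and SPSR field mask are illustrative, not from any real kernel):
/* Sketch, assuming an A-profile target compiled for ARM (not Thumb) mode:
 * write the saved status back only when the task actually changed,
 * i.e. only when the preceding cmp left the Z flag clear (NE). */
static inline void restore_psr_if_switched(unsigned long saved_psr,
                                           unsigned long prev,
                                           unsigned long next)
{
    __asm__ volatile (
        "cmp   %1, %2\n\t"          /* Z=1 if prev == next           */
        "msrne spsr_cxsf, %0\n\t"   /* NE: executes only when Z == 0 */
        :
        : "r" (saved_psr), "r" (prev), "r" (next)
        : "cc");
}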

ARM Thumb/Thumb-2 performance

I am working on an ARM Cortex-M3 controller which has the Thumb-2 instruction set.
Thumb mode is used to compress instructions to a 16-bit size, so code size is reduced. But with plain Thumb mode, why is it said that performance is reduced?
In the case of Thumb-2, performance is said to be improved, as per these two links:
Wikipedia
Arm.com
Improve performance in cases where a single 16-bit instruction restricts functions available to the compiler.
A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory.
What exactly is this performance? Can someone give a few examples related to it?
When compared against the ARM 32-bit instruction set, the Thumb 16-bit instruction set (not talking about Thumb-2 extensions yet) takes less space because the instructions are half the size, but there is in general a performance drop, because it takes more instructions to do the same thing as in ARM. There are fewer features in the instruction set, and most instructions only operate on registers r0-r7. In an apples-to-apples comparison, more instructions to do the same thing is slower.
Now the thumb2 extensions take formerly undefined thumb instructions and use them to create 32-bit thumb instructions. Understand that there is more than one set of thumb2 extensions: ARMv6-M adds perhaps a couple dozen, ARMv7-M adds something like 150 instructions to the thumb instruction set, and I don't know what ARMv8 or the future holds. So assuming ARMv7-M, they have bridged the gap between what you can do in thumb and what you can do in ARM. thumb2 is a reduced ARM instruction set, as thumb is, but not as reduced. So it might still take more instructions to do the same thing in thumb2 (thumb plus the thumb2 extensions) compared to ARM.
This gives a taste of the issue: a single instruction in ARM and its equivalent in thumb.
ARM
and r8,r9,r10
THUMB
push {r0,r1}
mov r0,r8
mov r1,r9
and r0,r1
mov r1,r10
and r0,r1
mov r8,r0
pop {r0,r1}
Now a compiler wouldn't do that; the compiler would know it is targeting thumb and do things differently by choosing other registers. You still have fewer registers and fewer features per instruction:
mov r0,r1
and r0,r2
It still takes two instructions/execution cycles to AND two registers together without modifying the operands and put the result in a third register. Thumb2 has a three-register AND, so you are back to a single instruction using the thumb2 extensions. And that thumb2 instruction allows r0-r15 for any of those three registers, where thumb is limited to r0-r7.
Look at the ARMv5 Architectural Reference Manual; under each thumb instruction it shows you the equivalent ARM instruction. Then go to that ARM instruction and compare what you can do with it that you can't do with the thumb instruction. It is a one-way path: the thumb instructions (not thumb2) have a one-to-one relationship with ARM instructions. All thumb instructions have an equivalent ARM instruction, but not all ARM instructions have an equivalent thumb instruction. You should be able to see from this exercise the limitation on compilers when using the thumb instruction set. Then get the ARMv7-M Architectural Reference Manual and look at the instruction set; compare the "all thumb variants" encodings (the ones that include ARMv4T) with the ones that are limited to ARMv6 and/or v7, and see the expansion of features between thumb and thumb2, as well as the thumb2-only instructions that have no thumb counterpart. This should clarify what the compilers have to work with between thumb and thumb2. You can then go so far as to compare thumb+thumb2 with the full-blown ARM instructions (ARMv7-A/R, is that what it is called?) and see that thumb2 gets a lot closer to ARM. But you lose, for example, conditionals on every instruction, so conditional execution in thumb becomes comparisons with branching over code, where in ARM you can sometimes have an if-then-else without branching...
Thumb-2 introduced variable length instructions to the original Thumb; now instructions can be a mixture of 16-bit and 32-bit. That means you retain the size advantage of the original Thumb in everyday code, but now have access to almost the full ARM feature-set in more complex code, but without the ARM-interworking overhead previously incurred by Thumb.
Aside from the aforementioned access to the full register set from all register operations, Thumb-2 added back branchless conditional execution in the form of the IF-THEN (IT) block. The original Thumb removed the trademark ARM feature of conditional execution on nearly all instructions; this is now achieved in Thumb-2 by prepending the IT instruction with conditions for up to four succeeding instructions.
In addition, the instruction set itself has been vastly expanded; for example, the Cortex-M4F implements the DSP extension as well as the FPv4-SP floating point extension. In fact, I believe even NEON can be encoded in Thumb2.
ARM 32bit
ARM is a 32-bit instruction set: all opcodes are 32 bits, with the leading four bits encoding a condition for conditional execution. This is generally wasteful, as 90+% of code executes unconditionally. ARM mode supports 16 nearly symmetric registers (with some special cases for PC, LR and SP).
Most instructions accept an 's' suffix to set the condition codes.
Thumb 16bit
The original Thumb has 16-bit-only opcodes. It does not support conditional execution, and access is mainly restricted to the lower eight registers. All arithmetic instructions set the condition codes. Some instructions can retrieve data from the higher registers. It can be looked at as a compression engine on the instruction decode.
For some algorithms and memory topologies, Thumb can be faster than ARM. However, this is fairly rare, and needs slow (non-zero wait state) instruction memory to be the case.
As a practical example, some 'Game Boy Advance' code would mainly execute in Thumb mode, but would jump to zero-wait-state RAM and transition to ARM mode for performance-critical routines.
Thumb2 mixed mode
Thumb2 extended the Thumb ISA to allow both 16-bit and 32-bit opcodes. Almost the entire original ARM instruction set's functionality can be achieved with Thumb2. Since the instruction stream is denser, it is higher performance than the original ARM in almost every case, due to lower instruction-fetch overhead.
Thumb2 allows conditional execution of up to four instructions via the IT opcode's 'if/else' conditions. It allows use of all 16 registers, and unified code (.syntax unified) can be written to produce either ARM 32-bit or mixed Thumb2 code.
Unified code will always be faster when Thumb2 is selected. There are fairly rare ARM sequences that cannot be encoded directly in Thumb2; in those few cases the ARM snippets could be faster. But generally, for any large code base, Thumb2 is faster.
This mode can be confusing with loop unrolling and jump tables, which are things an x86 programmer would naturally think of. I.e., there are '.n'/narrow/16-bit and '.w'/wide/32-bit encodings of identical instructions, so if you treat code as an 'array' of tasks, the computations can be more complex. You also have the possibility of transferring control into the middle of an instruction.
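For instance, a hedged sketch using GNU unified syntax (assuming a Thumb-2 capable target, e.g. gcc -mthumb on ARMv7; the function is hypothetical) of the same add in both encodings:
/* Sketch: force the narrow and wide encodings of the same instruction.
 * The "l" constraint keeps x in r0-r7 so the 16-bit encoding is legal. */
static inline int inc_twice(int x) {
    __asm__ ("adds.n %0, %0, #1\n\t"   /* 16-bit encoding */
             "adds.w %0, %0, #1\n\t"   /* 32-bit encoding, same operation */
             : "+l" (x)
             :
             : "cc");
    return x;
}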
As an example of ARM code that cannot be encoded directly in Thumb2:
movlo r0,#1
moveq r0,#0
movhi r0,#-1
The above is only possible in ARM mode: a single Thumb2 IT block encodes one condition and its inverse, and lo/eq/hi are three distinct conditions. However, such sequences are very rare and would only matter if you are porting assembly code from ARM to Thumb2. If you are selecting a compiler mode, Thumb2 should always produce better code (faster and smaller).
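For reference, the C idiom behind that sequence (my sketch): a three-way unsigned compare. In A32, one cmp plus the three conditional movs suffices; a Thumb2 compiler has to use IT blocks or branches instead:
/* Sketch: after "cmp a, b", the lo/eq/hi conditions select the result
 * in ARM mode, with no branches. */
int cmp3(unsigned a, unsigned b) {
    if (a < b)  return 1;    /* movlo r0, #1  */
    if (a == b) return 0;    /* moveq r0, #0  */
    return -1;               /* movhi r0, #-1 */
}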
Summary
Each mode has variations on available opcodes depending on CPU model. However, the general concepts of each mode and performance are as stated.

What is the ARM Thumb Instruction set?

under "The Thumb instruction set" in section 1-34 of "ARM11TechnicalRefManual" it said that:
"The Thumb instruction set is a subset of the most commonly used 32-bit ARM instructions.Thumb instructions are 16 bits long,and have a corresponding 32-bit ARM instruction that has the same effect on processor model."
Can anyone explain more about this, especially the second sentence, and say how the processor performs it?
The ARM processor has two instruction sets: the traditional ARM set, where the instructions are all 32 bits long, and the more condensed Thumb set, where the most common instructions are 16 bits long (and some are 32 bits long). Which instruction set to run can be chosen by the developer, and only one set can be active (i.e. once the processor is switched to Thumb mode, all instructions will be decoded as Thumb instead of ARM).
Although they are different instruction sets, they share similar functionality, and can be represented using the same assembly language. For example, the instruction
ADDS R0, R1, R2
can be compiled to ARM (E0910002 / 11100000 10010001 00000000 00000010) or Thumb (1888 / 00011000 10001000). Of course, they perform the same function (add r1 and r2 and store the result to r0), even though they have different encodings. This is the meaning of "Thumb instructions are 16 bits long, and have a corresponding 32-bit ARM instruction that has the same effect on the processor model".
Every* instruction in Thumb encoding also has a corresponding encoding in ARM, which is meant by the "subset" sentence.
*: Not strictly true; there is no "IT" instruction in ARM, although ARM doesn't need "IT" anyway (when assembling for ARM, it is ignored by the assembler).
