Dive into the assembly - C

Function in C:
PHPAPI char *php_pcre_replace(char *regex, int regex_len,
                              char *subject, int subject_len,
                              zval *replace_val, int is_callable_replace,
                              int *result_len, int limit, int *replace_count TSRMLS_DC)
{
    pcre_cache_entry *pce; /* Compiled regular expression */

    /* Compile regex or get it from cache. */
    if ((pce = pcre_get_compiled_regex_cache(regex, regex_len TSRMLS_CC)) == NULL) {
        return NULL;
    }
    ....
}
Its assembly:
php5ts!php_pcre_replace:
1015db70 8b442408 mov eax,dword ptr [esp+8]
1015db74 8b4c2404 mov ecx,dword ptr [esp+4]
1015db78 56 push esi
1015db79 8b74242c mov esi,dword ptr [esp+2Ch]
1015db7d 56 push esi
1015db7e 50 push eax
1015db7f 51 push ecx
1015db80 e8cbeaffff call php5ts!pcre_get_compiled_regex_cache (1015c650)
1015db85 83c40c add esp,0Ch
1015db88 85c0 test eax,eax
1015db8a 7502 jne php5ts!php_pcre_replace+0x1e (1015db8e)
php5ts!php_pcre_replace+0x1c:
1015db8c 5e pop esi
1015db8d c3 ret
The C function call pcre_get_compiled_regex_cache(regex, regex_len TSRMLS_CC) corresponds to 1015db7d~1015db80, which pushes the 3 parameters onto the stack and calls it.
But my doubt is: among so many registers, how does the compiler decide to use eax, ecx and esi (this one is special, as it's saved before being used; why?) as the intermediates to carry values onto the stack?
There must be some hidden indication in C that tells the compiler to do it this way, right?

No, there is no hidden indication.
This is a typical strategy for generating 80x86 instructions used by many compiler implementations, C and otherwise. For example, the 1980s Intel Fortran-77 compiler, when optimization was turned on, did the same thing.
That it uses eax and ecx preferentially is probably an artifact of avoiding esi and edi, since those registers cannot directly be used to load byte operands.
Why not ebx and edx? Well, those are preferred by many code generators for holding intermediate pointers when evaluating complex structure expressions, which is to say, there isn't much reason at all. The compiler just looked for two available registers and overwrote them to buffer the values.
Why not reuse eax like this?
push esi
mov eax,dword ptr [esp+2Ch] ; third parameter
push eax
mov eax,dword ptr [esp+10h] ; regex_len (its offset from esp grows with each push)
push eax
mov eax,dword ptr [esp+10h] ; regex
push eax
Because that causes pipeline stalls waiting for eax to complete previous memory cycles, in 80x86s since the 80586 (maybe 80486—it's too long ago to be sure off the top of my head).
The x86 architecture is a strange beast. Each register, though promoted as being "general purpose" by Intel, has its quirks (cx/ecx is tied to the loop instruction for example, and edx:eax is tied to the multiply instruction). That combined with the peculiar ways to optimize execution to avoid cache misses and pipeline stalls often leads to inscrutable generated code by a code generator which factors all that in.
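A small C-side illustration (my own, not from the answer): even the register keyword is only a hint, so nothing in the source can pin down which registers the compiler uses.
/* Nothing in the C source selects eax, ecx or esi.  'register' is a
   hint at best; modern compilers make their own allocation choices
   and largely ignore the keyword. */
int sum(register int a, register int b)
{
    return a + b;
}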

Related

Why is the first actual parameter printed as the output in C?

#include <stdio.h>

int add(int a, int b)
{
    if (a > b)
        return a * b;
}

int main(void)
{
    printf("%d", add(3, 7));
    return 0;
}
Output:
3
In the above code, I am calling the function inside the printf. In the function, the if condition is not true, so its body won't execute. Then why am I getting 3 as output? I tried changing the first parameter to some other value, and it prints that value too whenever the if condition is not satisfied.
What happens here is called undefined behaviour.
When (a <= b), you don't return any value (and your compiler probably told you so). But if you use the return value of the function anyway, even if the function doesn't return anything, that value is garbage. In your case it is 3, but with another compiler or with other compiler flags it could be something else.
If your compiler didn't warn you, add the corresponding compiler flags. If your compiler is gcc or clang, use the -Wall compiler flag.
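For reference, here is one way to remove the undefined behavior (a sketch; the value returned in the else case is my assumption about what was intended):
#include <stdio.h>

/* Every control path now returns a value, so the result of add()
   is always well defined. */
int add(int a, int b)
{
    if (a > b)
        return a * b;
    return 0; /* assumed fallback; choose whatever the else case should mean */
}

int main(void)
{
    printf("%d", add(3, 7));
    return 0;
}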
Jabberwocky is right: this is undefined behavior. You should turn your compiler warnings on and listen to them.
However, I think it can still be interesting to see what the compiler was thinking. And we have a tool to do just that: Godbolt Compiler Explorer.
We can plug your C program into Godbolt and see what assembly instructions it outputs. Here's the direct Godbolt link, and here's the assembly that it produces.
add:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], edi
mov DWORD PTR [rbp-8], esi
mov eax, DWORD PTR [rbp-4]
cmp eax, DWORD PTR [rbp-8]
jle .L2
mov eax, DWORD PTR [rbp-4]
imul eax, DWORD PTR [rbp-8]
jmp .L1
.L2:
.L1:
pop rbp
ret
.LC0:
.string "%d"
main:
push rbp
mov rbp, rsp
mov esi, 7
mov edi, 3
call add
mov esi, eax
mov edi, OFFSET FLAT:.LC0
mov eax, 0
call printf
mov eax, 0
pop rbp
ret
Again, to be perfectly clear, what you've done is undefined behavior. With different compiler flags or a different compiler version or even just a compiler that happens to feel like doing things differently on a particular day, you will get different behavior. What I'm studying here is the assembly output by gcc 12.2 on Godbolt with optimizations disabled, and I am not representing this as standard or well-defined behavior.
The generated code uses the System V AMD64 calling convention, common on Linux machines. In System V, the first two integer or pointer arguments are passed in the rdi and rsi registers, and integer values are returned in rax. Since everything we work with here is either an int or a char*, this is good enough for us. Note that the compiler seems to have been smart enough to figure out that it only needs edi, esi, and eax, the lower 32-bit halves of those registers, so I'll use edi, esi, and eax from this point on.
Our main function works fine. It does everything we'd expect. Our two function calls are here.
mov esi, 7
mov edi, 3
call add
mov esi, eax
mov edi, OFFSET FLAT:.LC0
mov eax, 0
call printf
To call add, we put 3 in the edi register and 7 in the esi register and then we make the call. We get the return value back from add in eax, and we move it to esi (since it will be the second argument to printf). We put the address of the static memory containing "%d" in edi (the first argument), and then we call printf. This is all normal. main knows that add was declared to return an integer, so it has the right to assume that, after calling add, there will be something useful in eax.
Now let's look at add.
add:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], edi
mov DWORD PTR [rbp-8], esi
mov eax, DWORD PTR [rbp-4]
cmp eax, DWORD PTR [rbp-8]
jle .L2
mov eax, DWORD PTR [rbp-4]
imul eax, DWORD PTR [rbp-8]
jmp .L1
.L2:
.L1:
pop rbp
ret
The rbp and rsp shenanigans are standard function call fare and aren't specific to add. First, we load our two arguments onto the call stack as local variables. Now here's where the undefined behavior comes in. Remember that I said eax is the return value of our function. Whatever happens to be in eax when the function returns is the returned value.
We want to compare a and b. To do that, we need a to be in a register (lots of assembly instructions require their left-hand argument to be a register, while the right-hand can be a register, reference, immediate, or just about anything). So we load a into eax. Then we compare the value in eax to the value b on the call stack. If a > b, then the jle does nothing. We go down to the next two lines, which are the inside of your if statement. They correctly set eax and return a value.
However, if a <= b, then the jle instruction jumps to the end of the function without doing anything else to eax. Since the last thing in eax happened to be a (because we happened to use eax as our comparison register in cmp), that's what gets returned from our function.
But this really is just random. It's what the compiler happened to have put in that register previously. If I turn optimizations up (with -O3), then gcc inlines the whole function call and ends up printing out 0 rather than a. I don't know exactly what sequence of optimizations led to this conclusion, but since they started out by hinging on undefined behavior, the compiler is free to make what assumptions it chooses.

Why are more x86 instructions faster than fewer? [duplicate]

This question already has answers here:
Why is the loop instruction slow? Couldn't Intel have implemented it efficiently?
How to load a single byte from address in assembly
Why doesn't GCC use partial registers?
How exactly do partial registers on Haswell/Skylake perform? Writing AL seems to have a false dependency on RAX, and AH is inconsistent
What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?
I've been reading up on what goes on inside x86 processors for about half a year now, so I decided to try my hand at x86 assembly for fun, starting only with 80386 instructions to keep it simple. (I'm trying to learn, mostly, not optimize.)
I have a game I made a few months ago, coded in C, so I went in and rewrote the bitmap blitting function from scratch in assembly. What I don't get is that the main pixel-plotting body of the loop is faster with the C code (which is 18 instructions) than with my assembly code (which is only 7 instructions, and I'm almost 100% certain it doesn't straddle cache line boundaries).
So my main question is: why do the 18 instructions take less time than the 7 instructions?
At the bottom I have the two code snippets.
P.S. Each color is 8-bit indexed.
C Code:
{
for (x = 0; x < src.w; x++)
00D35712 mov dword ptr [x],0 // Just initial loop setup
00D35719 jmp Renderer_DrawBitmap+174h (0D35724h) // Just initial loop setup
00D3571B mov eax,dword ptr [x]
00D3571E add eax,1
00D35721 mov dword ptr [x],eax
00D35724 mov eax,dword ptr [x]
00D35727 cmp eax,dword ptr [ebp-28h]
00D3572A jge Renderer_DrawBitmap+1BCh (0D3576Ch)
{
*dest_pixel = renderer_trans[renderer_light[*src_pixel][light]][*dest_pixel][trans];
// Start of what I consider the body
00D3572C mov eax,dword ptr [src_pixel]
00D3572F movzx ecx,byte ptr [eax]
00D35732 mov edx,dword ptr [light]
00D35735 movzx eax,byte ptr renderer_light (0EDA650h)[edx+ecx*8]
00D3573D shl eax,0Bh
00D35740 mov ecx,dword ptr [dest_pixel]
00D35743 movzx edx,byte ptr [ecx]
00D35746 lea eax,renderer_trans (0E5A650h)[eax+edx*8]
00D3574D mov ecx,dword ptr [dest_pixel]
00D35750 mov edx,dword ptr [trans]
00D35753 mov al,byte ptr [eax+edx]
00D35756 mov byte ptr [ecx],al
dest_pixel++;
00D35758 mov eax,dword ptr [dest_pixel]
00D3575B add eax,1
00D3575E mov dword ptr [dest_pixel],eax
src_pixel++;
00D35761 mov eax,dword ptr [src_pixel]
00D35764 add eax,1
00D35767 mov dword ptr [src_pixel],eax
// End of what I consider the body
}
00D3576A jmp Renderer_DrawBitmap+16Bh (0D3571Bh)
And the assembly code I wrote:
(esi is the source pixel, edi is the screen buffer, edx is the light level, ebx is the transparency level, and ecx is the width of this row)
drawing_loop:
00C55682 movzx ax,byte ptr [esi]
00C55686 mov ah,byte ptr renderer_light (0DFA650h)[edx+eax*8]
00C5568D mov al,byte ptr [edi]
00C5568F mov al,byte ptr renderer_trans (0D7A650h)[ebx+eax*8]
00C55696 mov byte ptr [edi],al
00C55698 inc esi
00C55699 inc edi
00C5569A loop drawing_loop (0C55682h)
// This isn't just the body; this is the full row-plotting loop, just like the code above
And for context, the pixel is lighted with a LUT and the transparency is done also with a LUT.
Pseudo C code:
//transparencyLUT[new][old][transparency level (0 = opaque, 7 = full transparency)]
//lightLUT[color][light level (0 = white, 3 = no change, 7 = full black)]
dest_pixel = transparencyLUT[lightLUT[source_pixel][light]]
[screen_pixel]
[transparency];
What gets me is that I use pretty much the same instructions the C code does, just fewer of them.
If you need more info I'll be happy to give it; I just don't want this to be a huge question. I'm genuinely curious because I'm fairly new to x86 assembly programming and want to learn more about how our CPUs actually work.
My only guess is that the out-of-order execution engine doesn't like my code because it's all memory accesses moving into the same register.
Not all instructions take the same time, and modern implementations of the CPU can execute (parts of) several instructions in parallel, as long as one doesn't read data written by a previous one and the required execution units don't collide. Recent designs go further and translate the "machine" instructions into lower-level, very simple operations that are scheduled on the fly and executed on the CPU's various units in parallel as much as possible, using a whole lot of shadow registers. For example, one instruction can still be using the value in one copy of %eax (the old value) while another instruction writes a new value into another copy of %eax (the new value), which decouples the instructions even more. The hoops they jump through for performance's sake...
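As a rough C-level picture of that dependency idea (a sketch of mine, not the poster's blitting code): independent operations can overlap, while a dependency chain cannot.
/* The three loads below have no mutual dependencies, so an
   out-of-order CPU can have all of them in flight at once. */
int independent(const int *a, const int *b, const int *c)
{
    int x = *a;
    int y = *b;
    int z = *c;
    return x + y + z;
}

/* Here every step needs the result of the previous one, so there is
   almost nothing for the CPU to overlap. */
int dependent(const int *a)
{
    int x = *a;
    int y = x * 3;
    int z = y * 3;
    return z;
}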

Do function parameter names have a place in memory in C?

I don't think function parameter names are treated like variables, and they don't get stored in memory. But I don't get how we can use these parameters in functions as variables if they don't have any place in memory. Can anyone explain what's going on with function parameters and whether or not they have a place in memory?
All variables are either allocated somewhere or optimized away in case the compiler found them unnecessary. Function parameters are variables and are almost certainly stored either in a CPU register or on the stack, if they are used by the program.
The only time when they might not get allocated is when the function is inlined - when the whole function call is optimized away and the function code is instead injected in the caller-side machine code. In such cases the original variables used by the caller might be used instead.
Function parameter names however are not stored anywhere in the final executable, just like any other identifier isn't stored there either. Names of variables, functions etc only exist in the source code, for the benefit of the programmer alone.
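As a small illustration (my own sketch, assuming no debug information is requested), two functions that differ only in their parameter names compile to the same machine code:
/* The names 'first'/'second' and 'x'/'y' exist only in the source;
   a compiler emits identical instructions for both functions. */
int add_a(int first, int second) { return first + second; }
int add_b(int x, int y)          { return x + y; }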
Although your title asks about “function parameter names,” it appears your question is about function parameters, which are different.
Commonly, arguments are passed to functions by putting them in processor registers or on the hardware stack. Each computing platform has some specification of which arguments should be passed where. For example, the first few small arguments (such as int values) may be passed in certain processor registers, while more or larger arguments may be put on the stack.
To the called function, these are parameters. The called function uses them from the processor registers or the stack.
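To make that concrete, here is a sketch of mine (not from the answer) using the System V AMD64 convention mentioned elsewhere on this page: the first six integer arguments travel in registers, anything beyond that goes on the stack.
/* Under the System V AMD64 calling convention, a1..a6 arrive in
   rdi, rsi, rdx, rcx, r8 and r9; a7 arrives on the stack. */
long pick(long a1, long a2, long a3, long a4, long a5, long a6, long a7)
{
    return a1 + a7; /* one value read from a register, one from the stack */
}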
I'm writing this answer assuming you know what the stack and CPU registers are. If you don't, I'd suggest you look them up before seeing this answer.
I don't think function parameter names are treated like variables and they don't get stored in memory.
At the assembly level, function parameter names don't really exist. As for the parameters themselves, where they live depends on the generated assembly, which in turn depends on the compiler's optimization level. Consider this simple function:
int foo(int a, int b)
{
    return a + b;
}
Using Compiler Explorer, I checked the generated disassembly of x64 GCC 10.2. On -O0, it looks like this:
foo:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], edi
mov DWORD PTR [rbp-8], esi
mov edx, DWORD PTR [rbp-4]
mov eax, DWORD PTR [rbp-8]
add eax, edx
pop rbp
ret
These two lines:
mov DWORD PTR [rbp-4], edi
mov DWORD PTR [rbp-8], esi
interestingly show that the arguments are passed in edi and esi for a and b respectively, and then stored on the stack, presumably in case the registers need to be used elsewhere in the function. The rest of the function uses that stack space rather than the registers:
mov edx, DWORD PTR [rbp-4]
mov eax, DWORD PTR [rbp-8]
add eax, edx
(In case you didn't know, eax/rax generally holds the return value of a function, and edx in this case just serves as a general-purpose register, so these three lines are basically edx = a; eax = b; eax += edx).
Ok, so that makes sense. The arguments are passed to registers and copied to the stack, where they are used for the rest of the function. What about -O1?
foo:
lea eax, [rdi+rsi]
ret
Now that is a lot shorter. Here, eax gets the value of rdi + rsi and the function ends. All the copying to the stack is completely skipped and the registers are used directly. So yes, in this case, the memory is never used.
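One caveat worth adding (my own sketch, not part of the original answer): as soon as you take a parameter's address, the compiler must give it a real memory location even at higher optimization levels, because a register has no address.
/* 'observe' is a hypothetical external function the optimizer cannot
   see through; passing &a forces the compiler to keep a genuine
   stack slot for a around the call. */
void observe(const int *p);

int foo2(int a, int b)
{
    observe(&a);
    return a + b;
}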
EDIT
After writing this answer, I went and checked the generated assembly with the -m32 option and noticed that arguments were always pushed to the stack before the function was called. Assembly generated from -O0 looks like this:
foo:
push ebp
mov ebp, esp
mov edx, DWORD PTR [ebp+8]
mov eax, DWORD PTR [ebp+12]
add eax, edx
pop ebp
ret
Here, since the arguments are passed on the stack before the function is called, they don't have to be copied from registers to the stack (because they're already there). So the function is shorter, and fewer registers are used. However, at higher optimization levels the function ends up longer than its 64-bit counterpart, because of this:
foo:
mov eax, DWORD PTR [esp+8]
add eax, DWORD PTR [esp+4]
ret
So with -m32 set, parameters are always placed in memory.

C Pointer to EFLAGS using NASM

For a task at my school, I need to write a C program which does a 16-bit addition using an assembly routine. Besides the result, the EFLAGS shall also be returned.
Here is my C program:
#include <stdio.h>

int add(int a, int b, unsigned short* flags); // function must be used this way

int main(void){
    unsigned short* flags = NULL;
    printf("%d\n", add(30000, 36000, flags)); // printing just the result
    return 0;
}
For the moment the program just prints out the result and not the flags, because I am unable to get them.
Here is the assembly program for which I use NASM:
global add
section .text
add:
push ebp
mov ebp,esp
mov ax,[ebp+8]
mov bx,[ebp+12]
add ax,bx
mov esp,ebp
pop ebp
ret
Now this all works smoothly. But I have no idea how to store the flags through the pointer, which must be at [ebp+16]. The professor said we will have to use the pushfd instruction.
My problem lies only in the assembly code. I will modify the C program to print the flags once I have a way to get them.
Normally you'd just use a debugger to look at flags, instead of writing all the code to get them into a C variable for a debug-print. Especially since decent debuggers decode the condition flags symbolically for you, instead of or as well as showing a hex value.
You don't have to know or care which bit in FLAGS is CF and which is ZF. (This information isn't relevant for writing real programs, either. I don't have it memorized, I just know which flags are tested by different conditions like jae or jl. Of course, it's good to understand that FLAGS are just data that you can copy around, save/restore, or even modify if you want)
Your function args and return value are int, which is 32-bit in the System V 32-bit x86 ABI you're using. (links to ABI docs in the x86 tag wiki). Writing a function that only looks at the low 16 bits of its input, and leaves high garbage in the high 16 bits of the output is a bug. The int return value in the prototype tells the compiler that all 32 bits of EAX are part of the return value.
As Michael points out, you seem to be saying that your assignment requires using a 16-bit ADD. That will produce carry, overflow, and other conditions with different inputs than if you looked at the full 32 bits. (BTW, this article explains carry vs. overflow very well.)
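To see why the operand size matters with the question's own inputs, here is a small C sketch of mine (not part of the original answer):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t a = 30000, b = 36000;
    uint16_t sum16 = (uint16_t)(a + b); /* 66000 wraps mod 65536 to 464; a 16-bit ADD sets CF here */
    uint32_t sum32 = (uint32_t)a + b;   /* 66000 fits in 32 bits; a 32-bit ADD would not set CF */
    printf("%u %u\n", (unsigned)sum16, (unsigned)sum32); /* prints: 464 66000 */
    return 0;
}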
Here's what I'd do. Note the 32-bit operand size for the ADD.
global add
section .text
add:
push ebp
mov ebp,esp ; stack frames are optional, you can address things relative to ESP
mov eax, [ebp+8] ; first arg: No need to avoid loading the full 32 bits; the next insn doesn't care about the high garbage.
add ax, [ebp+12] ; low 16 bits of second arg. Operand-size implied by AX
cwde ; sign-extend AX into EAX
mov ecx, [ebp+16] ; the pointer arg
pushf ; the simple straightforward way
pop edx
mov [ecx], dx ; Store the low 16 of what we popped. Writing word [ecx] is optional, because dx implies 16-bit operand-size
; be careful not to do a 32-bit store here, because that would write outside the caller's object.
; mov esp,ebp ; redundant: ESP is still pointing at the place we pushed EBP, since the push is balanced by an equal-size pop
pop ebp
ret
CWDE (the 16->32 form of the 8086 CBW instruction) is not to be confused with CWD (the AX -> DX:AX 8086 instruction). If you're not using AX, then MOVSX / MOVZX are a good way to do this.
The fun way: instead of using the default operand size and doing 32-bit push and pop, we can do a 16-bit pop directly into the destination memory address. That would leave the stack unbalanced, so we could either uncomment the mov esp, ebp again, or use a 16-bit pushf (with an operand-size prefix, which according to the docs makes it only push the low 16 FLAGS, not the 32-bit EFLAGS.)
; What I'd *really* do: maximum efficiency if I had to use the 32-bit ABI with args on the stack, instead of args in registers
global add
section .text
add:
mov eax, [esp+4] ; first arg, first thing above the return address
add ax, [esp+8] ; second arg
cwde ; sign-extend AX into EAX
mov ecx, [esp+12] ; the pointer
pushfw ; push the low 16 of FLAGS
pop word [ecx] ; pop into memory pointed to by unsigned short* flags
ret
Both PUSHFW and POP WORD will assemble with an operand-size prefix. Output from objdump -Mintel, which uses slightly different syntax from NASM:
4000c0: 66 9c pushfw
4000c2: 66 8f 01 pop WORD PTR [ecx]
PUSHFW is the same as o16 PUSHF. In NASM, o16 applies the operand-size prefix.
If you only needed the low 8 flags (not including OF), you could use LAHF to load FLAGS into AH and store that.
PUSHFing directly into the destination is not something I'd recommend. Temporarily pointing the stack at some random address is not safe in general. Programs with signal handlers will use the space below the stack asynchronously. This is why you have to reserve stack space before using it with sub esp, 32 or whatever, even if you're not going to make a function call that would overwrite it by pushing more stuff on the stack. The only exception is when you have a red-zone.
Your C caller:
You're passing a NULL pointer, so of course your asm function segfaults. Pass the address of a local to give the function somewhere to store to.
#include <stdio.h>

int add(int a, int b, unsigned short* flags);

int main(void) {
    unsigned short flags;
    int result = add(30000, 36000, &flags);
    printf("%d %#hx\n", result, flags);
    return 0;
}
This is just a simple approach. I didn't test it, but you should get the idea...
Just set ESP to the pointer value, increment it by 2 (even for 32-bit arch) and PUSHF like this:
global add
section .text
add:
push ebp
mov ebp,esp
mov ax,[ebp+8]
mov bx,[ebp+12]
add ax,bx
; --- here comes the mod
mov esp, [ebp+16] ; this will set ESP to the pointers address "unsigned short* flags"
lea esp, [esp+2] ; adjust ESP to address above target
db 66h ; operand-size prefix for 16-bit PUSHF (alternatively 'db 0x66', depending on your assembler)
pushf ; this will save the lower 16-bits of EFLAGS to WORD PTR [EBP+16] = [ESP+2-2]
; --- here ends the mod
mov esp,ebp
pop ebp
ret
This should work, because the 16-bit PUSHF decrements ESP by 2 and then stores the value to WORD PTR [ESP]. That is why ESP had to be raised by 2 above the target first. Setting ESP to the appropriate value lets you choose the exact target of PUSHF.

Why is the compiler generating a push/pop instruction pair?

I compiled the code below with the VC++ 2010 compiler:
#include <xmmintrin.h> // for _mm_getcsr()

__declspec(dllexport)
unsigned int __cdecl __mm_getcsr(void) { return _mm_getcsr(); }
and the generated code was:
push ECX
stmxcsr [ESP]
mov EAX, [ESP]
pop ECX
retn
Why is there a push ECX/pop ECX instruction pair?
The compiler is making room on the stack to store the MXCSR. It could have equally well done this:
sub esp,4
stmxcsr [ESP]
mov EAX, [ESP]
add esp,4
retn
But "push ecx" is probably shorter or faster.
The push here is used to allocate 4 bytes of temporary space. [ESP] would normally point to the pushed return address, which we cannot overwrite.
ECX will be overwritten here; however, ECX is probably a volatile (call-clobbered) register in the ABI you're targeting, so functions don't have to preserve it.
The reason a push/pop is used here is a space (and possibly speed) optimization.
It creates a top-of-stack entry that ESP now refers to, as the target for the stmxcsr instruction. Then the result is stored in EAX for the return.
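For a C-level picture of why the temporary is needed, here is a sketch using GCC-style inline assembly rather than the MSVC intrinsic from the question: stmxcsr can only write to memory, so some 4-byte slot has to exist before the value can be moved into EAX.
/* Rough equivalent of what _mm_getcsr() expands to: stmxcsr takes a
   32-bit memory operand, so the compiler needs a 4-byte temporary;
   reusing the slot created by "push ecx" is a compact way to get one. */
unsigned int my_getcsr(void)
{
    unsigned int tmp;                            /* the temporary slot        */
    __asm__ volatile ("stmxcsr %0" : "=m"(tmp)); /* store MXCSR to memory     */
    return tmp;                                  /* loaded into eax on return */
}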

Resources