I have a C program with a function decod, and the function contains the following statements.
My decod.c source:
int decod(int x, int y, int z) {
    int ty = y;
    ty = ty - z;
    int py = ty;
    py = py << 31;
    py = py >> 31;
    ty = ty * x;
    py = py ^ ty;
}
The assembly code for this program (generated with gcc -S decod.c) contains the following.
movl %edi, -20(%rbp)
movl %esi, -24(%rbp)
movl %edx, -28(%rbp)
movl -24(%rbp), %eax
movl %eax, -8(%rbp)
movl -28(%rbp), %eax
subl %eax, -8(%rbp)
movl -8(%rbp), %eax
movl %eax, -4(%rbp)
sall $31, -4(%rbp)
sarl $31, -4(%rbp)
movl -8(%rbp), %eax
imull -20(%rbp), %eax
movl %eax, -8(%rbp)
movl -8(%rbp), %eax
xorl %eax, -4(%rbp)
popq %rbp
.cfi_def_cfa 7, 8
ret
But I want the compiler to generate an assembly file with only the following lines of code.
subl %edx, %esi
movl %esi, %eax
sall $31, %eax
sarl $31, %eax
imull %edi, %esi
xorl %esi, %eax
ret
I know I am pretty close to writing a program that will generate the code above, but I am clueless as to why my source produces different assembly code. Any direction would be helpful.
If you compile your function as is at optimization level 3 (-O3), the entire function is optimized out. This is because there is no return value, and py and ty are discarded anyway once the function returns.
For reference, the generated code is below.
.globl decod
.def decod; .scl 2; .type 32; .endef
.seh_proc decod
decod:
.seh_endprologue
ret
.seh_endproc
If, however, you add a return py; at the end, the generated code is as follows.
.globl decod
.def decod; .scl 2; .type 32; .endef
.seh_proc decod
decod:
.seh_endprologue
subl %r8d, %edx
movl %edx, %eax
imull %edx, %ecx
sall $31, %eax
sarl $31, %eax
xorl %ecx, %eax
ret
.seh_endproc
This is functionally identical to what you are expecting.
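For reference, the amended source looks like this (the right shift of a possibly negative py assumes the usual arithmetic-shift behavior for signed int, which gcc provides):

int decod(int x, int y, int z) {
    int ty = y - z;
    int py = ty;
    py = py << 31;    /* keep only the low bit of ty */
    py = py >> 31;    /* sign-extend it to 0 or -1   */
    ty = ty * x;
    py = py ^ ty;
    return py;        /* the result is now live, so it is not optimized out */
}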
Related
So I'm pretty much a noob in IA32 assembly language. I tried compiling this C function into IA32 assembly (with -mpreferred-stack-boundary=2):
__attribute__((cdecl))
int odd_sum(int a[], int n, int sum) {
    if (n == 0) return sum;
    else if ((a[n-1] % 2) == 0)
        return odd_sum(a, n-1, sum);
    else
        return odd_sum(a, n-1, sum + a[n-1]);
}
and the GCC outputs this:
.file "test.c"
.text
.globl _odd_sum
.def _odd_sum; .scl 2; .type 32; .endef
_odd_sum:
LFB0:
.cfi_startproc
pushl %ebp
.cfi_def_cfa_offset 8
.cfi_offset 5, -8
movl %esp, %ebp
.cfi_def_cfa_register 5
subl $12, %esp
cmpl $0, 12(%ebp)
jne L2
movl 16(%ebp), %eax
jmp L3
L2:
movl 12(%ebp), %eax
addl $1073741823, %eax
leal 0(,%eax,4), %edx
movl 8(%ebp), %eax
addl %edx, %eax
movl (%eax), %eax
andl $1, %eax
testl %eax, %eax
jne L4
movl 12(%ebp), %eax
leal -1(%eax), %edx
movl 16(%ebp), %eax
movl %eax, 8(%esp)
movl %edx, 4(%esp)
movl 8(%ebp), %eax
movl %eax, (%esp)
call _odd_sum
jmp L3
L4:
movl 12(%ebp), %eax
addl $1073741823, %eax
leal 0(,%eax,4), %edx
movl 8(%ebp), %eax
addl %edx, %eax
movl (%eax), %edx
movl 16(%ebp), %eax
addl %eax, %edx
movl 12(%ebp), %eax
subl $1, %eax
movl %edx, 8(%esp)
movl %eax, 4(%esp)
movl 8(%ebp), %eax
movl %eax, (%esp)
call _odd_sum
L3:
leave
.cfi_restore 5
.cfi_def_cfa 4, 4
ret
.cfi_endproc
LFE0:
.ident "GCC: (MinGW.org GCC-8.2.0-3) 8.2.0"
What I am not able to comprehend are these 2 lines:
addl $1073741823, %eax
leal 0(,%eax,4), %edx
I understand those two lines should have something to do with a[n-1], but I can't seem to understand what exactly they do. Can someone help me with this problem, please?
It is just a fancy way of computing the offset of the element a[n-1] within the array.
1073741823 is 0x3fffffff. If n is 3, for example, the addition yields 0x40000002. The second instruction then multiplies by 4, which gives 0x00000008 once the top bits are discarded: registers are 32 bits wide, so arithmetic wraps modulo 2^32, and since 4 * 0x3fffffff = 0xfffffffc, which is -4 modulo 2^32, the pair of instructions computes 4*(n-1).
So we are left with an offset of 8 bytes, which is exactly the offset (in bytes) that you need for a[n-1], i.e. a[2], when the size of an int is 4 bytes.
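If you want to convince yourself, here is a small stand-alone sketch (my own, not from the compiler output) that mimics the two instructions with 32-bit unsigned arithmetic:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t n = 3;
    uint32_t eax = n + 0x3fffffffu;   /* addl $1073741823, %eax */
    uint32_t edx = eax * 4;           /* leal 0(,%eax,4), %edx  */
    printf("%u\n", (unsigned)edx);    /* prints 8, i.e. 4*(n-1) */
    return 0;
}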
To get a more understandable output with the -S flag:
create assembler code:
c++ -S -fverbose-asm -g -O2 (other optimization flags) test.cc -o test.s
create asm interlaced with source lines:
as -alhnd test.s > test.lst
So I'm learning how to convert assembly into readable C code. The assembly is as follows...
Assume the compiler places the C variables a at -4(%rbp), b at -8(%rbp), and c at -12(%rbp).
.file "main.c"
.text
.globl main
.type main, @function
main:
endbr64
pushq %rbp
movq %rsp, %rbp
movl $10, -12(%rbp)
movl $20, -4(%rbp)
movl $1, -8(%rbp)
.L4:
cmpl $1, -12(%rbp)
je .L7
movl -8(%rbp), %eax
imull -12(%rbp), %eax
movl %eax, -8(%rbp)
subl $1, -12(%rbp)
jmp .L4
.L7:
nop
movl -4(%rbp), %eax
imull -8(%rbp), %eax
movl %eax, -4(%rbp)
movl $0, %eax
popq %rbp
ret
This is what I have so far.
int c = 10;
int a = 20;
int b = 1;
for (c = 10; c > 1; c--)
{
    int x = b;
    x = c * x;
    b = x;
}
Not completely sure how correct that is. The part that confuses me the most is the appearance (seemingly out of nowhere) of eax. When eax appears, should I just assume that it is some other random variable? (Hence the integer x I introduced.)
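For comparison, here is one possible reading that treats %eax purely as the compiler's scratch register and folds it back into the expressions (the variable-to-slot mapping is the one stated above; the final multiplication comes from the block after .L7):

int main(void) {
    int c = 10;           /* movl $10, -12(%rbp) */
    int a = 20;           /* movl $20, -4(%rbp)  */
    int b = 1;            /* movl $1, -8(%rbp)   */
    while (c != 1) {      /* cmpl $1, -12(%rbp); je .L7 */
        b = b * c;        /* imull goes through %eax    */
        c = c - 1;        /* subl $1, -12(%rbp)         */
    }
    a = a * b;            /* the block after .L7 */
    return 0;             /* movl $0, %eax       */
}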
I have generated two assembly files: one that is optimized, and one that is not. The assembly-language code generated with optimization on should be more efficient than the other. I am more interested in how that efficiency is achieved. To my understanding, in the non-optimized version there will always have to be an offset call to the register %rbp to find an address. In the optimized version, the values are kept in registers, so you don't have to rely and call on %rbp to find them.
Am I correct? And if so, would there ever be a time when the optimized version will not be advantageous? Thank you for your time.
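Something like the following should reproduce the two listings below (assuming clang, which the ## BB#0 markers and the .subsections_via_symbols directive suggest; the file name here is just a placeholder):

clang -O0 -S rgb2cmyk.c -o rgb2cmyk_noopt.s    # without optimization
clang -O3 -S rgb2cmyk.c -o rgb2cmyk_opt.s      # with optimization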
Here is a function that converts from RGB to CMYK.
void rgb2cmyk(int r, int g, int b, int ret[]) {
    int c = 255 - r;
    int m = 255 - g;
    int y = 255 - b;
    int k = (c < m) ? (c < y ? c : y) : (m < y ? m : y);
    c -= k; m -= k; y -= k;
    ret[0] = c; ret[1] = m; ret[2] = y; ret[3] = k;
}
Here is the assembly-language code that has not been optimized. Note that I have added my own annotations with ;; in the code.
No Opt:
.section __TEXT,__text,regular,pure_instructions
.globl _rgb2cmyk
.align 4, 0x90
_rgb2cmyk: ## #rgb2cmyk
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
;;initializing variable c, m, y
movl $255, %eax
movl %edi, -4(%rbp)
movl %esi, -8(%rbp)
movl %edx, -12(%rbp)
movq %rcx, -24(%rbp)
movl %eax, %edx
subl -4(%rbp), %edx
movl %edx, -28(%rbp)
movl %eax, %edx
subl -8(%rbp), %edx
movl %edx, -32(%rbp)
subl -12(%rbp), %eax
movl %eax, -36(%rbp)
movl -28(%rbp), %eax
;;compare
cmpl -32(%rbp), %eax
jge LBB0_5
## BB#1:
movl -28(%rbp), %eax
cmpl -36(%rbp), %eax
jge LBB0_3
## BB#2:
movl -28(%rbp), %eax
movl %eax, -44(%rbp) ## 4-byte Spill
jmp LBB0_4
LBB0_3:
movl -36(%rbp), %eax
movl %eax, -44(%rbp) ## 4-byte Spill
LBB0_4:
movl -44(%rbp), %eax ## 4-byte Reload
movl %eax, -48(%rbp) ## 4-byte Spill
jmp LBB0_9
LBB0_5:
movl -32(%rbp), %eax
cmpl -36(%rbp), %eax
jge LBB0_7
## BB#6:
movl -32(%rbp), %eax
movl %eax, -52(%rbp) ## 4-byte Spill
jmp LBB0_8
LBB0_7:
movl -36(%rbp), %eax
movl %eax, -52(%rbp) ## 4-byte Spill
LBB0_8:
movl -52(%rbp), %eax ## 4-byte Reload
movl %eax, -48(%rbp) ## 4-byte Spill
LBB0_9:
movl -48(%rbp), %eax ## 4-byte Reload
movl %eax, -40(%rbp)
movl -40(%rbp), %eax
movl -28(%rbp), %ecx
subl %eax, %ecx
movl %ecx, -28(%rbp)
movl -40(%rbp), %eax
movl -32(%rbp), %ecx
subl %eax, %ecx
movl %ecx, -32(%rbp)
movl -40(%rbp), %eax
movl -36(%rbp), %ecx
subl %eax, %ecx
movl %ecx, -36(%rbp)
movl -28(%rbp), %eax
movq -24(%rbp), %rdx
movl %eax, (%rdx)
movl -32(%rbp), %eax
movq -24(%rbp), %rdx
movl %eax, 4(%rdx)
movl -36(%rbp), %eax
movq -24(%rbp), %rdx
movl %eax, 8(%rdx)
movl -40(%rbp), %eax
movq -24(%rbp), %rdx
movl %eax, 12(%rdx)
popq %rbp
retq
.cfi_endproc
.subsections_via_symbols
Optimization:
.section __TEXT,__text,regular,pure_instructions
.globl _rgb2cmyk
.align 4, 0x90
_rgb2cmyk: ## #rgb2cmyk
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
movl $255, %r8d
movl $255, %eax
subl %edi, %eax
movl $255, %edi
subl %esi, %edi
subl %edx, %r8d
cmpl %edi, %eax ##;; compare m and c
jge LBB0_2
## BB#1: ;; c < m
cmpl %r8d, %eax ## compare y and c
movl %r8d, %edx
cmovlel %eax, %edx
jmp LBB0_3
LBB0_2: ##;; c >= m
cmpl %r8d, %edi ## compare y and m
movl %r8d, %edx
cmovlel %edi, %edx
LBB0_3:
subl %edx, %eax
subl %edx, %edi
subl %edx, %r8d
movl %eax, (%rcx)
movl %edi, 4(%rcx)
movl %r8d, 8(%rcx)
movl %edx, 12(%rcx)
popq %rbp
retq
.cfi_endproc
.subsections_via_symbols
Yes. The optimized version performs far fewer memory reads by keeping intermediate values in registers instead of reloading them from the stack over and over.
You are using "call" wrong. It is a technical term that means pushing a return address on the stack and branching to a new location for instructions. The term you want is simply "using the register".
Can you think of a reason that longer, slower code is "better"?
.file "calcnew.c"
.text
.globl calcnew
.type calcnew, @function
calcnew:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %edx
movl 12(%ebp), %ecx
leal 0(,%ecx,8), %eax
subl %ecx, %eax
leal (%edx,%edx,2), %edx
addl %edx, %eax
imull $14, 16(%ebp), %edx
addl %edx, %eax
popl %ebp
ret
.size calcnew, .-calcnew
.ident "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
.section .note.GNU-stack,"",@progbits
I want to remove all leal and imull instructions from this assembly code and replace their functionality with only sall, addl, and subl. Here is my attempt:
.file "calcnew.c"
.text
.globl calcnew
.type calcnew, @function
calcnew:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %edx
movl 12(%ebp), %ecx
sall $3, %ecx
movl %ecx, %eax
;leal 0(,%ecx,8), %eax
subl %ecx, %eax
movl %edx, %ecx
sall $1, %edx
addl %ecx, %edx
;leal (%edx,%edx,2), %edx
addl %edx, %eax
movl 16(%ebp), %edx
movl %edx, %ecx
sall $4, %edx
sall $1, %ecx
subl %ecx, %edx
;imull $14, 16(%ebp), %edx
addl %edx, %eax
popl %ebp
ret
.size calcnew, .-calcnew
.ident "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
.section .note.GNU-stack,"",@progbits
The output when compiling my C code with the new assembly file should be the same as when compiling with the original assembly file (which uses leal and imull). However, my output is wrong. What did I do wrong?
Here is the C code calling the assembler file:
#include <stdio.h>
int calcnew(int x, int y, int z);
int main()
{
    int x = 2;
    int y = 6;
    int z = 11;
    int result;
    result = calcnew(x,y,z);
    printf("x=%d, y=%d, z=%d, result=%d\n",x,y,z,result);
}
You replaced
leal 0(,%ecx,8), %eax
with
sall $3, %ecx
movl %ecx, %eax
But this clobbers the value in ecx, which is still live after those instructions: the subsequent subl %ecx, %eax now subtracts 8*y instead of y, leaving eax at 0 rather than 7*y. Instead:
movl %ecx, %eax
sall $3, %eax
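Putting that fix into the full routine gives something like the following; only the first replacement changes, and the comments are mine:

calcnew:
    pushl %ebp
    movl %esp, %ebp
    movl 8(%ebp), %edx      # edx = x
    movl 12(%ebp), %ecx     # ecx = y
    movl %ecx, %eax
    sall $3, %eax           # eax = 8*y (ecx still holds y)
    subl %ecx, %eax         # eax = 8*y - y = 7*y
    movl %edx, %ecx
    sall $1, %edx
    addl %ecx, %edx         # edx = 2*x + x = 3*x
    addl %edx, %eax         # eax = 3*x + 7*y
    movl 16(%ebp), %edx
    movl %edx, %ecx
    sall $4, %edx           # edx = 16*z
    sall $1, %ecx           # ecx = 2*z
    subl %ecx, %edx         # edx = 16*z - 2*z = 14*z
    addl %edx, %eax         # eax = 3*x + 7*y + 14*z
    popl %ebp
    ret

With x=2, y=6, z=11 from your driver program, both versions should print result=202 (3*2 + 7*6 + 14*11).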
I guess you have all heard of the 'swap problem'; SO is full of questions about it.
The version of the swap that does not use a third variable is often considered faster, since, well, you have one variable fewer. I wanted to know what was going on behind the curtains, so I wrote the following two programs:
int main () {
    int a = 9;
    int b = 5;
    int swap;
    swap = a;
    a = b;
    b = swap;
    return 0;
}
and the version without third variable:
int main () {
    int a = 9;
    int b = 5;
    a ^= b;
    b ^= a;
    a ^= b;
    return 0;
}
I generated the assembly code using clang and got this for the first version (that uses a third variable):
...
Ltmp0:
movq %rsp, %rbp
Ltmp1:
movl $0, %eax
movl $0, -4(%rbp)
movl $9, -8(%rbp)
movl $5, -12(%rbp)
movl -8(%rbp), %ecx
movl %ecx, -16(%rbp)
movl -12(%rbp), %ecx
movl %ecx, -8(%rbp)
movl -16(%rbp), %ecx
movl %ecx, -12(%rbp)
popq %rbp
ret
Leh_func_end0:
...
and this for the second version (that does not use a third variable):
...
Ltmp0:
movq %rsp, %rbp
Ltmp1:
movl $0, %eax
movl $0, -4(%rbp)
movl $9, -8(%rbp)
movl $5, -12(%rbp)
movl -12(%rbp), %ecx
movl -8(%rbp), %edx
xorl %ecx, %edx
movl %edx, -8(%rbp)
movl -8(%rbp), %ecx
movl -12(%rbp), %edx
xorl %ecx, %edx
movl %edx, -12(%rbp)
movl -12(%rbp), %ecx
movl -8(%rbp), %edx
xorl %ecx, %edx
movl %edx, -8(%rbp)
popq %rbp
ret
Leh_func_end0:
...
The second one is longer, but I don't know much about assembly code, so I have no idea whether that means it is slower. I'd like to hear the opinion of someone more knowledgeable.
Which of the above versions of a variable swap is faster and takes less memory?
Look at some optimised assembly. From
void swap_temp(int *restrict a, int *restrict b){
    int temp = *a;
    *a = *b;
    *b = temp;
}

void swap_xor(int *restrict a, int *restrict b){
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}
gcc -O3 -std=c99 -S -o swapping.s swapping.c produced
.file "swapping.c"
.text
.p2align 4,,15
.globl swap_temp
.type swap_temp, @function
swap_temp:
.LFB0:
.cfi_startproc
movl (%rdi), %eax
movl (%rsi), %edx
movl %edx, (%rdi)
movl %eax, (%rsi)
ret
.cfi_endproc
.LFE0:
.size swap_temp, .-swap_temp
.p2align 4,,15
.globl swap_xor
.type swap_xor, @function
swap_xor:
.LFB1:
.cfi_startproc
movl (%rsi), %edx
movl (%rdi), %eax
xorl %edx, %eax
xorl %eax, %edx
xorl %edx, %eax
movl %edx, (%rsi)
movl %eax, (%rdi)
ret
.cfi_endproc
.LFE1:
.size swap_xor, .-swap_xor
.ident "GCC: (SUSE Linux) 4.5.1 20101208 [gcc-4_5-branch revision 167585]"
.section .comment.SUSE.OPTs,"MS",@progbits,1
.string "Ospwg"
.section .note.GNU-stack,"",#progbits
To me, swap_temp looks as efficient as can be.
The problem with the XOR swap trick is that it is strictly sequential: each XOR depends on the result of the previous one, so the three instructions cannot overlap. It may seem deceptively fast, but in reality it is not. There is an instruction called XCHG that swaps two registers, but it can also be slower than simply using three MOVs (and with a memory operand XCHG is atomic, implying a LOCK, which makes it slower still). The common technique with a temp variable is an excellent choice ;)
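One more caveat, and the reason for the restrict qualifiers in the answer above: if both pointers refer to the same object, the XOR version destroys the value instead of swapping it. A minimal demonstration (the names are my own):

#include <stdio.h>

static void swap_xor_alias(int *a, int *b) {
    *a ^= *b;   /* if a == b, *a becomes 0 here...    */
    *b ^= *a;   /* ...and every later step keeps it 0 */
    *a ^= *b;
}

int main(void) {
    int x = 9;
    swap_xor_alias(&x, &x);
    printf("%d\n", x);   /* prints 0, not 9 */
    return 0;
}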
To get an idea of the cost, imagine that every instruction takes some cost to execute and that indirect addressing adds a cost of its own.
movl -12(%rbp), %ecx
This line needs something like one time unit for accessing the rbp register, another for applying the offset (-12), and a few more (say, arbitrarily, 3) for moving the value at the address -12(%rbp) into the ecx register.
If you count the operations in every line, the second method is certainly costlier than the first: the unoptimized XOR version above performs six loads, three XORs, and three stores, against three loads and three stores for the temp version.
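If you would rather measure than estimate, a rough micro-benchmark sketch is below (entirely my own; compile with -std=c99 or later, and note that timing noise and the optimizer make the numbers indicative at best, since at -O2 the compiler may rewrite both loops):

#include <stdio.h>
#include <time.h>

int main(void) {
    volatile int a = 9, b = 5;   /* volatile keeps the loads/stores alive */
    int tmp;
    clock_t t0 = clock();
    for (long i = 0; i < 100000000L; i++) {
        tmp = a; a = b; b = tmp; /* temp-variable swap */
    }
    clock_t t1 = clock();
    for (long i = 0; i < 100000000L; i++) {
        a ^= b; b ^= a; a ^= b;  /* XOR swap */
    }
    clock_t t2 = clock();
    printf("temp: %ld ticks, xor: %ld ticks\n",
           (long)(t1 - t0), (long)(t2 - t1));
    return 0;
}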