Sorry, I was not specific with the problem. I am trying to use the intrinsic bit-parallelism of a system. A small part of the code is as follows:
int d;
char ch1;
char ch2;
cin>>ch1>>ch2;
if((d&1) == 0) {
//heavy computation
}
if(ch1 == ch2){
//heavy computation
}
The first if condition executes if the least significant bit of d is clear (i.e. (d & 1) == 0).
How many clock cycles do the two 'if' conditions require to execute?
Include the clock cycles required to convert the variable values into binary form.
On an i386 architecture with gcc, the assembly code produced for the above conditions would be:
for condition 1:
subl $16, %esp
movb $97, -2(%ebp)
movb $98, -1(%ebp)
movl -12(%ebp), %eax
andl $1, %eax
testl %eax, %eax
jne .L2
for condition 2:
movzbl -2(%ebp), %eax
cmpb -1(%ebp), %al
jne .L4
So, for simplicity, we consider the i386 to be a MIPS-style RISC core, and assume it follows the following table:
The number of clock cycles for the above statements would be 18.
Actually, when you compile with "gcc -S file.c", the assembly for the two conditions is not produced, because the compiler may optimize away the empty conditions (ineffective conditions, i.e. dead code). So include some useful statements inside the conditions and compile the code; then you will get the instructions stated above.
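For example, a minimal sketch of that idea (not the original code; the volatile sink variable is just an assumed way to create a visible side effect, and d is read from input so the test is meaningful):
#include <iostream>

volatile int sink;              // a volatile write cannot be optimized away

int main()
{
    int d;
    char ch1, ch2;
    std::cin >> d >> ch1 >> ch2;
    if ((d & 1) == 0) { sink = 1; }   // branch body now has a side effect
    if (ch1 == ch2)   { sink = 2; }   // so gcc -S keeps both comparisons
    return 0;
}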
With any good compiler, the if statements shown in this question would not consume any processor cycles in an executing program. This is because the compiler would recognize that neither of the if statements does anything, regardless of whether the condition is true or false, so they would be removed during optimization.
In general, optimization can dramatically transform a program. Even if the if statements had statements in their then-clauses, the compiler could determine at compile-time that ch1 does not equal ch2, so there is no need to perform the comparison during program execution.
Beyond that, if a condition is tested during program execution, there is often not a clear correlation between evaluating the test and how many processor cycles it takes. Modern processors are quite complicated, and a test and branch might be executed speculatively in advance while other instructions are also executing, so that the if statement does not cost the program any time at all. On the other hand, executing a branch might cause the processor to discard many instructions it had been preparing to execute and to reload new instructions from the new branch destination, thus costing the program many cycles.
In fact, both of these effects might occur for the same if statement in the same program. When the if statement is used in a loop with many executions, the processor may cache information about the branch decision and use that to speed up execution. At another time, when the if statement happens to be executed just once (because the loop conditions are different), the cached information may mislead the processor and cost cycles.
You can probably compile your complete code and disassemble it using GDB. Once it is disassembled, find out the number and type of instructions your statements took (Load: 5 cycles, Store: 4 cycles, Branch: 3 cycles, Jump: 3 cycles, etc.). The sum of those cycles is the number of clock cycles consumed. However, this depends on what processor you are on.
Looking at your question, I think you need to count the number of instructions executed for your statements and then add up the cycles for every instruction in your if/else.
Code:
if(x == 0)
{
x = 1;
}
x++;
This will consume following number of instructions
mov eax, $x
cmp eax, 0
jne end
mov eax, 1
end:
inc eax
mov $x, eax
so the first if statement will consume 2 CPU cycles.
Applying this to your particular code:
cin>>ch1>>ch2;
if((d&1) == 0) {
//heavy computation
}
if(ch1 == ch2){
//heavy computation
}
you need to get the instructions required for those two if operations, from which you can calculate the cycles.
Also, you need to add something inside the body of the if statements ( if(){body} ); otherwise modern compilers are intelligent enough to remove your code, considering it dead code.
It depends on your "IF".
Take the simplest case, where you want to compare two bytes: you probably need only 2 clock cycles for the instruction itself, i.e. 1111 0001, which means (1st) activating ALU-CMP and moving the data from R0 to TMP; (2nd) putting R1 onto the bus and setting the output to ACC.
Otherwise, you will need at least another 3 clocks for fetching, 1 clock for checking I/O interrupts, and 1 final clock to reset the instruction register.
Therefore, at the circuit level, you need only 7 clock cycles to execute an "IF" for 2 bytes. However, you would never write an "IF" just to compare two numbers (represented by two bytes), would you? 😅
Related
In another thread, I was told that a switch may be better than a lookup table in terms of speed and compactness.
So I'd like to understand the differences between this:
Lookup table
static void func1(){}
static void func2(){}
typedef enum
{
FUNC1,
FUNC2,
FUNC_COUNT
} state_e;
typedef void (*func_t)(void);
const func_t lookUpTable[FUNC_COUNT] =
{
[FUNC1] = &func1,
[FUNC2] = &func2
};
void fsm(state_e state)
{
if (state < FUNC_COUNT)
lookUpTable[state]();
else
;// Error handling
}
and this:
Switch
static void func1(){}
static void func2(){}
void fsm(int state)
{
switch(state)
{
case FUNC1: func1(); break;
case FUNC2: func2(); break;
default: ;// Error handling
}
}
I thought that a lookup table was faster since compilers try to transform switch statements into jump tables when possible.
Since this may be wrong, I'd like to know why!
Thanks for your help!
As I was the original author of the comment, I have to add a very important issue you did not mention in your question: the original discussion was about an embedded system. Presuming this is a typical bare-metal system with integrated Flash, there are very important differences from a PC, and I will concentrate on those.
Such embedded systems typically have the following constraints.
no CPU cache.
Flash requires waitstates for higher (i.e. >ca. 32MHz) CPU clocks. The actual ratio depends on the die design, low power/high speed process, operating voltage, etc.
To hide waitstates, Flash has wider read-lines than the CPU-bus.
This only works well for linear code with instruction prefetch.
Data accesses disturb instruction prefetch or are stalled until it has finished.
Flash might have an internal very small instruction cache.
If any at all, there is an even smaller data-cache.
The small caches result in more frequent thrashing (replacing a previous entry before it has been used another time).
For the STM32F4xx, for example, a read takes 6 clocks at 150MHz/3.3V for 128 bits (4 words). So if a data access is required, chances are good it adds more than 12 clocks of delay until all data is fetched (there are additional cycles involved).
Presuming compact state-codes, for the actual problem, this has the following effects on this architecture (Cortex-M4):
Lookup-table: Reading the function address is a data-access. With all implications mentioned above.
A switch, on the other hand, uses a special "table lookup" instruction which uses code-space data right behind the instruction. So the first entries are possibly already prefetched, and the other entries don't break the prefetch. Also, the access is a code access, so the data goes into the Flash's instruction cache.
Also note that the switch does not need functions, so the compiler can fully optimise the code; this is not possible for a lookup table. At the very least, code for function entry/exit is not required.
Due to the aforementioned and other factors, an estimate is hard to tell. It heavily depends on your platform and the code structure. But assuming the system given above, the switch is very likely faster (and clearer, btw.).
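As a rough illustration of the inlining point (hypothetical GPIO-style helpers, not from the original post): with a switch the case bodies are visible at the call site and can be flattened into straight-line code, whereas a table of function pointers forces real calls with entry/exit overhead.
static inline void led_on(void)  { /* set a GPIO pin here   */ }
static inline void led_off(void) { /* clear a GPIO pin here */ }

void fsm_inline_demo(int state)
{
    switch (state) {
    case 0: led_on();  break;   // body can be inlined: no call/return, no prologue
    case 1: led_off(); break;
    default: break;             // error handling
    }
}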
First, on some processors, indirect calls (e.g. through a pointer) - like those in your Lookup Table example - are costly (pipeline breakage, TLB, cache effects). It might also be true for indirect jumps...
Also, a good optimizing compiler might inline the call to func1() in your Switch example; then you don't pay for any prologue or epilogue of the inlined function.
You need to benchmark to be sure, since a lot of other factors matter on the performance. See also this (and the reference there).
Using a LUT of function pointers forces the compiler to use that strategy. It could in theory compile the switch version to essentially the same code as the LUT version (now that you've added out-of-bounds checks to both). In practice, that's not what gcc or clang choose to do, so it's worth looking at the asm output to see what happened.
(update: gcc -fpie (on by default on most modern Linux distros) likes to make tables of relative offsets, instead of absolute function pointers, so the rodata is position-independent, too. GCC Jump Table initialization code generating movsxd and add?. This could be a missed-optimization, see my answer there for links to gcc bug reports. Manually creating an array of function pointers could work around that.)
I put the code on the Godbolt compiler explorer with both functions in one compilation unit (with gcc and clang output), to see how it actually compiled. I expanded the functions a bit so it wasn't just two cases.
void fsm_switch(int state) {
switch(state) {
case FUNC0: func0(); break;
case FUNC1: func1(); break;
case FUNC2: func2(); break;
case FUNC3: func3(); break;
default: ;// Error handling
}
//prevent_tailcall();
}
void fsm_lut(state_e state) {
if (likely(state < FUNC_COUNT)) // without likely(), gcc puts the LUT on the taken side of this branch
lookUpTable[state]();
else
;// Error handling
//prevent_tailcall();
}
See also
How do the likely() and unlikely() macros in the Linux kernel work and what is their benefit?
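For context, the kernel-style macros assumed by the likely() in fsm_lut above are commonly defined as follows (shown so the snippet is self-contained; treat this as the usual form, not necessarily the exact one used here):
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)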
x86
On x86, clang makes its own LUT for the switch, but the entries are pointers to within the function, not the final function pointers. So for clang-3.7, the switch happens to compile to code that is strictly worse than the manually-implemented LUT. Either way, x86 CPUs tend to have branch prediction that can handle indirect calls / jumps, at least if they're easy to predict.
GCC uses a sequence of conditional branches (but unfortunately doesn't tail-call directly with conditional branches, which AFAICT is safe on x86). It checks 1, <1, 2, 3, in that order, with mostly not-taken branches until it finds a match.
They make essentially identical code for the LUT: bounds check, zero the upper 32-bit of the arg register with a mov, and then a memory-indirect jump with an indexed addressing mode.
ARM
gcc 4.8.2 with -mcpu=cortex-m4 -O2 makes interesting code.
As Olaf said, it makes an inline table of 1-byte entries. It doesn't jump directly to the target function, but instead to a normal jump instruction (like b func3). This is a normal unconditional jump, since it's a tail-call.
Each table destination entry needs significantly more code (Godbolt) if fsm_switch does anything after the call (like in this case a non-inline function call, if void prevent_tailcall(void); is declared but not defined), or if this is inlined into a larger function.
## With void prevent_tailcall(void){} defined so it can inline:
## Unlike in the godbolt link, this is doing tailcalls.
fsm_switch:
cmp r0, #3 # state,
bhi .L5 #
tbb [pc, r0] # state
## There's no section .rodata directive here: the table is in-line with the code, so there's no need for base pointer to be loaded into a reg. And apparently it's even loaded from I-cache, not D-cache
.byte (.L7-.L8)/2
.byte (.L9-.L8)/2
.byte (.L10-.L8)/2
.byte (.L11-.L8)/2
.L11:
b func3 # optimized tail-call
.L10:
b func2
.L9:
b func1
.L7:
b func0
.L5:
bx lr # This is ARM's equivalent of an x86 ret insn
IDK if there's much difference between how well branch prediction works for tbb vs. a full-on indirect jump or call (blx), on a lightweight ARM core. A data access to load the table might be more significant than the two-step jump to a branch instruction you get with a switch.
I've read that indirect branches are poorly predicted on ARM. I'd hope it's not bad if the indirect branch has the same target every time. But if not, I'd assume most ARM cores won't find even short patterns the way big x86 cores will.
Instruction fetch/decode takes longer on x86, so it's more important to avoid bubbles in the instruction stream. This is one reason why x86 CPUs have such good branch prediction. Modern branch predictors even do a good job with patterns for indirect branches, based on history of that branch and/or other branches leading up to it.
The LUT function has to spend a couple instructions loading the base address of the LUT into a register, but otherwise is pretty much like x86:
fsm_lut:
cmp r0, #3 # state,
bhi .L13 #,
movw r3, #:lower16:.LANCHOR0 # tmp112,
movt r3, #:upper16:.LANCHOR0 # tmp112,
ldr r3, [r3, r0, lsl #2] # tmp113, lookUpTable
bx r3 # indirect register sibling call # tmp113
.L13:
bx lr #
# in the .rodata section
lookUpTable:
.word func0
.word func1
.word func2
.word func3
See Mike of SST's answer for a similar analysis on a Microchip dsPIC.
msc's answer and the comments give you good hints as to why performance may not be what you expect. Benchmarking is the rule, but results will vary from one architecture to another, and may change with other versions of the compiler and of course its configuration and options selected.
Note however that your 2 pieces of code do not perform the same validation on state:
The switch will gracefully do nothing if state is not one of the defined values,
while the jump table version will invoke undefined behavior for all but the 2 values FUNC1 and FUNC2.
There is no generic way to initialize the jump table with dummy function pointers without making assumptions about FUNC_COUNT. To get the same behavior, the jump table version should look like this:
void fsm(int state) {
if (state >= 0 && state < FUNC_COUNT && lookUpTable[state] != NULL)
lookUpTable[state]();
}
Try benchmarking this and inspect the assembly code. Here is a handy online compiler for this: http://gcc.godbolt.org/#
On the Microchip dsPIC family of devices a look-up table is stored as a set of instruction addresses in the Flash itself. Performing the look-up involves reading the address from the Flash then calling the routine. Making the call adds another handful of cycles to push the instruction pointer and other bits and bobs (e.g. setting the stack frame) of housekeeping.
For example, on the dsPIC33E512MU810, using XC16 (v1.24) the look-up code:
lookUpTable[state]();
Compiles to (from the disassembly window in MPLAB-X):
! lookUpTable[state]();
0x2D20: MOV [W14], W4 ; get state from stack-frame (not counted)
0x2D22: ADD W4, W4, W5 ; 1 cycle (addresses are 16 bit aligned)
0x2D24: MOV #0xA238, W4 ; 1 cycle (get base address of look-up table)
0x2D26: ADD W5, W4, W4 ; 1 cycle (get address of entry in table)
0x2D28: MOV [W4], W4 ; 1 cycle (get address of the function)
0x2D2A: CALL W4 ; 2 cycles (push PC+2 set PC=W4)
... and each (empty, do-nothing) function compiles to:
!static void func1()
!{}
0x2D0A: LNK #0x0 ; 1 cycle (set up stack frame)
! Function body goes here
0x2D0C: ULNK ; 1 cycle (un-link frame pointer)
0x2D0E: RETURN ; 3 cycles
This is a total of 11 instruction cycles of overhead for any of the cases, and they all take the same. (Note: If either the table or the functions it contains are not in the same 32K program word Flash page, there will be an even greater overhead due to having to get the Address Generation Unit to read from the correct page, or to set up the PC to make a long call.)
On the other hand, providing that the whole switch statement fits within a certain size, the compiler will generate code that does a test and relative branch as two instructions per case taking three (or possibly four) cycles per case up to the one that's true.
For example, the switch statement:
switch(state)
{
case FUNC1: state++; break;
case FUNC2: state--; break;
default: break;
}
Compiles to:
! switch(state)
0x2D2C: MOV [W14], W4 ; get state from stack-frame (not counted)
0x2D2E: SUB W4, #0x0, [W15] ; 1 cycle (compare with first case)
0x2D30: BRA Z, 0x2D38 ; 1 cycle (if branch not taken, or 2 if it is)
0x2D32: SUB W4, #0x1, [W15] ; 1 cycle (compare with second case)
0x2D34: BRA Z, 0x2D3C ; 1 cycle (if branch not taken, or 2 if it is)
! {
! case FUNC1: state++; break;
0x2D38: INC [W14], [W14] ; To stop the switch being optimised out
0x2D3A: BRA 0x2D40 ; 2 cycles (go to end of switch)
! case FUNC2: state--; break;
0x2D3C: DEC [W14], [W14] ; To stop the switch being optimised out
0x2D3E: NOP ; compiler did a fall-through (for some reason)
! default: break;
0x2D36: BRA 0x2D40 ; 2 cycles (go to end of switch)
! }
This is an overhead of 5 cycles if the first case is taken, 7 if the second case is taken, etc., meaning they break even on the fourth case.
This means that knowing your data at design time will have a significant influence on the long-term speed. If you have a significant number of cases (more than about 4) and they all occur with similar frequency, then a look-up table will be quicker in the long run. If the frequency of the cases is significantly different (e.g. case 1 is more likely than case 2, which is more likely than case 3, etc.), then, if you order the switch with the most likely case first, the switch will be faster in the long run. For the edge case where you only have a few cases, the switch will (probably) be faster anyway for most executions, and it is more readable and less error prone.
If there are only a few cases in the switch, or some cases occur more often than others, then doing the test and branch of the switch will probably take fewer cycles than using a look-up table. On the other hand, if you have more than a handful of cases that occur with similar frequency, then the look-up will probably end up being faster on average.
Tip: Go with the switch unless you know the look-up will definitely be faster and the time it takes to run is important.
Edit: My switch example is a little unfair, as I've ignored the original question and in-lined the 'body' of the cases to highlight the real advantage of using a switch over a look-up. If the switch has to do the call as well then it only has the advantage for the first case!
To have even more compiler outputs, here is what is produced by the TI C28x compiler using @PeterCordes's sample code:
_fsm_switch:
CMPB AL,#0 ; [CPU_] |62|
BF $C$L3,EQ ; [CPU_] |62|
; branchcc occurs ; [] |62|
CMPB AL,#1 ; [CPU_] |62|
BF $C$L2,EQ ; [CPU_] |62|
; branchcc occurs ; [] |62|
CMPB AL,#2 ; [CPU_] |62|
BF $C$L1,EQ ; [CPU_] |62|
; branchcc occurs ; [] |62|
CMPB AL,#3 ; [CPU_] |62|
BF $C$L4,NEQ ; [CPU_] |62|
; branchcc occurs ; [] |62|
LCR #_func3 ; [CPU_] |66|
; call occurs [#_func3] ; [] |66|
B $C$L4,UNC ; [CPU_] |66|
; branch occurs ; [] |66|
$C$L1:
LCR #_func2 ; [CPU_] |65|
; call occurs [#_func2] ; [] |65|
B $C$L4,UNC ; [CPU_] |65|
; branch occurs ; [] |65|
$C$L2:
LCR #_func1 ; [CPU_] |64|
; call occurs [#_func1] ; [] |64|
B $C$L4,UNC ; [CPU_] |64|
; branch occurs ; [] |64|
$C$L3:
LCR #_func0 ; [CPU_] |63|
; call occurs [#_func0] ; [] |63|
$C$L4:
LCR #_prevent_tailcall ; [CPU_] |69|
; call occurs [#_prevent_tailcall] ; [] |69|
LRETR ; [CPU_]
; return occurs ; []
_fsm_lut:
;* AL assigned to _state
CMPB AL,#4 ; [CPU_] |84|
BF $C$L5,HIS ; [CPU_] |84|
; branchcc occurs ; [] |84|
CLRC SXM ; [CPU_]
MOVL XAR4,#_lookUpTable ; [CPU_U] |85|
MOV ACC,AL << 1 ; [CPU_] |85|
ADDL XAR4,ACC ; [CPU_] |85|
MOVL XAR7,*+XAR4[0] ; [CPU_] |85|
LCR *XAR7 ; [CPU_] |85|
; call occurs [XAR7] ; [] |85|
$C$L5:
LCR #_prevent_tailcall ; [CPU_] |88|
; call occurs [#_prevent_tailcall] ; [] |88|
LRETR ; [CPU_]
; return occurs ; []
I also used -O2 optimizations.
We can see that the switch is not converted into a jump table, even though the compiler has the ability to do so.
Naturally, I've assumed the < and <= operators run at the same speed (per Jonathon Reinhart's logic, here). Recently, I decided to test that assumption, and I was a little surprised by my results.
I know, for most modern hardware, this question is purely academic, so I had to write test programs that looped about 1 billion times (so that any minuscule difference adds up to a more measurable level). The programs were as basic as possible (to cut out all possible sources of interference).
lt.c:
int main() {
for (int i = 0; i < 1000000001; i++);
return 0;
}
le.c:
int main() {
for (int i = 0; i <= 1000000000; i++);
return 0;
}
They were compiled and run on a Linux VirtualBox 3.19.0-18-generic #18-Ubuntu x86_64 installation, using GCC with the -std=c11 flag set.
The average time for lt.c's binary was:
real 0m2.404s
user 0m2.389s
sys 0m0.000s
The average time for le.c was:
real 0m2.397s
user 0m2.384s
sys 0m0.000s
The difference is small, but I couldn't get it to go away or reverse no matter how many times I ran the binaries.
I made the comparison value in the for-loop of lt.c one larger than le.c (so they'd both loop the same number of times). Was this somehow a mistake?
According to the answer in Is < faster than <=?, < compiles to jge and <= compiles to jg. That was dealing with if statements rather than a for-loop, but could this still be the reason? Could the execution of jge take slightly longer than jg? (I think this would be ironic, since that would mean moving from C to ASM inverts which one is the more complicated instruction, with lt in C translating to gte in ASM and lte to gt.)
Or, is this just so hardware specific that different x86 lines or individual chips may consistently show the reverse trend, the same trend, or no difference?
There were a few requests in the comments on my question to include the assembly being generated for me by GCC. After getting the compiler to pop out the assembly versions of each file, I checked it.
Result:
It turns out the default optimization setting turned both for-loops into the same assembly. Both files were identical in assembly-form, actually. (diff confirmed this.)
Possible reason for the previously observed time difference:
It seems the order in which I ran the binaries was the cause for the run time difference.
On a given runthrough, the programs generally were executed quicker with each successive execution, before plateauing after about 3 executions.
I alternated back and forth between time ./lt and time ./le, so the one run first would have a bias towards extra time in its average.
I usually ran lt first.
I did several separate runthroughs (increasing the averaged bias).
Code excerpt:
movl $0, -4(%rbp)
jmp .L2
.L3:
addl $1, -4(%rbp)
.L2:
cmpl $1000000000, -4(%rbp)
jle .L3
movl $0, %eax
popq %rbp
... * covers face * ...carry on....
Let's speak in assembly (this depends on the architecture, of course).
When comparing, you'll use a cmp or test instruction, and then
- when you use <, the equivalent jump instruction is jl, which checks whether SF and OF are not the same (the sign and overflow flags);
- when you use <=, the equivalent jump instruction is jle, which checks whether SF != OF or ZF == 1 (the zero flag);
and so on; more here.
But honestly, it's not even a whole cycle of difference, so... I think the difference is unmeasurable under normal circumstances.
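A small sketch of the point (not from the original post): when the operands aren't compile-time constants, both comparisons compile to a cmp followed by a flag-based instruction, differing only in the condition used, so there is nothing extra to pay for <=.
// Typically compiles to: cmp edi, esi ; setl al    (for <)
// and:                   cmp edi, esi ; setle al   (for <=)
bool lt(int a, int b) { return a <  b; }
bool le(int a, int b) { return a <= b; }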
This question is more out of curiosity than necessity:
Is it possible to rewrite the c code if ( !boolvar ) { ... in a way so it is compiled to 1 cpu instruction?
I've tried thinking about this on a theoretical level and this is what I've come up with:
if ( !boolvar ) { ...
would need to first negate the variable and then branch depending on that -> 2 instructions (negate + branch)
if ( boolvar == false ) { ...
would need to load the value of false into a register and then branch depending on that -> 2 instructions (load + branch)
if ( boolvar != true ) { ...
would need to load the value of true into a register and then branch ("branch-if-not-equal") depending on that -> 2 instructions (load + "branch-if-not-equal")
Am I wrong with my assumptions? Is there something I'm overlooking?
I know I can produce intermediate asm versions of programs, but I wouldn't know how to use this in a way so I can on one hand turn on compiler optimization and at the same time not have an empty if statement optimized away (or have the if statement optimized together with its content, giving some non-generic answer)
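(A sketch, not from the original question: one common way to keep such a branch alive under optimization is to put a call to a function the compiler cannot see into the body, e.g. a hypothetical opaque() defined in another translation unit.)
extern void opaque(void);   // hypothetical helper defined elsewhere

void probe(bool boolvar)
{
    if (!boolvar)
        opaque();           // the compiler must keep both the test and the call
}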
P.S.: Of course I also searched google and SO for this, but with such short search terms I couldn't really find anything useful
P.P.S.: I'd be fine with a semantically equivalent version which is not syntactical equivalent, e.g. not using if.
Edit: feel free to correct me if my assumptions about the emitted asm instructions are wrong.
Edit2: I've actually learned asm about 15yrs ago, and relearned it about 5yrs ago for the alpha architecture, but I hope my question is still clear enough to figure out what I'm asking. Also, you're free to assume any kind of processor extension common in consumer cpus up to AVX2 (current haswell cpu as of the time of writing this) if it helps in finding a good answer.
At the end of my post it will say why you should not aim for this behaviour (on x86).
As Jerry Coffin has written, most jumps in x86 depend on the flags register.
There is one exception though: The j*cxz set of instructions which jump if the ecx/rcx register is zero. To achieve this you need to make sure that your boolvar uses the ecx register. You can achieve that by specifically assigning it to that register
register int boolvar asm ("ecx");
But far from all compilers use the j*cxz set of instructions. There is a flag for icc to make it do that, but it is generally not advisable. The Intel manual states that the two instructions
test ecx, ecx
jz ...
are faster on the processor.
The reason for this is that x86 is a CISC (complex instruction set) architecture. In the actual hardware, though, the processor will split up complex instructions that appear as one instruction in the asm into multiple micro-instructions, which are then executed in a RISC style. This is the reason why not all instructions require the same execution time, and sometimes multiple small ones are faster than one big one.
test and jz are single micro-instructions, but jecxz will be decomposed into those two anyway.
The only reason why the j*cxz set of instructions exist is if you want to make a conditional jump without modifying the flags register.
Yes, it's possible -- but doing so will depend on the context in which this code takes place.
Conditional branches in an x86 depend upon the values in the flags register. For this to compile down to a single instruction, some other code will already need to set the correct flag, so all that's left is a single instruction like jnz wherever.
For example:
boolvar = x == y;
if (!boolvar) {
do_something();
}
...could end up rendered as something like:
mov eax, x
cmp eax, y ; `boolvar = x == y;`
jz @f
call do_something
@@:
Depending on your viewpoint, it could even compile down to only part of an instruction. For example, quite a few instructions can be "predicated", so they're executed only if some previously defined condition is true. In this case, you might have one instruction for setting "boolvar" to the correct value, followed by one to conditionally call a function, so there's no one (complete) instruction that corresponds to the if statement itself.
Although you're unlikely to see it in decently written C, a single assembly language instruction could include even more than that. For an obvious example, consider something like:
x = 10;
looptop:
-- x;
boolvar = x == 0;
if (!boolvar)
goto looptop;
This entire sequence could be compiled down to something like:
mov ecx, 10
looptop:
loop looptop
Am I wrong with my assumptions
You are wrong with several assumptions. First, you should know that 1 instruction is not necessarily faster than multiple ones. For example, on newer μarchs test can macro-fuse with jcc, so 2 instructions will run as one. Or a division is so slow that, in the same time, tens or hundreds of simpler instructions may already have finished. Compiling the if block to a single instruction isn't worth it if it's slower than multiple instructions.
Besides, if ( !boolvar ) { ... doesn't need to first negate the variable and then branch depending on that. Most jumps in x86 are based on flags, and they come in both the "yes" and "no" conditions, so there is no need to negate the value; we can simply jump on non-zero instead of jumping on zero.
Similarly, if ( boolvar == false ) { ... doesn't need to load the value of false into a register and then branch depending on that. false is a constant equal to 0, which can be embedded as an immediate in the instruction (like cmp reg, 0). But for checking against zero, a simple test reg, reg is enough; then jnz or jz will be used to jump on non-zero/zero, and it fuses with the preceding test instruction into one.
It's possible to make an if header or body that compiles to a single instruction, but it depends entirely on what you need to do and what condition is used. The flags for boolvar may already be available from the previous statement, in which case the if block on the next line can use them to jump directly, like what you see in Jerry Coffin's answer.
Moreover, x86 has conditional moves, so if the body of the if is a simple assignment then it may be done in one instruction. Below is an example and its output:
int f(bool condition, int x, int y)
{
int ret = x;
if (!condition)
ret = y;
return ret;
}
f(bool, int, int):
test dil, dil ; if(!condition)
mov eax, edx ; ret = y
cmovne eax, esi ; if(condition) ret = x
ret
In some other cases you don't even need a conditional move or jump. For example,
bool f(bool condition)
{
bool ret = false;
if (!condition)
ret = true;
return ret;
}
compiles to a single xor without any jump at all
f(bool):
mov eax, edi
xor eax, 1
ret
The ARM architecture (v7 and below) can run almost any instruction conditionally, so that may translate to only one instruction.
For example the following loop
while (i != j)
{
if (i > j)
{
i -= j;
}
else
{
j -= i;
}
}
can be translated to ARM assembly as
loop: CMP Ri, Rj ; set condition "NE" if (i != j),
; "GT" if (i > j),
; or "LT" if (i < j)
SUBGT Ri, Ri, Rj ; if "GT" (Greater Than), i = i-j;
SUBLT Rj, Rj, Ri ; if "LT" (Less Than), j = j-i;
BNE loop ; if "NE" (Not Equal), then loop
So, just for fun, and out of curiosity, I wanted to see what executes faster when doing an even-odd check, modulus or bitwise comparisons.
So, I whipped up the following, but I'm not sure that it's behaving correctly, as the difference is so small. I read somewhere online that bitwise should be an order of magnitude faster than modulus checking.
Is it possible that it's getting optimized away? I've just started tinkering with assembly, otherwise I'd attempt to dissect the executable a bit.
EDIT 3: Here is a working test, thanks in large part to @phonetagger:
#include <stdio.h>
#include <time.h>
#include <stdint.h>
// to reset the global
static const int SEED = 0x2A;
// 5B iterations, each
static const int64_t LOOPS = 5000000000;
int64_t globalVar;
// gotta call something
int64_t doSomething( int64_t input )
{
return 1 + input;
}
int main(int argc, char *argv[])
{
globalVar = SEED;
// mod
clock_t startMod = clock();
for( int64_t i=0; i<LOOPS; ++i )
{
if( ( i % globalVar ) == 0 )
{
globalVar = doSomething(globalVar);
}
}
clock_t endMod = clock();
double modTime = (double)(endMod - startMod) / CLOCKS_PER_SEC;
globalVar = SEED;
// bit
clock_t startBit = clock();
for( int64_t j=0; j<LOOPS; ++j )
{
if( ( j & globalVar ) == 0 )
{
globalVar = doSomething(globalVar);
}
}
clock_t endBit = clock();
double bitTime = (double)(endBit - startBit) / CLOCKS_PER_SEC;
printf("Mod: %lf\n", modTime);
printf("Bit: %lf\n", bitTime);
printf("Dif: %lf\n", ( modTime > bitTime ? modTime-bitTime : bitTime-modTime ));
}
Five billion iterations of each loop, with a global variable to defeat compiler optimization, yields the following:
Mod: 93.099101
Bit: 16.701401
Dif: 76.397700
gcc foo.c -std=c99 -S -O0 (note, I specifically did -O0) for x86 gave me the same assembly for both loops. Operator strength reduction meant that both ifs used an andl to get the job done (which is faster than a modulo on Intel machines):
First Loop:
.L6:
movl 72(%esp), %eax
andl $1, %eax
testl %eax, %eax
jne .L5
call doNothing
.L5:
addl $1, 72(%esp)
.L4:
movl LOOPS, %eax
cmpl %eax, 72(%esp)
jl .L6
Second Loop:
.L9:
movl 76(%esp), %eax
andl $1, %eax
testl %eax, %eax
jne .L8
call doNothing
.L8:
addl $1, 76(%esp)
.L7:
movl LOOPS, %eax
cmpl %eax, 76(%esp)
jl .L9
The minuscule difference you see is probably because of the resolution/inaccuracy of clock.
Most compilers will compile both of the following to EXACTLY the same machine instruction(s):
if( ( i % 2 ) == 0 )
if( ( i & 1 ) == 0 )
...even without ANY "optimization" turned on. The reason is that you are MOD-ing and AND-ing with constant values, and a %2 operation is, as any compiler writer should know, functionally equivalent to an &1 operation. In fact, a MOD by any power of 2 has an equivalent AND operation. If you really want to test the difference, you'll need to make the right-hand side of both operations a variable, and, to be absolutely sure the compiler's cleverness isn't thwarting your efforts, you'll need to bury the variables' initializations somewhere the compiler can't see, so that it can't tell at that point what their runtime values will be. That means passing the values into a GLOBALLY-DECLARED (i.e. not 'static') test function as parameters; then the compiler can't trace back to their definition and substitute the variables with constants, because theoretically any external caller could pass any values in for those parameters. Alternatively, you could leave the code in main() and define the variables globally, in which case the compiler can't substitute them with constants because it can't know for sure that another function might have altered the value of the global variables.
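A sketch of that kind of test (hypothetical function names; the point is only that the right-hand sides arrive as parameters of an externally visible function, so the compiler cannot fold them into constants):
// The % really has to divide at run time, while the & is a single AND.
int even_by_mod(unsigned x, unsigned divisor) { return (x % divisor) == 0; }
int even_by_and(unsigned x, unsigned mask)    { return (x & mask)    == 0; }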
Incidentally, this same issue exists for divide operations: divisions by constant powers of two can be substituted with an equivalent right-shift (>>) operation. The same trick works for multiplication (<<), but the benefits are smaller (or nonexistent) for multiplications. True division operations just take a long time in hardware; though significant improvements have been made in most modern processors versus even 15 years ago, division operations still take maybe 80 clock cycles, while a >> operation takes only a single cycle. You're not going to see an "order of magnitude" improvement using bitwise tricks on modern processors, but most compilers will still use those tricks because there is still some noticeable improvement.
EDIT: On some embedded processors (and, unbelievable though it was, the original Sparc desktop/workstation processor versions before v8), there isn't even a divide instruction at all. All true divide & mod operations on such processors must be performed entirely in software, which can be a monstrously expensive operation. In that sort of environment, you surely would see an order of magnitude difference.
Bitwise checking takes only a single machine instruction ("and ...,0x01"); that's pretty hard to beat.
Modulo check will absolutely be slower if you have a dumb compiler that actually computes modulo by taking remainders (sometimes including a subroutine call to modulo routine!). Smart compilers know about the modulo function and generate code for it directly; if they have any decent optimization they know that "modulo(x,2)" can be implemented with the same AND trick above.
Our PARLANSE compiler does this as a matter of course. I'd be surprised if widely available C and C++ compilers don't do this too.
With such "good" compilers, it won't matter which way you write odd/even (or even "is power of two") checks; it will be pretty damn fast.
I have this code (my strlen function)
size_t slen(const char *str)
{
size_t len = 0;
while (*str)
{
len++;
str++;
}
return len;
}
When doing while (*str++), as shown below, the program's execution time is much larger:
while (*str++)
{
len++;
}
I'm using the following to exercise the code:
int main()
{
double i = 11002110;
const char str[] = "long string here blablablablablablablabla";
while (i--)
slen(str);
return 0;
}
In first case the execution time is around 6.7 seconds, while in the second (using *str++), the time is around 10 seconds!
Why so much difference?
Probably because the post-increment operator (used in the condition of the while statement) involves keeping a temporary copy of the variable with its old value.
What while (*str++) really means is:
while (tmp = *str, ++str, tmp)
...
By contrast, when you write str++; as a single statement in the body of the while loop, it is in a void context, hence the old value isn't fetched because it's not needed.
To summarise, in the *str++ case you have an assignment, 2 increments, and a jump in each iteration of the loop. In the other case you only have 2 increments and a jump.
Trying this out on ideone.com, I get about 0.5s execution with *str++ here. Without, it takes just over a second (here). Using *str++ was faster. Perhaps with optimisation on *str++ can be done more efficiently.
This depends on your compiler, compiler flags, and your architecture. With Apple's LLVM gcc 4.2.1, I don't get a noticeable change in performance between the two versions, and there really shouldn't be. A good compiler would turn the *str version into something like
IA-32 (AT&T Syntax):
slen:
pushl %ebp # Save old frame pointer
movl %esp, %ebp # Initialize new frame pointer
movl 8(%ebp), %ecx # Load str (first argument) into %ecx
xorl %eax, %eax # Zero out %eax to hold len
loop:
cmpb $0, (%ecx) # Compare *str to 0
je done # If *str is NUL, finish
incl %eax # len++
incl %ecx # str++
jmp loop # Goto next iteration
done:
popl %ebp # Restore old frame pointer
ret # Return
The *str++ version could be compiled exactly the same (since changes to str aren't visible outside slen, when the increment actually occurs isn't important), or the body of the loop could be:
loop:
incl %ecx # str++
cmpb $0, -1(%ecx) # Compare *str to 0
je done # If *str is NUL, finish
incl %eax # len++
jmp loop # Goto next iteration
Others have already provided some excellent commentary, including analysis of the generated assembly code. I strongly recommend that you read them carefully. As they have pointed out, this sort of question can't really be answered without some quantification, so let's play with it a bit.
First, we're going to need a program. Our plan is this: we will generate strings whose lengths are powers of two, and try all functions in turn. We run through once to prime the cache and then separately time 4096 iterations using the highest-resolution available to us. Once we are done, we will calculate some basic statistics: min, max and the simple-moving average and dump it. We can then do some rudimentary analysis.
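A hedged sketch of such a harness (illustrative only, not the program actually used; the 4096-iteration count and the min/max/average statistics follow the description above, and the names are assumptions):
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <string>

// Time 4096 calls of a length function f on string s and print
// min / max / simple average, in nanoseconds.
template <typename F>
void bench(const char *name, F f, const std::string &s)
{
    using clock = std::chrono::steady_clock;
    volatile std::size_t sink = 0;        // keeps the calls from being optimized out
    sink = f(s.c_str());                  // one warm-up pass to prime the cache
    double min = 1e300, max = 0.0, sum = 0.0;
    for (int i = 0; i < 4096; ++i) {
        auto t0 = clock::now();
        sink = f(s.c_str());
        auto t1 = clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        if (ns < min) min = ns;
        if (ns > max) max = ns;
        sum += ns;
    }
    std::printf("%-12s len=%zu min=%.0f max=%.0f avg=%.0f ns\n",
                name, s.size(), min, max, sum / 4096.0);
}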
In addition to the two algorithms you've already shown, I will show a third option which doesn't involve the use of a counter at all, relying instead on a subtraction, and I'll mix things up by throwing in std::strlen, just to see what happens. It'll be an interesting throwdown.
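The subtraction-based variant looks roughly like this (a sketch of the third option described above):
#include <cstddef>

// No counter: advance a second pointer to the terminator and subtract.
std::size_t slen3(const char *str)
{
    const char *s = str;
    while (*s)
        ++s;
    return static_cast<std::size_t>(s - str);
}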
Through the magic of television our little program is already written, so we compile it with gcc -std=c++11 -O3 speed.c and we get cranking producing some data. I've done two separate graphs, one for strings whose size is from 32 to 8192 bytes and another for strings whose size is from 16384 all the way to 1048576 bytes long. In the following graphs, the Y axis is the time consumed in nanoseconds and the X axis shows the length of the string in bytes.
Without further ado, let's look at performance for "small" strings from 32 to 8192 bytes:
Now this is interesting. Not only is the std::strlen function outperforming everything across the board, it's doing it with gusto too, since its performance is a lot more stable.
Will the situation change if we look at larger strings, from 16384 all the way to 1048576 bytes long?
Sort of. The difference is becoming even more pronounced. As our custom-written functions huff-and-puff, std::strlen continues to perform admirably.
An interesting observation to make is that you can't necessarily translate the number of C++ statements (or even the number of assembly instructions) into performance, since functions whose bodies consist of fewer instructions sometimes take longer to execute.
An even more interesting, and more important, observation is just how well the std::strlen function performs.
So what does all this get us?
First conclusion: don't reinvent the wheel. Use the standard functions available to you. Not only are they already written, but they are very very heavily optimized and will almost certainly outperform anything you can write unless you're Agner Fog.
Second conclusion: unless you have hard data from a profiler showing that a particular section of code or function is a hot-spot in your application, don't bother optimizing code. Programmers are notoriously bad at detecting hot-spots by looking at high-level functions.
Third conclusion: prefer algorithmic optimizations in order to improve your code's performance. Put your mind to work and let the compiler shuffle bits around.
Your original question was: "why is function slen2 slower than slen1?" I could say that it isn't easy to answer without a lot more information, and even then it might be a lot longer and more involved than you care for. Instead what I'll say is this:
Who cares why? Why are you even bothering with this? Use std::strlen - which is better than anything that you can rig up - and move on to solving more important problems - because I'm sure that this isn't the biggest problem in your application.