COMISD not comparing properly [duplicate] - c

As part of a compiler project I have to write GNU assembler code for x86 to compare floating point values. I have tried to find resources on how to do this online and from what I understand it works like this:
Assuming the two values I want to compare are the only values on the floating point stack, then the fcomi instruction will compare the values and set the CPU-flags so that the je, jne, jl, ... instructions can be used.
I'm asking because this only works sometimes. For example:
.section .data
msg: .ascii "Hallo\n\0"
f1: .float 10.0
f2: .float 9.0
.globl main
.type main, #function
main:
flds f1
flds f2
fcomi
jg leb
pushl $msg
call printf
addl $4, %esp
leb:
pushl $0
call exit
will not print "Hallo" even though I think it should, and if you swap f1 and f2 it still won't, which is a logical contradiction. je and jne, however, seem to work fine.
What am I doing wrong?
PS: does the fcomip pop only one value or does it pop both?

TL:DR: Use above / below conditions (like for unsigned integer) to test the result of compares.
For various historical reasons (mapping from FP status word to FLAGS via fcom / fstsw / sahf which fcomi (new in PPro) matches), FP compares set CF, not OF / SF. See also http://www.ray.masmcode.com/tutorial/fpuchap7.htm
Modern SSE/SSE2 scalar compares into FLAGS follow this as well, with [u]comiss / sd. (Unlike SIMD compares, which have a predicate as part of the instruction, as an immediate, since they only produce a single all-zeros / all-ones result for each element, not a set of FLAGS.)
This is all coming from Volume 2 of Intel 64 and IA-32 Architectures Software Developer's Manuals.
FCOMI sets only some of the flags that CMP does. Your code has %st(0) == 9 and %st(1) == 10 (since it's a stack they're loaded onto). Referring to the table on page 3-348 in Volume 2A, you can see that this is the case "ST0 < ST(i)", so it will clear ZF and PF and set CF. Meanwhile, on pg. 3-544 of Vol. 2A you can read that JG means "Jump short if greater (ZF=0 and SF=OF)". In other words it's testing the sign, overflow and zero flags, but FCOMI doesn't set sign or overflow!
Depending on which conditions you wish to jump, you should look at the possible comparison results and decide when you want to jump.
+--------------------+---+---+---+
| Comparison results | Z | P | C |
+--------------------+---+---+---+
| ST0 > ST(i) | 0 | 0 | 0 |
| ST0 < ST(i) | 0 | 0 | 1 |
| ST0 = ST(i) | 1 | 0 | 0 |
| unordered | 1 | 1 | 1 | one or both operands were NaN.
+--------------------+---+---+---+
I've made this small table to make it easier to figure out:
+--------------+---+---+-----+------------------------------------+
| Test | Z | C | Jcc | Notes |
+--------------+---+---+-----+------------------------------------+
| ST0 < ST(i) | X | 1 | JB | ZF will never be set when CF = 1 |
| ST0 <= ST(i) | 1 | 1 | JBE | Either ZF or CF is ok |
| ST0 == ST(i) | 1 | X | JE | CF will never be set in this case |
| ST0 != ST(i) | 0 | X | JNE | |
| ST0 >= ST(i) | X | 0 | JAE | As long as CF is clear we are good |
| ST0 > ST(i) | 0 | 0 | JA | Both CF and ZF must be clear |
+--------------+---+---+-----+------------------------------------+
Legend: X: don't care, 0: clear, 1: set
In other words the condition codes match those for using unsigned comparisons. The same goes if you're using FMOVcc.
If either (or both) operand to fcomi is NaN, it sets ZF=1 PF=1 CF=1. (FP compares have 4 possible results: >, <, ==, or unordered). If you care what your code does with NaNs, you may need an extra jp or jnp. But not always: for example, ja is only true if CF=0 and ZF=0, so it will be not-taken in the unordered case. If you want the unordered case to take the same execution path as below or equal, then ja is all you need.
Here you should use JA if you want it to print (ie. if (!(f2 > f1)) { puts("hello"); }) and JBE if you don't (corresponds to if (!(f2 <= f1)) { puts("hello"); }). (Note this might be a little confusing due to the fact that we only print if we don't jump).
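(As a hedged C-level sanity check, not part of the original answer: with gcc or clang on x86-64, a plain floating-point comparison typically compiles to comisd/comiss followed by an "above"-style condition such as seta, matching the unsigned-style mapping described above.)
#include <stdio.h>

/* typically compiles to: comisd %xmm1, %xmm0 ; seta %al  (not setg) */
int fp_greater(double a, double b) {
    return a > b;
}

int main(void) {
    printf("%d %d\n", fp_greater(10.0, 9.0), fp_greater(9.0, 10.0));  /* prints: 1 0 */
    return 0;
}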
Regarding your second question: by default fcomi doesn't pop anything. You want its close cousin fcomip, which pops %st(0). You should always clear the FPU register stack after use, so all in all your program ends up like this, assuming you want the message printed:
.section .rodata
msg: .ascii "Hallo\n\0"
f1: .float 10.0
f2: .float 9.0
.globl main
.type main, #function
main:
flds f1
flds f2
fcomip
fstp %st(0) # to clear stack
ja leb # won't jump, jbe will
pushl $msg
call printf
addl $4, %esp
leb:
pushl $0
call exit

Related

Understanding function prologue with multiple function calls [duplicate]

This question already has answers here:
Why is there no "sub rsp" instruction in this function prologue and why are function parameters stored at negative rbp offsets?
Why does the x86-64 GCC function prologue allocate less stack than the local variables?
Let's take the following example I have from a single function:
first_function:
pushq %rbp
movq %rsp, %rbp
movq $2, -8(%rbp)
movq $4, -16(%rbp)
...
pop %rbp
ret
If we look at the stack before the ..., it gives us:
>>> x/4g $rbp-16
0x7fffffffe410: 0x0000000000000004 0x0000000000000002
0x7fffffffe420: 0x0000000000000000 0x00000000004000bd
Or for me, an easier way to visualize it is:
+----------------+--------------------+---------------------------+
| 0x7fffffffe420 | 0x00000000004000bd | # function return address |
+----------------+--------------------+---------------------------+
| 0x7fffffffe418 | 0x0000000000000000 | # from push %rbp |
+----------------+--------------------+---------------------------+
| 0x7fffffffe410 | 0x0000000000000002 | # from mov $2, -8(%rbp) |
+----------------+--------------------+---------------------------+
| 0x7fffffffe408 | 0x0000000000000004 | # from mov $4, -16(%rbp) |
+----------------+--------------------+---------------------------+
My question then is: wouldn't a sub-function call (for example, if I called another function in the ... section) possibly clobber the two variables I've stored above (2 and 4)?

x86 using loops in order to minimize code (lining) [duplicate]

I'm having trouble understanding registers in x86 assembly. I know that EAX is the full 32 bits, AX is the lower 16 bits, and AH and AL are the higher and lower 8 bits of AX, but I'm stuck on a question.
If AL = 10 and AH = 10, what is the value in AX?
My thinking on this is to convert 10 into binary (1010), take that as the higher and lower bits of AX (0000 1010 0000 1010), and then convert this to decimal (2570). Am I anywhere close to the right answer here, or way off?
As suggested by Peter Cordes, I would imagine the data as hexadecimal values:
RR RR RR RR EE EE HH LL
| | || ||
| | || AL
| | AH |
| | |___|
| | AX |
| |_________|
| EAX |
|_____________________|
RAX
...where RAX is the 64-bit register which exists in x86-64.
So if you had AH = 0x12 and AL = 0x34, like this:
00 00 00 00 00 00 12 34
| | || ||
| | || AL
| | AH |
| | |___|
| | AX |
| |_________|
| EAX |
|_____________________|
RAX
...then you had AX = 0x1234 and EAX = 0x00001234 etc.
Note that, as shown in this chart, AH is the only "weird" register here which is not aligned with the lower bits. The others (AL, AX, EAX, RAX for 64-bit) are just different sizes but all aligned on the right. (For example, the two bytes marked EE EE in the chart don't have a register name on their own.)
Writing AL, AH, or AX merges into the full RAX, leaving the other bytes unmodified, for historical reasons. (Prefer a movzx eax, byte [mem] or movzx eax, word [mem] load if you don't specifically want this merging: Why doesn't GCC use partial registers?)
Writing EAX zero-extends into RAX. (Why do x86-64 instructions on 32-bit registers zero the upper part of the full 64-bit register?)
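To connect this back to the question: AH supplies bits 15:8 and AL supplies bits 7:0 of AX, so AX = AH*256 + AL. A tiny C check (my example, not from the original answer):
#include <stdio.h>

int main(void) {
    unsigned char ah = 10, al = 10;                  /* the values from the question */
    unsigned short ax = (unsigned short)(ah << 8 | al);
    printf("0x%04X = %u\n", (unsigned)ax, (unsigned)ax);  /* 0x0A0A = 2570, as the asker computed */

    ah = 0x12; al = 0x34;
    ax = (unsigned short)(ah << 8 | al);
    printf("0x%04X = %u\n", (unsigned)ax, (unsigned)ax);  /* 0x1234 = 4660 */
    return 0;
}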

Efficiently dividing unsigned value by a power of two, rounding up

I want to implement unsigned integer division by an arbitrary power of two, rounding up, efficiently. So what I want, mathematically, is ceiling(p/q)0. In C, the strawman implementation, which doesn't take advantage of the restricted domain of q, could be something like the following function1:
/** q must be a power of 2, although this version works for any q */
uint64_t divide(uint64_t p, uint64_t q) {
    uint64_t res = p / q;
    return p % q == 0 ? res : res + 1;
}
... of course, I don't actually want to use division or mod at the machine level, since that takes many cycles even on modern hardware. I'm looking for a strength reduction that uses shifts and/or some other cheap operation(s) - taking advantage of the fact that q is a power of 2.
You can assume we have an efficient lg(unsigned int x) function, which returns the base-2 log of x, if x is a power-of-two.
Undefined behavior is fine if q is zero.
Please note that the simple solution: (p+q-1) >> lg(q) doesn't work in general - try it with p == 2^64-100 and q == 2562 for example.
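To make the failure concrete, here is a small demonstration (my code, not part of the question) of the overflow with exactly those values:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t p = UINT64_MAX - 99;               /* 2^64 - 100 */
    uint64_t q = 256;                           /* lg(q) == 8 */
    uint64_t naive = (p + q - 1) >> 8;          /* p + 255 wraps around to 155, so this is 0 */
    uint64_t reference = p / q + (p % q != 0);  /* the strawman version: 2^56 */
    printf("naive     = %llu\n", (unsigned long long)naive);
    printf("reference = %llu\n", (unsigned long long)reference);
    return 0;
}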
Platform Details
I'm interested in solutions in C, that are likely to perform well across a variety of platforms, but for the sake of concreteness, awarding the bounty and because any definitive discussion of performance needs to include a target architecture, I'll be specific about how I'll test them:
Skylake CPU
gcc 5.4.0 with compile flags -O3 -march=haswell
Using gcc builtins (such as bitscan/leading zero builtins) is fine, and in particular I've implemented the lg() function I said was available as follows:
inline uint64_t lg(uint64_t x) {
    return 63U - (uint64_t)__builtin_clzl(x);
}
inline uint32_t lg32(uint32_t x) {
    return 31U - (uint32_t)__builtin_clz(x);
}
I verified that these compile down to a single bsr instruction, at least with -march=haswell, despite the apparent involvement of a subtraction. You are of course free to ignore these and use whatever other builtins you want in your solution.
Benchmark
I wrote a benchmark for the existing answers, and will share and update the results as changes are made.
Writing a good benchmark for a small, potentially inlined operation is quite tough. When code is inlined into a call site, a lot of the work of the function may disappear, especially when it's in a loop3.
You could simply avoid the whole inlining problem by ensuring your code isn't inlined: declare it in another compilation unit. I tried to do that with the bench binary, but really the results are fairly pointless. Nearly all implementations tied at 4 or 5 cycles per call, but even a dummy method that does nothing other than return 0 takes the same time. So you are mostly just measuring the call + ret overhead. Furthermore, you are almost never really going to use the functions like this - unless you messed up, they'll be available for inlining and that changes everything.
So the two benchmarks I'll focus the most on repeatedly call the method under test in a loop, allowing inlining, cross-function optimization, loop hoisting and even vectorization.
There are two overall benchmark types: latency and throughput. The key difference is that in the latency benchmark, each call to divide is dependent on the previous call, so in general calls cannot be easily overlapped4:
uint32_t bench_divide_latency(uint32_t p, uint32_t q) {
    uint32_t total = p;
    for (unsigned i=0; i < ITERS; i++) {
        total += divide_algo(total, q);
        q = rotl1(q);
    }
    return total;
}
Note that the running total depends on the output of each divide call, and that it is also an input to the divide call.
The throughput variant, on the other hand, doesn't feed the output of one divide into the subsequent one. This allows work from one call to be overlapped with a subsequent one (both by the compiler, but especially the CPU), and even allows vectorization:
uint32_t bench_divide_throughput(uint32_t p, uint32_t q) {
    uint32_t total = p;
    for (unsigned i=0; i < ITERS; i++) {
        total += fname(i, q);
        q = rotl1(q);
    }
    return total;
}
Note that here we feed in the loop counter as the dividend - this is variable, but it doesn't depend on the previous divide call.
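One small assumption to make these loops self-contained: rotl1() isn't shown in the excerpts above. A plausible definition (mine, not necessarily the one used in the actual benchmark) that varies q each iteration while keeping it a power of two:
#include <stdint.h>

static inline uint32_t rotl1(uint32_t x) {
    return (x << 1) | (x >> 31);   /* rotate left by one bit */
}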
Furthermore, each benchmark has three flavors of behavior for the divisor, q:
Compile-time constant divisor. For example, a call to divide(p, 8). This is common in practice, and the code can be much simpler when the divisor is known at compile time.
Invariant divisor. Here the divisor is not known at compile time, but is constant for the whole benchmarking loop. This allows a subset of the optimizations that the compile-time constant case does.
Variable divisor. The divisor changes on each iteration of the loop. The benchmark functions above show this variant, using a "rotate left 1" instruction to vary the divisor.
Combining everything you get a total of 6 distinct benchmarks.
Results
Overall
For the purposes of picking an overall best algorithm, I looked at each of the 12 subtests for the proposed algorithms: (latency, throughput) x (constant q, invariant q, variable q) x (32-bit, 64-bit) and assigned a score of 2, 1, or 0 per subtest as follows:
The best algorithm(s) (within 5% tolerance) receive a score of 2.
The "close enough" algorithms (no more than 50% slower than the best) receive a score of 1.
The remaining algorithms score zero.
Hence, the maximum total score is 24, but no algorithm achieved that. Here are the overall total results:
╔═══════════════════════╦═══════╗
║ Algorithm ║ Score ║
╠═══════════════════════╬═══════╣
║ divide_user23_variant ║ 20 ║
║ divide_chux ║ 20 ║
║ divide_user23 ║ 15 ║
║ divide_peter ║ 14 ║
║ divide_chrisdodd ║ 12 ║
║ stoke32 ║ 11 ║
║ divide_chris ║ 0 ║
║ divide_weather ║ 0 ║
╚═══════════════════════╩═══════╝
So for the purposes of this specific test code, with this specific compiler and on this platform, user2357112's "variant" (with ... + (p & mask) != 0) performs best, tied with chux's suggestion (which is in fact identical code).
Here are all the sub-scores which sum to the above:
╔══════════════════════════╦═══════╦════╦════╦════╦════╦════╦════╗
║ ║ Total ║ LC ║ LI ║ LV ║ TC ║ TI ║ TV ║
╠══════════════════════════╬═══════╬════╬════╬════╬════╬════╬════╣
║ divide_peter ║ 6 ║ 1 ║ 1 ║ 1 ║ 1 ║ 1 ║ 1 ║
║ stoke32 ║ 6 ║ 1 ║ 1 ║ 2 ║ 0 ║ 0 ║ 2 ║
║ divide_chux ║ 10 ║ 2 ║ 2 ║ 2 ║ 1 ║ 2 ║ 1 ║
║ divide_user23 ║ 8 ║ 1 ║ 1 ║ 2 ║ 2 ║ 1 ║ 1 ║
║ divide_user23_variant ║ 10 ║ 2 ║ 2 ║ 2 ║ 1 ║ 2 ║ 1 ║
║ divide_chrisdodd ║ 6 ║ 1 ║ 1 ║ 2 ║ 0 ║ 0 ║ 2 ║
║ divide_chris ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║
║ divide_weather ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║
║ ║ ║ ║ ║ ║ ║ ║ ║
║ 64-bit Algorithm ║ ║ ║ ║ ║ ║ ║ ║
║ divide_peter_64 ║ 8 ║ 1 ║ 1 ║ 1 ║ 2 ║ 2 ║ 1 ║
║ div_stoke_64 ║ 5 ║ 1 ║ 1 ║ 2 ║ 0 ║ 0 ║ 1 ║
║ divide_chux_64 ║ 10 ║ 2 ║ 2 ║ 2 ║ 1 ║ 2 ║ 1 ║
║ divide_user23_64 ║ 7 ║ 1 ║ 1 ║ 2 ║ 1 ║ 1 ║ 1 ║
║ divide_user23_variant_64 ║ 10 ║ 2 ║ 2 ║ 2 ║ 1 ║ 2 ║ 1 ║
║ divide_chrisdodd_64 ║ 6 ║ 1 ║ 1 ║ 2 ║ 0 ║ 0 ║ 2 ║
║ divide_chris_64 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║
║ divide_weather_64 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║ 0 ║
╚══════════════════════════╩═══════╩════╩════╩════╩════╩════╩════╝
Here, each test is named like XY, with X in {Latency, Throughput} and Y in {Constant Q, Invariant Q, Variable Q}. So for example, LC is "Latency test with constant q".
Analysis
At the highest level, the solutions can be roughly divided into two categories: fast (the top 6 finishers) and slow (the bottom two). The difference is stark: all of the fast algorithms were the fastest on at least two subtests, and in general when they didn't finish first they fell into the "close enough" category (the only exceptions being failed vectorizations in the case of stoke and chrisdodd). The slow algorithms, however, scored 0 (not even close) on every test. So you can mostly eliminate the slow algorithms from further consideration.
Auto-vectorization
Among the fast algorithms, a large differentiator was the ability to auto-vectorize.
None of the algorithms were able to auto-vectorize in the latency tests, which makes sense since the latency tests are designed to feed their result directly into the next iteration. So you can really only calculate results in a serial fashion.
For the throughput tests, however, many algorithms were able to auto-vectorize for the constant Q and invariant Q cases. In both of these tests the divisor q is loop-invariant (and in the former case it is a compile-time constant). The dividend is the loop counter, so it is variable, but predictable (and in particular a vector of dividends can be trivially calculated by adding 8 to the previous input vector: [0, 1, 2, ..., 7] + [8, 8, ..., 8] == [8, 9, 10, ..., 15]).
In this scenario, gcc was able to vectorize peter, stoke, chux, user23 and user23_variant. It wasn't able to vectorize chrisdodd for some reason, likely because it included a branch (but conditionals don't strictly prevent vectorization since many other solutions have conditional elements but still vectorized). The impact was huge: algorithms that vectorized showed about an 8x improvement in throughput over variants that didn't but were otherwise fast.
Vectorization isn't free, though! Here are the function sizes for the "constant" variant of each function, with the Vec? column showing whether a function vectorized or not:
Size Vec? Name
045 N bench_c_div_stoke_64
049 N bench_c_divide_chrisdodd_64
059 N bench_c_stoke32_64
212 Y bench_c_divide_chux_64
227 Y bench_c_divide_peter_64
220 Y bench_c_divide_user23_64
212 Y bench_c_divide_user23_variant_64
The trend is clear - vectorized functions take about 4x the size of the non-vectorized ones. This is both because the core loops themselves are larger (vector instructions tend to be larger and there are more of them), and because loop setup and especially the post-loop code is much larger: for example, the vectorized version requires a reduction to sum all the partial sums in a vector. The loop count is fixed and a multiple of 8, so no tail code is generated - but if it were variable the generated code would be even larger.
Furthermore, despite the large improvement in runtime, gcc's vectorization is actually poor. Here's an excerpt from the vectorized version of Peter's routine:
on entry: ymm4 == all zeros
on entry: ymm5 == 0x00000001 0x00000001 0x00000001 ...
4007a4: c5 ed 76 c4 vpcmpeqd ymm0,ymm2,ymm4
4007ad: c5 fd df c5 vpandn ymm0,ymm0,ymm5
4007b1: c5 dd fa c0 vpsubd ymm0,ymm4,ymm0
4007b5: c5 f5 db c0 vpand ymm0,ymm1,ymm0
This chunk works independently on 8 DWORD elements originating in ymm2. If we take x to be a single DWORD element of ymm2, and y1 the incoming value of ymm1, these four instructions correspond to:
x == 0 x != 0
x = x ? 0 : -1; // -1 0
x = x & 1; // 1 0
x = 0 - x; // -1 0
x = y1 & x; // y1 0
So the first three instructions could simply be replaced by the first one, as the states are identical in either case. So that's two cycles added to that dependency chain (which isn't loop carried) and two extra uops. Evidently gcc's optimization phases somehow interact poorly with the vectorization code here, since such trivial optimizations are rarely missed in scalar code. Examining the other vectorized versions similarly shows a lot of performance dropped on the floor.
Branches vs Branch-free
Nearly all of the solutions compiled to branch-free code, even if the C code had conditionals or explicit branches. The conditional portions were small enough that the compiler generally decided to use a conditional move or some variant. One exception is chrisdodd, which compiled with a branch (checking if p == 0) in all the throughput tests, but none of the latency ones. Here's a typical example from the constant q throughput test:
0000000000400e60 <bench_c_divide_chrisdodd_32>:
400e60: 89 f8 mov eax,edi
400e62: ba 01 00 00 00 mov edx,0x1
400e67: eb 0a jmp 400e73 <bench_c_divide_chrisdodd_32+0x13>
400e69: 0f 1f 80 00 00 00 00 nop DWORD PTR [rax+0x0]
400e70: 83 c2 01 add edx,0x1
400e73: 83 fa 01 cmp edx,0x1
400e76: 74 f8 je 400e70 <bench_c_divide_chrisdodd_32+0x10>
400e78: 8d 4a fe lea ecx,[rdx-0x2]
400e7b: c1 e9 03 shr ecx,0x3
400e7e: 8d 44 08 01 lea eax,[rax+rcx*1+0x1]
400e82: 81 fa 00 ca 9a 3b cmp edx,0x3b9aca00
400e88: 75 e6 jne 400e70 <bench_c_divide_chrisdodd_32+0x10>
400e8a: c3 ret
400e8b: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
The branch at 400e76 skips the case that p == 0. In fact, the compiler could have just peeled the first iteration out (calculating its result explicitly) and then avoided the jump entirely since after that it can prove that p != 0. In these tests, the branch is perfectly predictable, which could give an advantage to code that actually compiles using a branch (since the compare & branch code is essentially out of line and close to free), and is a big part of why chrisdodd wins the throughput, variable q case.
Detailed Test Results
Here you can find some detailed test results and some details on the tests themselves.
Latency
The results below test each algorithm over 1e9 iterations. Cycles are calculated simply by multiplying the time/call by the clock frequency. You can generally assume that something like 4.01 is the same as 4.00, but the larger deviations like 5.11 seem to be real and reproducible.
The results for divide_plusq_32 use (p + q - 1) >> lg(q) but are only shown for reference, since this function fails for large p + q. The results for dummy are for a very simple function, return p + q, and let you estimate the benchmark overhead5 (the addition itself should take a cycle at most).
==============================
Bench: Compile-time constant Q
==============================
Function ns/call cycles
divide_peter_32 2.19 5.67
divide_peter_64 2.18 5.64
stoke32_32 1.93 5.00
stoke32_64 1.97 5.09
stoke_mul_32 2.75 7.13
stoke_mul_64 2.34 6.06
div_stoke_32 1.94 5.03
div_stoke_64 1.94 5.03
divide_chux_32 1.55 4.01
divide_chux_64 1.55 4.01
divide_user23_32 1.97 5.11
divide_user23_64 1.93 5.00
divide_user23_variant_32 1.55 4.01
divide_user23_variant_64 1.55 4.01
divide_chrisdodd_32 1.95 5.04
divide_chrisdodd_64 1.93 5.00
divide_chris_32 4.63 11.99
divide_chris_64 4.52 11.72
divide_weather_32 2.72 7.04
divide_weather_64 2.78 7.20
divide_plusq_32 1.16 3.00
divide_plusq_64 1.16 3.00
divide_dummy_32 1.16 3.00
divide_dummy_64 1.16 3.00
==============================
Bench: Invariant Q
==============================
Function ns/call cycles
divide_peter_32 2.19 5.67
divide_peter_64 2.18 5.65
stoke32_32 1.93 5.00
stoke32_64 1.93 5.00
stoke_mul_32 2.73 7.08
stoke_mul_64 2.34 6.06
div_stoke_32 1.93 5.00
div_stoke_64 1.93 5.00
divide_chux_32 1.55 4.02
divide_chux_64 1.55 4.02
divide_user23_32 1.95 5.05
divide_user23_64 2.00 5.17
divide_user23_variant_32 1.55 4.02
divide_user23_variant_64 1.55 4.02
divide_chrisdodd_32 1.95 5.04
divide_chrisdodd_64 1.93 4.99
divide_chris_32 4.60 11.91
divide_chris_64 4.58 11.85
divide_weather_32 12.54 32.49
divide_weather_64 17.51 45.35
divide_plusq_32 1.16 3.00
divide_plusq_64 1.16 3.00
divide_dummy_32 0.39 1.00
divide_dummy_64 0.39 1.00
==============================
Bench: Variable Q
==============================
Function ns/call cycles
divide_peter_32 2.31 5.98
divide_peter_64 2.26 5.86
stoke32_32 2.06 5.33
stoke32_64 1.99 5.16
stoke_mul_32 2.73 7.06
stoke_mul_64 2.32 6.00
div_stoke_32 2.00 5.19
div_stoke_64 2.00 5.19
divide_chux_32 2.04 5.28
divide_chux_64 2.05 5.30
divide_user23_32 2.05 5.30
divide_user23_64 2.06 5.33
divide_user23_variant_32 2.04 5.29
divide_user23_variant_64 2.05 5.30
divide_chrisdodd_32 2.04 5.30
divide_chrisdodd_64 2.05 5.31
divide_chris_32 4.65 12.04
divide_chris_64 4.64 12.01
divide_weather_32 12.46 32.28
divide_weather_64 19.46 50.40
divide_plusq_32 1.93 5.00
divide_plusq_64 1.99 5.16
divide_dummy_32 0.40 1.05
divide_dummy_64 0.40 1.04
Throughput
Here are the results for the throughput tests. Note that many of the algorithms here were auto-vectorized, so the performance is very good for those: a fraction of a cycle per call in many cases. One result is that, unlike most latency results, the 64-bit functions are considerably slower, since vectorization is more effective with smaller element sizes (although the gap is larger than I would have expected).
==============================
Bench: Compile-time constant Q
==============================
Function ns/call cycles
stoke32_32 0.39 1.00
divide_chux_32 0.15 0.39
divide_chux_64 0.53 1.37
divide_user23_32 0.14 0.36
divide_user23_64 0.53 1.37
divide_user23_variant_32 0.15 0.39
divide_user23_variant_64 0.53 1.37
divide_chrisdodd_32 1.16 3.00
divide_chrisdodd_64 1.16 3.00
divide_chris_32 4.34 11.23
divide_chris_64 4.34 11.24
divide_weather_32 1.35 3.50
divide_weather_64 1.35 3.50
divide_plusq_32 0.10 0.26
divide_plusq_64 0.39 1.00
divide_dummy_32 0.08 0.20
divide_dummy_64 0.39 1.00
==============================
Bench: Invariant Q
==============================
Function ns/call cycles
stoke32_32 0.48 1.25
divide_chux_32 0.15 0.39
divide_chux_64 0.48 1.25
divide_user23_32 0.17 0.43
divide_user23_64 0.58 1.50
divide_user23_variant_32 0.15 0.38
divide_user23_variant_64 0.48 1.25
divide_chrisdodd_32 1.16 3.00
divide_chrisdodd_64 1.16 3.00
divide_chris_32 4.35 11.26
divide_chris_64 4.36 11.28
divide_weather_32 5.79 14.99
divide_weather_64 17.00 44.02
divide_plusq_32 0.12 0.31
divide_plusq_64 0.48 1.25
divide_dummy_32 0.09 0.23
divide_dummy_64 0.09 0.23
==============================
Bench: Variable Q
==============================
Function ns/call cycles
stoke32_32 1.16 3.00
divide_chux_32 1.36 3.51
divide_chux_64 1.35 3.50
divide_user23_32 1.54 4.00
divide_user23_64 1.54 4.00
divide_user23_variant_32 1.36 3.51
divide_user23_variant_64 1.55 4.01
divide_chrisdodd_32 1.16 3.00
divide_chrisdodd_64 1.16 3.00
divide_chris_32 4.02 10.41
divide_chris_64 3.84 9.95
divide_weather_32 5.40 13.98
divide_weather_64 19.04 49.30
divide_plusq_32 1.03 2.66
divide_plusq_64 1.03 2.68
divide_dummy_32 0.63 1.63
divide_dummy_64 0.66 1.71
a At least by specifying unsigned we avoid the whole can of worms related to the right-shift behavior of signed integers in C and C++.
0 Of course, this notation doesn't actually work in C where / truncates the result so the ceiling does nothing. So consider that pseudo-notation rather than straight C.
1 I'm also interested in solutions where all types are uint32_t rather than uint64_t.
2 In general, any p and q where p + q >= 2^64 causes an issue, due to overflow.
3 That said, the function should be in a loop, because the performance of a microscopic function that takes half a dozen cycles only really matters if it is called in a fairly tight loop.
4 This is a bit of a simplification - only the dividend p is dependent on the output of the previous iteration, so some work related to processing of q can still be overlapped.
5 Use such estimates with caution however - overhead isn't simply additive. If the overhead shows up as 4 cycles and some function f takes 5, it's likely not accurate to say the cost of the real work in f is 5 - 4 == 1, because of the way execution is overlapped.
This answer is about what's ideal in asm; what we'd like to convince the compiler to emit for us. (I'm not suggesting actually using inline asm, except as a point of comparison when benchmarking compiler output. https://gcc.gnu.org/wiki/DontUseInlineAsm).
I did manage to get pretty good asm output from pure C for ceil_div_andmask, see below. (It's worse than a CMOV on Broadwell/Skylake, but probably good on Haswell. Still, the user23/chux version looks even better for both cases.) It's mostly just worth mentioning as one of the few cases where I got the compiler to emit the asm I wanted.
It looks like Chris Dodd's general idea of return ((p-1) >> lg(q)) + 1 with special-case handling for d=0 is one of the best options. I.e. the optimal implementation of it in asm is hard to beat with an optimal implementation of anything else. Chux's (p >> lg(q)) + (bool)(p & (q-1)) also has advantages (like lower latency from p->result), and more CSE when the same q is used for multiple divisions. See below for a latency/throughput analysis of what gcc does with it.
If the same e = lg(q) is reused for multiple dividends, or the same dividend is reused for multiple divisors, different implementations can CSE more of the expression. They can also effectively vectorize with AVX2.
Branches are cheap and very efficient if they predict very well, so branching on d==0 will be best if it's almost never taken. If d==0 is not rare, branchless asm will perform better on average. Ideally we can write something in C that will let gcc make the right choice during profile-guided optimization, and compiles to good asm for either case.
Since the best branchless asm implementations don't add much latency vs. a branchy implementation, branchless is probably the way to go unless the branch would go the same way maybe 99% of the time. This might be likely for branching on p==0, but probably less likely for branching on p & (q-1).
It's hard to guide gcc5.4 into emitting anything optimal. This is my work-in-progress on Godbolt.
I think the optimal sequences for Skylake for this algorithm are as follows. (Shown as stand-alone functions for the AMD64 SysV ABI, but talking about throughput/latency on the assumption that the compiler will emit something similar inlined into a loop, with no RET attached).
Branch on carry from calculating d-1 to detect d==0, instead of a separate test & branch. Reduces the uop count nicely, esp on SnB-family where JC can macro-fuse with SUB.
ceil_div_pjc_branch:
xor eax,eax ; can take this uop off the fast path by adding a separate xor-and-return block, but in reality we want to inline something like this.
sub rdi, 1
jc .d_was_zero ; fuses with the sub on SnB-family
tzcnt rax, rsi ; tzcnt rsi,rsi also avoids any false-dep problems, but this illustrates that the q input can be read-only.
shrx rax, rdi, rax
inc rax
.d_was_zero:
ret
Fused-domain uops: 5 (not counting ret), and one of them is an xor-zero (no execution unit)
HSW/SKL latency with successful branch prediction:
(d==0): No data dependency on d or q, breaks the dep chain. (control dependency on d to detect mispredicts and retire the branch).
(d!=0): q->result: tzcnt+shrx+inc = 5c
(d!=0): d->result: sub+shrx+inc = 3c
Throughput: probably just bottlenecked on uop throughput
I've tried but failed to get gcc to branch on CF from the subtract, but it always wants to do a separate comparison. I know gcc can be coaxed into branching on CF after subtracting two variables, but maybe this fails if one is a compile-time constant. (IIRC, this typically compiles to a CF test with unsigned vars: foo -= bar; if(foo>bar) carry_detected = 1;)
Branchless with ADC / SBB to handle the d==0 case. Zero-handling adds only one instruction to the critical path (vs. a version with no special handling for d==0), but also converts one other from an INC to a sbb rax, -1 to make CF undo the -= -1. Using a CMOV is cheaper on pre-Broadwell, but takes extra instructions to set it up.
ceil_div_pjc_asm_adc:
tzcnt rsi, rsi
sub rdi, 1
adc rdi, 0 ; d? d-1 : d. Sets CF=CF
shrx rax, rdi, rsi
sbb rax, -1 ; result++ if d was non-zero
ret
Fused-domain uops: 5 (not counting ret) on SKL. 7 on HSW
SKL latency:
q->result: tzcnt+shrx+sbb = 5c
d->result: sub+adc+shrx(dep on q begins here)+sbb = 4c
Throughput: TZCNT runs on p1. SBB, ADC, and SHRX only run on p06. So I think we bottleneck on 3 uops for p06 per iteration, making this run at best one iteration per 1.5c.
If q and d become ready at the same time, note that this version can run SUB/ADC in parallel with the 3c latency of TZCNT. If both are coming from the same cache-miss cache line, it's certainly possible. In any case, introducing the dep on q as late as possible in the d->result dependency chain is an advantage.
Getting this from C seems unlikely with gcc5.4. There is an intrinsic for add-with-carry, but gcc makes a total mess of it. It doesn't use immediate operands for ADC or SBB, and stores the carry into an integer reg between every operation. gcc7, clang3.9, and icc17 all make terrible code from this.
#include <x86intrin.h>
// compiles to completely horrible code, putting the flags into integer regs between ops.
T ceil_div_adc(T d, T q) {
    T e = lg(q);
    unsigned long long dm1;  // unsigned __int64
    unsigned char CF = _addcarry_u64(0, d, -1, &dm1);
    CF = _addcarry_u64(CF, 0, dm1, &dm1);
    T shifted = dm1 >> e;
    _subborrow_u64(CF, shifted, -1, &dm1);
    return dm1;
}
# gcc5.4 -O3 -march=haswell
mov rax, -1
tzcnt rsi, rsi
add rdi, rax
setc cl
xor edx, edx
add cl, -1
adc rdi, rdx
setc dl
shrx rdi, rdi, rsi
add dl, -1
sbb rax, rdi
ret
CMOV to fix the whole result: worse latency from q->result, since it's used sooner in the d->result dep chain.
ceil_div_pjc_asm_cmov:
tzcnt rsi, rsi
sub rdi, 1
shrx rax, rdi, rsi
lea rax, [rax+1] ; inc preserving flags
cmovc rax, zeroed_register
ret
Fused-domain uops: 5 (not counting ret) on SKL. 6 on HSW
SKL latency:
q->result: tzcnt+shrx+lea+cmov = 6c (worse than ADC/SBB by 1c)
d->result: sub+shrx(dep on q begins here)+lea+cmov = 4c
Throughput: TZCNT runs on p1. LEA is p15. CMOV and SHRX are p06. SUB is p0156. In theory only bottlenecked on fused-domain uop throughput, so one iteration per 1.25c. With lots of independent operations, resource conflicts from SUB or LEA stealing p1 or p06 shouldn't be a throughput problem because at 1 iter per 1.25c, no port is saturated with uops that can only run on that port.
CMOV to get an operand for SUB: I was hoping I could find a way to create an operand for a later instruction that would produce a zero when needed, without an input dependency on q, e, or the SHRX result. This would help if d is ready before q, or at the same time.
This doesn't achieve that goal, and needs an extra 7-byte mov rdx,-1 in the loop.
ceil_div_pjc_asm_cmov:
tzcnt rsi, rsi
mov rdx, -1
sub rdi, 1
shrx rax, rdi, rsi
cmovnc rdx, rax
sub rax, rdx ; res += d ? 1 : -res
ret
Lower-latency version for pre-BDW CPUs with expensive CMOV, using SETCC to create a mask for AND.
ceil_div_pjc_asm_setcc:
xor edx, edx ; needed every iteration
tzcnt rsi, rsi
sub rdi, 1
setc dl ; d!=0 ? 0 : 1
dec rdx ; d!=0 ? -1 : 0 // AND-mask
shrx rax, rdi, rsi
inc rax
and rax, rdx ; zero the bogus result if d was initially 0
ret
Still 4c latency from d->result (and 6 from q->result), because the SETC/DEC happen in parallel with the SHRX/INC. Total uop count: 8. Most of these insns can run on any port, so it should be 1 iter per 2 clocks.
Of course, for pre-HSW, you also need to replace SHRX.
We can get gcc5.4 to emit something nearly as good: (still uses a separate TEST instead of setting mask based on sub rdi, 1, but otherwise the same instructions as above). See it on Godbolt.
T ceil_div_andmask(T p, T q) {
    T mask = -(T)(p!=0);    // TEST+SETCC+NEG
    T e = lg(q);
    T nonzero_result = ((p-1) >> e) + 1;
    return nonzero_result & mask;
}
When the compiler knows that p is non-zero, it takes advantage and makes nice code:
// http://stackoverflow.com/questions/40447195/can-i-hint-the-optimizer-by-giving-the-range-of-an-integer
#if defined(__GNUC__) && !defined(__INTEL_COMPILER)
#define assume(x) do{if(!(x)) __builtin_unreachable();}while(0)
#else
#define assume(x) (void)(x) // still evaluate it once, for side effects in case anyone is insane enough to put any inside an assume()
#endif
T ceil_div_andmask_nonzerop(T p, T q) {
    assume(p!=0);
    return ceil_div_andmask(p, q);
}
# gcc5.4 -O3 -march=haswell
xor eax, eax # gcc7 does tzcnt in-place instead of wasting an insn on this
sub rdi, 1
tzcnt rax, rsi
shrx rax, rdi, rax
add rax, 1
ret
Chux / user23_variant
only 3c latency from p->result, and constant q can CSE a lot.
T divide_A_chux(T p, T q) {
    bool round_up = p & (q-1);  // compiles differently from user23_variant with clang: AND instead of
    return (p >> lg(q)) + round_up;
}
xor eax, eax # in-place tzcnt would save this
xor edx, edx # target for setcc
tzcnt rax, rsi
sub rsi, 1
test rsi, rdi
shrx rdi, rdi, rax
setne dl
lea rax, [rdx+rdi]
ret
Doing the SETCC before TZCNT would allow an in-place TZCNT, saving the xor eax,eax. I haven't looked at how this inlines in a loop.
Fused-domain uops: 8 (not counting ret) on HSW/SKL
HSW/SKL latency:
q->result: (tzcnt+shrx(p) | sub+test(p)+setne) + lea(or add) = 5c
d->result: test(dep on q begins here)+setne+lea = 3c. (the shrx->lea chain is shorter, and thus not the critical path)
Throughput: Probably just bottlenecked on the frontend, at one iter per 2c. Saving the xor eax,eax should speed this up to one per 1.75c (but of course any loop overhead will be part of the bottleneck, because frontend bottlenecks are like that).
uint64_t exponent = lg(q);
uint64_t mask = q - 1;
//        v divide
return (p >> exponent) + (((p & mask) + mask) >> exponent);
//                       ^ round up
The separate computation of the "round up" part avoids the overflow issues of (p+q-1) >> lg(q). Depending on how smart your compiler is, it might be possible to express the "round up" part as ((p & mask) != 0) without branching.
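For reference, here is that "variant" form written out as a full function (my sketch; I've inlined a clz-based lg() so it compiles standalone, it isn't part of the original answer):
#include <stdint.h>

static inline uint64_t lg(uint64_t x) {
    return 63U - (uint64_t)__builtin_clzll(x);   /* base-2 log for a power of two */
}

uint64_t divide_user23_variant(uint64_t p, uint64_t q) {
    uint64_t exponent = lg(q);
    uint64_t mask = q - 1;
    return (p >> exponent) + ((p & mask) != 0);  /* round up when any low bits are set */
}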
The efficient way of dividing by a power of 2 for an unsigned integer in C is a right shift -- shifting right by one divides by two (rounding down), so shifting right by n divides by 2^n (rounding down).
Now you want to round up rather than down, which you can do by first adding 2^n - 1, or equivalently subtracting one before the shift and adding one after (except for 0). This works out to something like:
unsigned ceil_div(unsigned d, unsigned e) {
    /* compute ceil(d/2**e) */
    return d ? ((d-1) >> e) + 1 : 0;
}
The conditional can be removed by using the boolean value of d for addition and subtraction of one:
unsigned ceil_div(unsigned d, unsigned e) {
    /* compute ceil(d/2**e) */
    return ((d - !!d) >> e) + !!d;
}
Due to its size, and the speed requirement, the function should be made static inline. It probably won't make a difference for the optimizer, but the parameters should be const. If it must be shared among many files, define it in a header:
static inline unsigned ceil_div(const unsigned d, const unsigned e){...
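A quick check of the branchless version (my test program, not part of the answer):
#include <stdio.h>

static inline unsigned ceil_div(const unsigned d, const unsigned e) {
    return ((d - !!d) >> e) + !!d;
}

int main(void) {
    /* dividing by 2**3 == 8: expect 0, 1, 2 */
    printf("%u %u %u\n", ceil_div(0, 3), ceil_div(8, 3), ceil_div(9, 3));
    return 0;
}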
[Re-write] given OP's clarification concerning power-of-2.
The round-up or ceiling part is easy when overflow is not a concern. Simply add q-1, then shift.
Otherwise as the possibility of rounding depends on all the bits of p smaller than q, detection of those bits is needed first before they are shifted out.
uint64_t divide_A(uint64_t p, uint64_t q) {
    bool round_up = p & (q-1);
    return (p >> lg64(q)) + round_up;
}
This assumes code has an efficient lg64(uint64_t x) function, which returns the base-2 log of x, if x is a power-of-two.
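One way to provide that helper (my sketch, not part of the answer): since the trailing-zero count of a power of two equals its base-2 log, a single ctz builtin is enough.
#include <stdint.h>

static inline uint64_t lg64(uint64_t x) {
    return (uint64_t)__builtin_ctzll(x);   /* valid when x is a nonzero power of two */
}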
My old answer didn't work if p was one more than a power of two (whoops). So my new solution, using the __builtin_ctzll() and __builtin_ffsll() functions0 available in gcc (which as a bonus, provides the fast logarithm you mentioned!):
uint64_t divide(uint64_t p, uint64_t q) {
    int lp = __builtin_ffsll(p);
    int lq = __builtin_ctzll(q);
    return (p >> lq) + (lp < (lq + 1) && lp);
}
Note that this is assuming that a long long is 64 bits. It has to be tweaked a little otherwise.
The idea here is that we need to round up if and only if p has fewer trailing zeroes than q. Note that for a power of two, the number of trailing zeroes is equal to the logarithm, so we can use this builtin for the log as well.
The &&lp part is just for the corner case where p is zero: otherwise it will output 1 here.
0 Can't use __builtin_ctzll() for both because it is undefined if p==0.
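A quick sanity check of this version, including the corner cases it calls out (my test program, with the function repeated so it compiles standalone):
#include <stdint.h>
#include <stdio.h>

static uint64_t divide(uint64_t p, uint64_t q) {
    int lp = __builtin_ffsll(p);
    int lq = __builtin_ctzll(q);
    return (p >> lq) + (lp < (lq + 1) && lp);
}

int main(void) {
    printf("%llu\n", (unsigned long long)divide(9, 8));   /* 2: one more than a power of two */
    printf("%llu\n", (unsigned long long)divide(0, 8));   /* 0: zero dividend corner case */
    printf("%llu\n", (unsigned long long)divide(16, 8));  /* 2: exact division, no round-up */
    return 0;
}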
If the dividend/divisor can be guaranteed not to exceed 63 (or 31) bits, you can use the following version mentioned in the question. Note how p + q could overflow if they used all 64 bits. This would be fine if the SHR instruction shifted in the carry flag, but AFAIK it doesn't.
uint64_t divide(uint64_t p, uint64_t q) {
    return (p + q - 1) >> lg(q);
}
If those constraints cannot be guaranteed, you can just do the floor method and then add 1 if it would round up. This can be determined by checking if any bits in the dividend are within the range of the divisor.
Note: p & -p extracts the lowest set bit on 2's complement machines (or use the BLSI instruction).
uint64_t divide(uint64_t p, uint64_t q) {
    return (p >> lg(q)) + ((p & -p) < q);
}
Which clang compiles to:
bsrq %rsi, %rax
shrxq %rax, %rdi, %rax
blsiq %rdi, %rcx
cmpq %rsi, %rcx
adcq $0, %rax
retq
That's a bit wordy and uses some newer instructions, so maybe there is a way to use the carry flag in the original version. Let's see:
The RCR instruction does, but it seems like it would be worse ... perhaps the SHRD instruction ... It would be something like this (unable to test at the moment):
xor edx, edx ;edx = 0 (will store the carry flag)
bsr rcx, rsi ;rcx = lg(q) ... could be moved anywhere before shrd
lea rax, [rsi-1] ;rax = q-1 (adding p could carry)
add rax, rdi ;rax += p (handle carry)
setc dl ;rdx = carry flag ... or xor rdx and setc
shrd rax, rdx, cl ;rax = rdx:rax >> cl
ret
1 more instruction, but should be compatible with older processors (if it works ... I'm always getting a source/destination swapped - feel free to edit)
Addendum:
I've implemented the lg() function I said was available as follows:
inline uint64_t lg(uint64_t x) {
    return 63U - (uint64_t)__builtin_clzl(x);
}
inline uint32_t lg32(uint32_t x) {
    return 31U - (uint32_t)__builtin_clz(x);
}
The fast log functions don't fully optimize to bsr on clang and ICC but you can do this:
#if defined(__x86_64__) && (defined(__clang__) || defined(__INTEL_COMPILER))
static inline uint64_t lg(uint64_t x){
    uint64_t ret;
    //other compilers may want bsrq here
    __asm__("bsr %1, %0" : "=r"(ret) : "r"(x));
    return ret;
}
#endif
#if defined(__i386__) && (defined(__clang__) || defined(__INTEL_COMPILER))
static inline uint32_t lg32(uint32_t x){
    uint32_t ret;
    __asm__("bsr %1, %0" : "=r"(ret) : "r"(x));
    return ret;
}
#endif
There has already been a lot of human brainpower applied to this problem, with several variants of great answers in C along with Peter Cordes's answer which covers the best you could hope for in asm, with notes on trying to map it back to C.
So while the humans are having their kick at the can, I thought I'd see what some brute computing power has to say! To that end, I used Stanford's STOKE superoptimizer to try to find good solutions to the 32-bit and 64-bit versions of this problem.
Usually, a superoptimizer is something like a brute force search through all possible instruction sequences until you find the best one by some metric. Of course, with something like 1,000 instructions that will quickly spiral out of control for more than a few instructions1. STOKE, on the other hand, takes a guided randomized approach: it randomly makes mutations to an existing candidate program, evaluating at each step a cost function that takes both performance and correctness into account. That's the one-liner anyway - there are plenty of papers if that stoked your curiosity.
So within a few minutes STOKE found some pretty interesting solutions. It found almost all the high-level ideas in the existing solutions, plus a few unique ones. For example, for the 32-bit function, STOKE found this version:
neg rsi
dec rdi
pext rax, rsi, rdi
inc eax
ret
It doesn't use any leading/trailing-zero count or shift instructions at all. Pretty much, it uses neg rsi to turn the divisor into a mask with 1s in the high bits, and then pext effectively does the shift using that mask. Outside of that trick it's using the same trick that user QuestionC used: decrement p, shift, increment the result - but it happens to work even for a zero dividend because it uses 64-bit registers to distinguish the zero case from the MSB-set large p case.
I added the C version of this algorithm to the benchmark, and added it to the results. It's competitive with the other good algorithms, tying for first in the "Variable Q" cases. It does vectorize, but not as well as the other 32-bit algorithms, because it needs 64-bit math and so the vectors can process only half as many elements at once.
Even better, in the 32-bit case it came up with a variety of solutions which use the fact that some of the intuitive solutions that fail for edge cases happen to "just work" if you use 64-bit ops for part of it. For example:
tzcntl ebp, esi
dec esi
add rdi, rsi
sarx rax, rdi, rbp
ret
That's the equivalent of the return (p + q - 1) >> lg(q) suggestion I mentioned in the question. That doesn't work in general since for large p + q it overflows, but for 32-bit p and q this solution works great by doing the important parts in 64-bit. Convert that back into C with some casts and it actually figures out that using lea will do the addition in one instruction2:
stoke_32(unsigned int, unsigned int):
tzcnt edx, esi
mov edi, edi ; goes away when inlining
mov esi, esi ; goes away when inlining
lea rax, [rsi-1+rdi]
shrx rax, rax, rdx
ret
So it's a 3-instruction solution when inlined into something that already has the values zero-extended into rdi and rsi. The stand-alone function definition needs the mov instructions to zero-extend because that's how the SysV x64 ABI works.
For the 64-bit function it didn't come up with anything that blows away the existing solutions but it did come up with some neat stuff, like:
tzcnt r13, rsi
tzcnt rcx, rdi
shrx rax, rdi, r13
cmp r13b, cl
adc rax, 0
ret
That guy counts the trailing zeros of both arguments, and then adds 1 to the result if q has fewer trailing zeros than p, since that's when you need to round up. Clever!
In general, it understood the idea that you needed to shift right by the tzcnt really quickly (just like most humans) and then came up with a ton of other solutions to the problem of adjusting the result to account for rounding. It even managed to use blsi and bzhi in several solutions. Here's a 5-instruction solution it came up with:
tzcnt r13, rsi
shrx rax, rdi, r13
imul rsi, rax
cmp rsi, rdi
adc rax, 0
ret
It's basically a "multiply and verify" approach - take the truncated res = p / q, multiply it back, and if it's different from p add one: return res * q == p ? res : res + 1. Cool. Not really better than Peter's solutions though. STOKE seems to have some flaws in its latency calculation - it thinks the above has a latency of 5 - but it's more like 8 or 9 depending on the architecture. So it sometimes narrows in on solutions that are based on its flawed latency calculation.
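In C, that "multiply and verify" idea looks roughly like this (my sketch, using the lg() helper from the question; this is not code produced by STOKE):
#include <stdint.h>

static inline uint64_t lg(uint64_t x) {
    return 63U - (uint64_t)__builtin_clzll(x);
}

uint64_t divide_mul_verify(uint64_t p, uint64_t q) {
    uint64_t res = p >> lg(q);             /* truncated quotient */
    return res * q == p ? res : res + 1;   /* add one if the truncation lost something */
}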
1 Interestingly enough, though, this brute force approach reaches the limit of its feasibility around 5-6 instructions: if you assume you can trim the instruction count to say 300 by eliminating SIMD and x87 instructions, then you would need ~28 days to try all 300^5 5-instruction sequences at 1,000,000 candidates/second. You could perhaps reduce that by a factor of 1,000 with various optimizations, meaning less than an hour for 5-instruction sequences and maybe a week for 6-instruction ones. As it happens, most of the best solutions for this problem fall into that 5-6 instruction window...
2 This will be a slow lea, however, so the sequence found by STOKE was still optimal for what I optimized for, which was latency.
You can do it like this, by comparing the result of dividing n / d with the result of dividing (n - 1) / d.
#include <stdio.h>
int main(void) {
    unsigned n;
    unsigned d;
    unsigned q1, q2;
    double actual;
    for(n = 1; n < 6; n++) {
        for(d = 1; d < 6; d++) {
            actual = (double)n / d;
            q1 = n / d;
            if(n) {
                q2 = (n - 1) / d;
                if(q1 == q2) {
                    q1++;
                }
            }
            printf("%u / %u = %u (%f)\n", n, d, q1, actual);
        }
    }
    return 0;
}
Program output:
1 / 1 = 1 (1.000000)
1 / 2 = 1 (0.500000)
1 / 3 = 1 (0.333333)
1 / 4 = 1 (0.250000)
1 / 5 = 1 (0.200000)
2 / 1 = 2 (2.000000)
2 / 2 = 1 (1.000000)
2 / 3 = 1 (0.666667)
2 / 4 = 1 (0.500000)
2 / 5 = 1 (0.400000)
3 / 1 = 3 (3.000000)
3 / 2 = 2 (1.500000)
3 / 3 = 1 (1.000000)
3 / 4 = 1 (0.750000)
3 / 5 = 1 (0.600000)
4 / 1 = 4 (4.000000)
4 / 2 = 2 (2.000000)
4 / 3 = 2 (1.333333)
4 / 4 = 1 (1.000000)
4 / 5 = 1 (0.800000)
5 / 1 = 5 (5.000000)
5 / 2 = 3 (2.500000)
5 / 3 = 2 (1.666667)
5 / 4 = 2 (1.250000)
5 / 5 = 1 (1.000000)
Update
I posted an early answer to the original question, which works, but did not consider the efficiency of the algorithm, or that the divisor is always a power of 2. Performing two divisions was needlessly expensive.
I am using the MSVC 32-bit compiler on a 64-bit system, so there is no chance that I can provide the best solution for the required target. But it is an interesting question, so I have dabbled around and found that the best solution needs to discover the bit position of the 2**n divisor. Using the library function log2 worked but was very slow. Doing my own shift was much better, but still my best C solution is:
unsigned roundup(unsigned p, unsigned q)
{
    return p / q + ((p & (q-1)) != 0);
}
My inline 32-bit assembler solution is faster, but of course that will not answer the question. I steal some cycles by assuming that eax is returned as the function value.
unsigned roundup(unsigned p, unsigned q)
{
    __asm {
        mov eax, p
        mov edx, q
        bsr ecx, edx    ; cl = bit number of q
        dec edx         ; q-1
        and edx, eax    ; p & (q-1)
        shr eax, cl     ; divide p by q, a power of 2
        sub edx, 1      ; generate a carry when (p & (q-1)) == 0
        cmc
        adc eax, 0      ; add 1 to result when (p & (q-1)) != 0
    }
}                       ; eax returned as function value
This seems efficient and works for signed if your compiler is using arithmetic right shifts (usually true).
#include <stdio.h>
int main (void)
{
    for (int i = -20; i <= 20; ++i) {
        printf ("%5d %5d\n", i, ((i - 1) >> 1) + 1);
    }
    return 0;
}
Use >> 2 to divide by 4, >> 3 to divide by 8, etc. An efficient lg does the work there.
You can even divide by 1! >> 0

C increment/decrement operators [closed]

Determine the value of each variable after calculation is performed. All variables have value 5 before execution.
A1/=++B1/--C1
A2+=++B2%C2--
Please tell me how this works.
Variables:
int A1 = 5;
int B1 = 5;
int C1 = 5;
int A2 = 5;
int B2 = 5;
int C2 = 5;
Your code:
A1 /= ++B1 / --C1;
A2 += ++B2 % C2--;
will probably compile into something similar to:
++B1;
--C1;
A1 /= B1 / C1;
++B2;
A2 += B2 % C2;
C2--;
You can output the ASM using your compiler; with GCC it's the -S flag. Here is the ASM output with GCC on my computer (I added the comments):
movl $5, -20(%rbp) // A1 = 5
movl $5, -24(%rbp) // B1 = 5
movl $5, -28(%rbp) // C1 = 5
movl $5, -32(%rbp) // A2 = 5
movl $5, -36(%rbp) // B2 = 5
movl $5, -40(%rbp) // C2 = 5
Then for the first calculation, this is performed (comments simplified for easier understanding):
addl $1, -24(%rbp) // ++B1
subl $1, -28(%rbp) // --C1
movl -24(%rbp), %eax //
cltd
idivl -28(%rbp) // divide B1 by C1
movl %eax, %esi //
movl -20(%rbp), %eax //
cltd
idivl %esi // divide A1 by the previous
movl %eax, -20(%rbp)
By the C Operator Precedence Table: http://www.difranco.net/compsci/C_Operator_Precedence_Table.htm
For 1:
A1 /= ++B1 / --C1
C1 will first be decremented by 1 to 4
B1 will then be incremented by 1 to 6
B1 (6) will be divided by C1 (4), the result of which will be 1
A1 will be assigned the result of dividing A1 (5) by 1, which is 5
Results for each will be 5, 6 and 4 for A1, B1 and C1, respectively.
For 2:
A2 += ++B2 % C2--
C2 will first be marked to be decremented at the end of the statement; it remains 5 for now
B2 will then be incremented by 1 to 6
The remainder of the division of B2 (6) by C2 (5) will be calculated, which is 1
A2 will be assigned the result of adding A2 (5) and 1, which is 6
C2 will be decremented by 1 to 4
Results for each will be 6, 6 and 4 for A2, B2 and C2, respectively.
Pardon me if I have made any mistakes, you can always check these with your compiler.
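For example (my test program, not from the original answer):
#include <stdio.h>

int main(void) {
    int A1 = 5, B1 = 5, C1 = 5, A2 = 5, B2 = 5, C2 = 5;
    A1 /= ++B1 / --C1;
    A2 += ++B2 % C2--;
    printf("A1=%d B1=%d C1=%d\n", A1, B1, C1);  /* A1=5 B1=6 C1=4 */
    printf("A2=%d B2=%d C2=%d\n", A2, B2, C2);  /* A2=6 B2=6 C2=4 */
    return 0;
}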

SSE best way to set register to 0.0's and 1.0's?

I am doing some sse vector3 math.
Generally, I set the 4th element of my vector to 1.0f, as this makes most of my math work, but sometimes I need to set it to 0.0f.
So I want to change something like:
(32.4f, 21.2f, -4.0f, 1.0f) to (32.4f, 21.2f, -4.0f, 0.0f)
I was wondering what the best method to doing so would be:
Convert to 4 floats, set 4th float, send back to SSE
xor a register with itself, then do 2 shufps
Do all the SSE math with 1.0f and then set the variables to what they should be when finished.
Other?
Note: The vector is already in a SSE register when I need to change it.
AND with a constant mask.
In assembly ...
myMask:
.long 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000
...
andps myMask, %xmm#
where # = {0, 1, 2, ....}
Hope this helps.
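The same idea expressed from C with intrinsics (a sketch; the helper name is mine): build a constant mask whose fourth element is all-zero bits and AND it with the vector.
#include <emmintrin.h>

static inline __m128 zero_w(__m128 v) {
    /* elements 0-2 all-ones, element 3 (the 1.0f slot) all-zeros;
       _mm_set_epi32 takes the highest element first */
    const __m128 mask = _mm_castsi128_ps(_mm_set_epi32(0, -1, -1, -1));
    return _mm_and_ps(v, mask);
}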
Assuming your original vector is in xmm0:
; xmm0 = [x y z w]
xorps %xmm1, %xmm1 ; [0 0 0 0]
pcmpeqd %xmm2, %xmm2 ; [1 1 1 1]
movss %xmm1, %xmm2 ; [0 1 1 1]
pshufd $0x39, %xmm2, %xmm2 ; [1 1 1 0]
andps %xmm2, %xmm0 ; [x y z 0]
should be fast since it does not access memory.
If you want to do it without memory access, you could realize that the value 1 has a zero word in it, and the value zero is all zeroes. So, you can just copy the zero word to the other. If you have the 1 in the highest dword, pshufhw xmm0, xmm0, 0xa4 should do the trick:
(gdb) ni
4 pshufhw $0xa4, %xmm0, %xmm0
(gdb) p $xmm0.v4_float
$4 = {32.4000015, 21.2000008, -4, 1}
(gdb) ni
5 ret
(gdb) p $xmm0.v4_float
$5 = {32.4000015, 21.2000008, -4, 0}
A similar trick for the other locations is left as an exercise to the reader :)
pinsrw?
Why not multiply your vector element-wise with [1 1 1 0]? I'm pretty sure there is an SSE instruction for element-wise multiplication.
Then to go back to a vector with a 1 in the 4th dimension, just add [0 0 0 1]. Again there is an SSE instruction for that, too.
