I started practicing assembly not long ago.
I want to implement efficient multiplication through the assembly instructions lea and shift.
I want to write a C program that calls an assembly procedure which fits a constant argument received from the user, and multiplies another argument received from the user by that constant.
How can I make this code efficient?
What numbers can I group (if any) to fit the same procedure?
For example, I think that I can group 2, 4, 8, ... into the same procedure, as they are just a left shift by 1, 2, 3, and so on.
But I'm having trouble finding other groups like this one with other numbers, and what about negatives?
The interesting part of this exercise is finding ways to use 1 or 2 LEA, SHL, and/or ADD/SUB instructions to implement multiplies by various constants.
Actually dispatching on the fly for a single multiply isn't very interesting, and would mean either actual JIT compiling or that you have every possible sequence already present in a giant table of tiny blocks of code. (Like switch statements.)
Instead I'd suggest writing a C or Python or whatever function that takes 1 integer arg, and as output produces the asm source text that implements x * n where n is the integer arg. i.e. a function like you might find in a compiler that optimizes a multiply-by-constant.
You might want to cook up an automated way to test this, e.g. by comparing against a pure C x * n for a couple different x values.
If you can't get the job done in 2 instructions (or 3 with one of them being mov), it's not worth it. Modern x86 has ridiculously efficient multiply in hardware. imul reg, r/m, imm is 1 uop, 3 cycle latency, fully pipelined. (AMD since Zen, Intel since Core2 or Nehalem or so.) That's your fallback for anything that you can't get done with a critical path length of 1 or 2 cycles (assuming zero-latency mov if you want, like IvyBridge+ and Zen.)
Or you could set a higher threshold before fallback if you want to explore more complicated sequences, e.g. aim for 64-bit multiply on Bulldozer-family (6 cycle latency). https://agner.org/optimize/. Or even P5 Pentium where imul takes 9 cycles (not pairable).
Patterns to look for
Integer multiply boils down to adding up shifted copies of 1 operand where the other operand has 1 bits. (See the algorithm for implementing multiply by runtime-variable values, by shift and add checking each bit one at a time.)
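For reference, that bit-at-a-time algorithm looks like this in C (a quick sketch for illustration, not something you'd ship; hardware imul exists for the general case):

#include <stdint.h>

/* Shift-and-add multiply: for each set bit of n, add the
   correspondingly shifted copy of x. */
uint32_t mul_shift_add(uint32_t x, uint32_t n)
{
    uint32_t result = 0;
    while (n) {
        if (n & 1)
            result += x;   /* this bit of n is set: add the shifted copy */
        x <<= 1;           /* shift the copy for the next bit position */
        n >>= 1;
    }
    return result;
}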
The easiest pattern is of course only a single set bit, i.e. a power of 2; then it's just a left shift. This is easy to check for: (n & (n-1)) == 0, when n != 0.
Anything with exactly 2 set bits is at most 2 shifts and an add. (GNU C __builtin_popcount(n) counts set bits. In x86 asm, SSE4.2 popcnt).
GNU C __builtin_ctz finds the bit-index of the lowest set bit. Using it on a number you know is non-zero will give you the shift count for the low bit. In x86 asm, bsf / tzcnt.
To clear that lowest set bit and "expose" the next-lowest, you can do n &= n-1;. In x86 asm, BMI1 blsr or LEA / AND.
Another interesting pattern to look for is 2^n ± 1. The +1 case is already covered by the 2-set-bits case, but the shift count for the low bit is 0; no shift needed. With shift counts up to 3, you can do it in one LEA.
You can detect 2^n - 1 by checking if n+1 is a power of 2 (has only 1 bit set). Somewhat more complex, (2^n - 1) * 2^m can be done with this trick plus another shift. So you could try right-shifting to bring the lowest set bit to the bottom then looking for tricks.
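Putting those checks together, here's a minimal sketch (GNU C, using the builtins mentioned above) of the kind of generator function suggested earlier: it prints an instruction sequence for x*n, with x arriving in EDI and the result in EAX. The exact sequences and register choices are illustrative, not what GCC or Clang actually emit.

#include <stdint.h>
#include <stdio.h>

static void emit_mul(uint32_t n)
{
    if (n == 0) {
        puts("\txor eax, eax");
    } else if ((n & (n - 1)) == 0) {              /* power of 2: one shift */
        printf("\tmov eax, edi\n\tshl eax, %d\n", __builtin_ctz(n));
    } else if ((n & (n + 1)) == 0) {              /* 2^k - 1: shift then subtract */
        printf("\tmov eax, edi\n\tshl eax, %d\n\tsub eax, edi\n",
               __builtin_ctz(n + 1));
    } else if (__builtin_popcount(n) == 2) {      /* 2 set bits: 2 shifts + add */
        int lo = __builtin_ctz(n);
        int hi = 31 - __builtin_clz(n);
        printf("\tmov eax, edi\n\tshl eax, %d\n", hi);
        if (lo) printf("\tshl edi, %d\n", lo);    /* clobbers edi */
        puts("\tadd eax, edi");
    } else {
        printf("\timul eax, edi, %u\n", n);       /* fall back to hardware multiply */
    }
}

int main(void)
{
    emit_mul(15);   /* hits the 2^k - 1 pattern */
    emit_mul(17);   /* hits the 2-set-bits pattern */
    return 0;
}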
GCC does this the 2^n - 1 way:
mul15:                        # gcc -O3 -mtune=bdver2
    mov     eax, edi
    sal     eax, 4
    sub     eax, edi
    ret
clang is more efficient (for Intel CPUs where scaled-index is still only 1 cycle latency):
mul15:                        # clang -O3 -mtune=bdver2
    lea     eax, [rdi + 4*rdi]
    lea     eax, [rax + 2*rax]
    ret
Combining these patterns
Maybe factorize your number into its prime factors and look for ways to use your building blocks to do combinations of those factors.
But this isn't the only approach. You can do x*11 as x*5*2 + x, which is what GCC and Clang do (and which is a lot like How to multiply a register by 37 using only 2 consecutive leal instructions in x86?):
    lea     eax, [rdi + 4*rdi]    ; x*5
    lea     eax, [rdi + 2*rax]    ; x + x*5*2 = x*11
There are 2 approaches for x*17 as well. GCC and Clang do it this way:
mul17:
    mov     eax, edi
    sal     eax, 4
    add     eax, edi
    ret
But another way which they fail to use even with -march=sandybridge (no mov-elimination, 1-cycle LEA [reg + reg*scale]) is:
mul17:
    lea     eax, [rdi + 8*rdi]    ; x*9
    lea     eax, [rax + 8*rdi]    ; x*9 + x*8 = x*17
So instead of multiplying factors, we're adding different multipliers to make the total multiplier.
I don't have any great suggestions for how to programmatically search for these sequences beyond the simple ones like 2 set bits, or 2^n ± 1. If you're curious, have a look in the GCC or LLVM source code for the functions that do these optimizations; they find a lot of tricky ones.
The work might be split between target-neutral optimization passes for powers of 2 vs. x86-specific target code for using LEA, and for deciding on a threshold of how many instructions is worth it before falling back to imul-immediate.
Negative numbers
x * -8 could be done with x - x*9. I think that might be safe even if x*9 overflows but you'd have to double-check on that.
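To double-check that, here's a quick test sketch in C using unsigned (well-defined, wrapping) arithmetic; x - x*9 and x*-8 are congruent mod 2^32, so the identity holds even when 9*x wraps:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* spot-check values, including ones where 9*x wraps past 32 bits */
    uint32_t tests[] = { 0u, 1u, 0x20000000u, 0x7FFFFFFFu, 0xFFFFFFFFu };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        uint32_t x = tests[i];
        assert(x - x * 9u == x * 0xFFFFFFF8u);   /* 0xFFFFFFF8 == -8 mod 2^32 */
    }
    return 0;
}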
Look at compiler output
#define MULFUN(c) int mul##c(int x) { return x*c; }
MULFUN(9)
MULFUN(10)
MULFUN(11)
MULFUN(12)
...
I put that on the Godbolt compiler explorer for the x86-64 System V ABI (first arg in RDI, like the above examples). With gcc and clang -O3. I used -mtune=bdver2 (Piledriver) because it has somewhat slower multiply than Intel or Zen. This encourages GCC and Clang to avoid imul slightly more aggressively.
I didn't check whether long / uint64_t would change that (6 cycle instead of 4 cycle latency, and half the throughput), or whether an older uarch like -mtune=nocona (Pentium 4) would make a difference. -mtune=bdver2 did make a difference vs. the default tune=generic for GCC at least.
If you use -m32, you can use even older uarches like -mtune=pentium (in-order P5). I'd recommend -mregparm=3 for that so args are still passed in registers, not the stack.
Recently I had to write code for critical real-time functionality, and I used a few __builtin_... functions. I understand that such code is not portable, because not all compilers support the "__builtin_..." functions or syntax. I was wondering if there is a way to write the code in plain C so that the compiler would be able to recognize it and use some internal "__builtin_..."-like function?
Below is a description of a small experiment I did, but my question is:
Are there any tips, best-known methods, or guidelines for writing portable C code so that the compiler is able to detect the pattern (let's put compiler bugs aside) and use the maximum ability of the target CPU architecture?
For example, to reverse the bytes in a dword (so that the first byte becomes the last one, the last one becomes the first one, and so on), the x86_64 architecture has a dedicated assembly instruction: bswap. I tried 4 different options:
#include <stdint.h>
#include <stdlib.h>

typedef union _helper_s
{
    uint32_t val;
    uint8_t  bytes[4];
} helper_u;

uint32_t reverse(uint32_t d)
{
    helper_u b;
    uint8_t temp;

    b.val = d;
    temp = b.bytes[0];
    b.bytes[0] = b.bytes[3];
    b.bytes[3] = temp;
    temp = b.bytes[1];
    b.bytes[1] = b.bytes[2];
    b.bytes[2] = temp;
    return b.val;
}

uint32_t reverse1(uint32_t d)
{
    helper_u b;
    uint8_t temp;

    b.val = d;
    for (size_t i = 0; i < sizeof(uint32_t) / 2; i++)
    {
        temp = b.bytes[i];
        b.bytes[i] = b.bytes[sizeof(uint32_t) - i - 1];
        b.bytes[sizeof(uint32_t) - i - 1] = temp;
    }
    return b.val;
}

uint32_t reverse2(uint32_t d)
{
    return (d << 24) | (d >> 24) | ((d & 0xFF00) << 8) | ((d & 0xFF0000) >> 8);
}

uint32_t reverse3(uint32_t d)
{
    return __builtin_bswap32(d);
}
All the options provide the same functionality. I compiled it with different compilers and different optimization levels, and the results were not so good:
GCC - did great! For both -O3 and -Os optimization levels it gave the same result for all the functions:
reverse:
    mov     eax, edi
    bswap   eax
    ret
reverse1:
    mov     eax, edi
    bswap   eax
    ret
reverse2:
    mov     eax, edi
    bswap   eax
    ret
reverse3:
    mov     eax, edi
    bswap   eax
    ret
Clang disappointed me a little. With -O3 it gave the same result as GCC; however, with -Os it totally lost the path in reverse1. It didn't recognize the pattern and produced a far less optimal binary:
reverse1:                               # @reverse1
    lea     rax, [rsp - 8]
    mov     dword ptr [rax], edi
    mov     ecx, 3
.LBB1_1:                                # =>This Inner Loop Header: Depth=1
    mov     sil, byte ptr [rax]
    mov     dl, byte ptr [rsp + rcx - 8]
    mov     byte ptr [rax], dl
    mov     byte ptr [rsp + rcx - 8], sil
    dec     rcx
    inc     rax
    cmp     rcx, 1
    jne     .LBB1_1
    mov     eax, dword ptr [rsp - 8]
    ret
Actually, the difference between reverse and reverse1 is that reverse is the loop-unrolled version of reverse1, so I assume that with -Os the compiler didn't even try to unroll, or to anticipate the purpose of the for loop.
With ICC, things went even worse: it was unable to recognize the pattern in the reverse and reverse1 functions, with both the -O3 and the -Os optimization levels.
P.S.
I often hear people say that code has to be written so that even a junior programmer would easily be able to understand it, and that modern compilers are "smart" enough to take care of the optimizations. Now I have evidence that this is not true (or at least not always true).
As far as I am aware, the proper way to do this is with conditional compilation.
My suggestion is to write plain normal code in standard C as the default, both for maintainability and as a fall-back path that all compilers can handle. Utilize conditional compilation only as necessary to optimize for specific compilers, with a comment explaining the reason for the exception.
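As a sketch of what that conditional compilation can look like (the intrinsics named here exist in GCC/Clang and MSVC respectively, but which compilers are actually worth special-casing is an assumption you'd verify against your own targets):

#include <stdint.h>
#if defined(_MSC_VER)
#include <stdlib.h>   /* _byteswap_ulong */
#endif

static inline uint32_t reverse_bytes(uint32_t d)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_bswap32(d);       /* GCC/Clang built-in */
#elif defined(_MSC_VER)
    return _byteswap_ulong(d);         /* MSVC intrinsic */
#else
    /* Portable fallback; good compilers recognize this as bswap anyway. */
    return ((d & 0x000000FFU) << 24) |
           ((d & 0x0000FF00U) <<  8) |
           ((d & 0x00FF0000U) >>  8) |
           ((d & 0xFF000000U) >> 24);
#endif
}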
The technique used for reverse2 is fairly idiomatic (here, for example), and your own testing showed that it is properly optimized on all the systems you tested on. To make the implementation easier to understand, you can introduce more whitespace, and follow a more regular pattern.
uint32_t reverse2(uint32_t d)
{
    return ((d & 0x000000FFU) << 24) |
           ((d & 0x0000FF00U) <<  8) |
           ((d & 0x00FF0000U) >>  8) |
           ((d & 0xFF000000U) >> 24) ;
}
Try It Online : gcc
Try It Online : clang
To your specific points:
Are there any tips, best-known methods, or guidelines for writing portable C code so that the compiler is able to detect the pattern (let's put compiler bugs aside) and use the maximum ability of the target CPU architecture?
The key take away should be to try to write idiomatic code. Judging code to be understandable is somewhat subjective. What may seem clear to me can appear incomprehensible to someone else (and vice versa). However, there are common idioms in C programming that should be followed whenever it is appropriate to do so.
Unfortunately, I do not have a handy list of idioms off the top of my head. But I can say I largely learned C from reading The C Programming Language (by K&R, of course), and I was an avid reader of C Programming FAQs (by Steve Summit).
However, a very good resource for C idioms can be found by reading and comprehending open source C projects, and of course the source code base of the company you work at. Following the latter has the added benefit that any code you add that follows existing conventions will naturally increase the chances of it being understood by someone else in the company.
I often hear people say that code has to be written so that even a junior programmer would easily be able to understand it, and that modern compilers are "smart" enough to take care of the optimizations. Now I have evidence that this is not true (or at least not always true).
Compilers are just programs, so they cannot read your mind. The compiler will be programmed to look for particular patterns in the AST and apply optimizations to transform the tree into what it considers more optimal. Similarly, the peephole optimizer will look for patterns in the generated machine instructions, and then transform them into fewer equivalent instructions.
But these transformations are only possible if the generated tree or generated instructions follow a recognizable pattern. And these patterns are often determined by analyzing real-world software to see what kind of code gets generated for certain operations. If your code does not result in something the compiler can recognize, you may be partially losing out on the compiler's help to optimize.
Thus, another reason to try to write idiomatic C code.
Now, it can be argued that forcing oneself to write idiomatic C is a form of micro-optimization. Should you try to teach the compiler how to optimize the way you write code, or let the compiler teach you how to write code it knows how to optimize? However, the momentum is carried by the existing C programmers that write code idiomatically. New C programmers adopt these idioms for the sake of writing code more easily understood by the people that will be reviewing their code.
I'm trying to compute the bit parity of a large number of uint64's. By bit parity I mean a function that accepts a uint64 and outputs 0 if the number of set bits is even, and 1 otherwise.
Currently I'm using the following function (by @Troyseph, found here):
uint parity64(uint64 n){
    n ^= n >> 1;
    n ^= n >> 2;   // parity of each nibble is now in that nibble's low bit
    n = (n & 0x1111111111111111) * 0x1111111111111111;  // sum the nibble parities into the top nibble
    return (n >> 60) & 1;
}
The same SO page has the following assembly routine (by @papadp):
.code
; bool CheckParity(size_t Result)
CheckParity PROC
    mov     rax, 0
    add     rcx, 0
    jnp     jmp_over
    mov     rax, 1
jmp_over:
    ret
CheckParity ENDP
END
which takes advantage of the machine's parity flag. But I cannot get it to work with my C program (I know next to no assembly).
Question. How can I include the above (or similar) code as inline assembly in my C source file, so that the parity64() function runs that instead?
(I'm using GCC with 64-bit Ubuntu 14 on an Intel Xeon Haswell)
In case it's of any help, the parity64() function is called inside the following routine:
uint bindot(uint64* a, uint64* b, uint64 entries){
    uint parity = 0;

    for(uint i = 0; i < entries; ++i)
        parity ^= parity64(a[i] & b[i]); // Running sum!

    return parity;
}
(This is supposed to be the "dot product" of two vectors over the field Z/2Z, aka. GF(2).)
This may sound a bit harsh, but I believe it needs to be said. Please don't take it personally; I don't mean it as an insult, especially since you already admitted that you "know next to no assembly." But if you think code like this:
CheckParity PROC
    mov     rax, 0
    add     rcx, 0
    jnp     jmp_over
    mov     rax, 1
jmp_over:
    ret
CheckParity ENDP
will beat what a C compiler generates, then you really have no business using inline assembly. In just those 5 lines of code, I see 2 instructions that are glaringly sub-optimal. It could be optimized by just rewriting it slightly:
    xor     eax, eax
    test    ecx, ecx    ; logically, should use RCX, but see below for behavior of PF
    jnp     jmp_over
    mov     eax, 1      ; or possibly even "inc eax"; would need to verify
jmp_over:
    ret
Or, if you have random input values that are likely to foil the branch predictor (i.e., there is no predictable pattern to the parity of the input values), then it would be faster yet to remove the branch, writing it as:
    xor     eax, eax
    test    ecx, ecx
    setp    al
    ret
Or perhaps the equivalent (which will be faster on certain processors, but not necessarily all):
    xor     eax, eax
    test    ecx, ecx
    mov     ecx, 1
    cmovp   eax, ecx
    ret
And these are just the improvements I could see off the top of my head, given my existing knowledge of the x86 ISA and previous benchmarks that I have conducted. But lest anyone be fooled, this is undoubtedly not the fastest code, because (borrowing from Michael Abrash), "there ain't no such thing as the fastest code"—someone can virtually always make it faster yet.
There are enough problems with using inline assembly when you're an expert assembly-language programmer and a wizard when it comes to the intricacies of the x86 ISA. Optimizers are pretty darn good nowadays, which means it's hard enough for a true guru to produce better code (though certainly not impossible). It also takes trustworthy benchmarks that will verify your assumptions and confirm that your optimized inline assembly is actually faster. Never commit yourself to using inline assembly to outsmart the compiler's optimizer without running a good benchmark. I see no evidence in your question that you've done anything like this. I'm speculating here, but it looks like you saw that the code was written in assembly and assumed that meant it would be faster. That is rarely the case. C compilers ultimately emit assembly language code, too, and it is often more optimal than what we humans are capable of producing, given a finite amount of time and resources, much less limited expertise.
In this particular case, there is a notion that inline assembly will be faster than the C compiler's output, since the C compiler won't be able to intelligently use the x86 architecture's built-in parity flag (PF) to its benefit. And you might be right, but it's a pretty shaky assumption, far from universalizable. As I've said, optimizing compilers are pretty smart nowadays, and they do optimize to a particular architecture (assuming you specify the right options), so it would not at all surprise me that an optimizer would emit code that used PF. You'd have to look at the disassembly to see for sure.
As an example of what I mean, consider the highly specialized BSWAP instruction that x86 provides. You might naïvely think that inline assembly would be required to take advantage of it, but it isn't. The following C code compiles to a BSWAP instruction on almost all major compilers:
uint32 SwapBytes(uint32 x)
{
    return ((x << 24) & 0xff000000 ) |
           ((x <<  8) & 0x00ff0000 ) |
           ((x >>  8) & 0x0000ff00 ) |
           ((x >> 24) & 0x000000ff );
}
The performance will be equivalent, if not better, because the optimizer has more knowledge about what the code does. In fact, a major benefit this form has over inline assembly is that the compiler can perform constant folding with this code (i.e., when called with a compile-time constant). Plus, the code is more readable (at least, to a C programmer), much less error-prone, and considerably easier to maintain than if you'd used inline assembly. Oh, and did I mention it's reasonably portable if you ever wanted to target an architecture other than x86?
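For instance, a call on a compile-time constant folds away entirely:

/* With the pure-C SwapBytes, this becomes the constant 0x78563412 at
   compile time; an inline-asm version would force a runtime BSWAP. */
uint32 folded = SwapBytes(0x12345678);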
I know I'm making a big deal of this, and I want you to understand that I say this as someone who enjoys the challenge of writing highly-tuned assembly code that beats the compiler's optimizer in performance. But every time I do it, it's just that: a challenge, which comes with sacrifices. It isn't a panacea, and you need to remember to check your assumptions, including:
Is this code actually a bottleneck in my application, such that optimizing it would even make any perceptible difference?
Is the optimizer actually emitting sub-optimal machine language instructions for the code that I have written?
Am I wrong in what I naïvely think is sub-optimal? Maybe the optimizer knows more than I do about the target architecture, and what looks like slow or sub-optimal code is actually faster. (Remember that less code is not necessarily faster.)
Have I tested it in a meaningful, real-world benchmark, and proven that the compiler-generated code is slow and that my inline assembly is actually faster?
Is there absolutely no way that I can tweak the C code to persuade the optimizer to emit better machine code that is close, equal to, or even superior to the performance of my inline assembly?
In an attempt to answer some of these questions, I set up a little benchmark. (Using MSVC, because that's what I have handy; if you're targeting GCC, it's best to use that compiler, but we can still get a general idea. I use and recommend Google's benchmarking library.) And I immediately ran into problems. See, I first run my benchmarks in "debugging" mode, with assertions compiled in that verify that my "tweaked"/"optimized" code is actually producing the same results for all test cases as the original code (that is presumably known to be working/correct). In this case, an assertion immediately fired. It turns out that the CheckParity routine written in assembly language does not return identical results to the parity64 routine written in C! Uh-oh. Well, that's another bullet we need to add to the above list:
Have I ensured that my "optimized" code is returning the correct results?
This one is especially critical, because it's easy to make something faster if you also make it wrong. :-) I jest, but not entirely, because I've done this many times in the pursuit of faster code.
I believe Michael Petch has already pointed out the reason for the discrepancy: in the x86 implementation, the parity flag (PF) only concerns itself with the bits in the low byte, not the entire value. If that's all you need, then great. But even then, we can go back to the C code and further optimize it to do less work, which will make it faster—perhaps faster than the assembly code, eliminating the one advantage that inline assembly ever had.
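To illustrate: if the parity of the low byte really is all you need (i.e., exactly what PF reports), the C can shrink to a fraction of the original work. A sketch:

/* Parity of only the low 8 bits -- the same quantity PF reflects. */
unsigned int parity_low8(uint64 n)
{
    unsigned int b = (unsigned int)(n & 0xFF);
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return (b & 1);   /* 1 if the low byte has an odd number of set bits */
}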
For now, let's assume that you need the parity of the full value, since that's the original implementation you had that was working, and you're just trying to make it faster without changing its behavior. Thus, we need to fix the assembly code's logic before we can even proceed with meaningfully benchmarking it. Fortunately, since I am writing this answer late, Ajay Brahmakshatriya (with collaboration from others) has already done that work, saving me the extra effort.
…except, not quite. When I first drafted this answer, my benchmark revealed that draft 9 of his "tweaked" code still did not produce the same result as the original C function, so it's unsuitable according to our test cases. You say in a comment that his code "works" for you, which means either (A) the original C code was doing extra work, making it needlessly slow, meaning that you can probably tweak it to beat the inline assembly at its own game, or worse, (B) you have insufficient test cases and the new "optimized" code is actually a bug lying in wait. Since that time, Ped7g suggested a couple of fixes, which both fixed the bug causing the incorrect result to be returned, and further improved the code. The amount of input required here, and the number of drafts that he has gone through, should serve as testament to the difficulty of writing correct inline assembly to beat the compiler. But we're not even done yet! His inline assembly remains incorrectly written. SETcc instructions require an 8-bit register as their operand, but his code doesn't use a register specifier to request that, meaning that the code either won't compile (because Clang is smart enough to detect this error) or will compile on GCC but won't execute properly because that instruction has an invalid operand.
Have I convinced you about the importance of testing yet? I'll take it on faith, and move on to the benchmarking part. The benchmark results use the final draft of Ajay's code, with Ped7g's improvements, and my additional tweaks. I also compare some of the other solutions from that question you linked, modified for 64-bit integers, plus a couple of my own invention. Here are my benchmark results (mobile Haswell i7-4850HQ):
Benchmark                        Time    CPU    Iterations
----------------------------------------------------------
Naive                           36 ns   36 ns     19478261
OriginalCCode                    4 ns    4 ns    194782609
Ajay_Brahmakshatriya_Tweaked     4 ns    4 ns    194782609
Shreyas_Shivalkar               37 ns   37 ns     17920000
TypeIA                           5 ns    5 ns    154482759
TypeIA_Tweaked                   4 ns    4 ns    160000000
has_even_parity                227 ns  229 ns      3200000
has_even_parity_Tweaked         36 ns   36 ns     19478261
GCC_builtin_parityll             4 ns    4 ns    186666667
PopCount                         3 ns    3 ns    248888889
PopCount_Downlevel               5 ns    5 ns    100000000
Now, keep in mind that these are for randomly-generated 64-bit input values, which disrupts branch prediction. If your input values are biased in a predictable way, either towards parity or non-parity, then the branch predictor will work for you, rather than against you, and certain approaches may be faster. This underscores the importance of benchmarking against data that simulates real-world use cases. (That said, when I write general library functions, I tend to optimize for random inputs, balancing size and speed.)
Notice how the original C function compares to the others. I'm going to make the claim that optimizing it any further is probably a big fat waste of time. So hopefully you learned something more general from this answer, rather than just scrolled down to copy-paste the code snippets. :-)
The Naive function is a completely unoptimized sanity check to determine the parity, taken from here. I used it to validate even your original C code, and also to provide a baseline for the benchmarks. Since it loops through each bit, one-by-one, it is relatively slow, as expected:
unsigned int Naive(uint64 n)
{
    bool parity = false;
    while (n)
    {
        parity = !parity;
        n &= (n - 1);
    }
    return parity;
}
OriginalCCode is exactly what it sounds like—it's the original C code that you had, as shown in the question. Notice how it posts up at exactly the same time as the tweaked/corrected version of Ajay Brahmakshatriya's inline assembly code! Now, since I ran this benchmark in MSVC, which doesn't support inline assembly for 64-bit builds, I had to use an external assembly module containing the function, and call it from there, which introduced some additional overhead. With GCC's inline assembly, the compiler probably would have been able to inline the code, thus eliding a function call. So on GCC, you might see the inline-assembly version be up to a nanosecond faster (or maybe not). Is that worth it? You be the judge. For reference, this is the code I tested for Ajay_Brahmakshatriya_Tweaked:
Ajay_Brahmakshatriya_Tweaked PROC
    mov     rax, rcx    ; Windows 64-bit calling convention passes the parameter in RCX (System V uses RDI)
    shr     rax, 32
    xor     rcx, rax
    mov     rax, rcx
    shr     rax, 16
    xor     rcx, rax
    mov     rax, rcx
    shr     rax, 8
    xor     eax, ecx    ; Ped7g's TEST is redundant; XOR already sets PF
    setnp   al
    movzx   eax, al
    ret
Ajay_Brahmakshatriya_Tweaked ENDP
The function named Shreyas_Shivalkar is from his answer here, which is just a variation on the loop-through-each-bit theme, and is, in keeping with expectations, slow:
Shreyas_Shivalkar PROC
    ; unsigned int parity = 0;
    ; while (x != 0)
    ; {
    ;    parity ^= x;
    ;    x >>= 1;
    ; }
    ; return (parity & 0x1);
    xor     eax, eax
    test    rcx, rcx
    je      SHORT Finished
Process:
    xor     eax, ecx
    shr     rcx, 1
    jne     SHORT Process
Finished:
    and     eax, 1
    ret
Shreyas_Shivalkar ENDP
TypeIA and TypeIA_Tweaked are the code from this answer, modified to support 64-bit values, and my tweaked version. They parallelize the operation, resulting in a significant speed improvement over the loop-through-each-bit strategy. The "tweaked" version is based on an optimization originally suggested by Mathew Hendry to Sean Eron Anderson's Bit Twiddling Hacks, and does net us a tiny speed-up over the original. (The magic constant 0x6996 acts as a 16-entry lookup table: bit n of 0x6996 is the parity of the 4-bit value n.)
unsigned int TypeIA(uint64 n)
{
    n ^= n >> 32;
    n ^= n >> 16;
    n ^= n >> 8;
    n ^= n >> 4;
    n ^= n >> 2;
    n ^= n >> 1;
    return !((~n) & 1);
}

unsigned int TypeIA_Tweaked(uint64 n)
{
    n ^= n >> 32;
    n ^= n >> 16;
    n ^= n >> 8;
    n ^= n >> 4;
    n &= 0xf;
    return ((0x6996 >> n) & 1);
}
has_even_parity is based on the accepted answer to that question, modified to support 64-bit values. I knew this would be slow, since it's yet another loop-through-each-bit strategy, but obviously someone thought it was a good approach. It's interesting to see just how slow it actually is, even compared to what I termed the "naïve" approach, which does essentially the same thing, but faster, with less-complicated code.
unsigned int has_even_parity(uint64 n)
{
    uint64 count = 0;
    uint64 b = 1;
    for (uint64 i = 0; i < 64; ++i)
    {
        if (n & (b << i)) { ++count; }
    }
    return (count % 2);
}
has_even_parity_Tweaked is an alternate version of the above that saves a branch by taking advantage of the fact that Boolean values are implicitly convertible into 0 and 1. It is substantially faster than the original, clocking in at a time comparable to the "naïve" approach:
unsigned int has_even_parity_Tweaked(uint64 n)
{
    uint64 count = 0;
    uint64 b = 1;
    for (uint64 i = 0; i < 64; ++i)
    {
        count += static_cast<int>(static_cast<bool>(n & (b << i)));
    }
    return (count % 2);
}
Now we get into the good stuff. The function GCC_builtin_parityll consists of the assembly code that GCC would emit if you used its __builtin_parityll intrinsic. Several others have suggested that you use this intrinsic, and I must echo their endorsement. Its performance is on par with the best we've seen so far, and it has a couple of additional advantages: (1) it keeps the code simple and readable (simpler than the C version); (2) it is portable to different architectures, and can be expected to remain fast there, too; (3) as GCC improves its implementation, your code may get faster with a simple recompile. You get all the benefits of inline assembly, without any of the drawbacks.
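For reference, using it is a one-liner in GNU C (the machine code it turns into is shown below):

/* Drop-in replacement for the original parity64(), on GCC/Clang. */
unsigned int parity64(uint64 n)
{
    return (unsigned int)__builtin_parityll(n);
}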
GCC_builtin_parityll PROC    ; GCC's __builtin_parityll
    mov     edx, ecx
    shr     rcx, 32
    xor     edx, ecx
    mov     eax, edx
    shr     edx, 16
    xor     eax, edx
    xor     al, ah
    setnp   al
    movzx   eax, al
    ret
GCC_builtin_parityll ENDP
PopCount is an optimized implementation of my own invention. To come up with this, I went back and considered what we were actually trying to do. The definition of "parity" is an even number of set bits. Therefore, it can be calculated simply by counting the number of set bits and testing to see if that count is even or odd. That's two logical operations. As luck would have it, on recent generations of x86 processors (Intel Nehalem or AMD Barcelona, and newer), there is an instruction that counts the number of set bits—POPCNT (population count, or Hamming weight)—which allows us to write assembly code that does this in two operations.
(Okay, actually three instructions, because there is a bug in the implementation of POPCNT on certain microarchitectures that creates a false dependency on its destination register, and to ensure we get maximum throughput from the code, we need to break this dependency by pre-clearing the destination register. Fortunately, this is a very cheap operation, one that can generally be handled for "free" by register renaming.)
PopCount PROC
    xor     eax, eax    ; break false dependency
    popcnt  rax, rcx
    and     eax, 1
    ret
PopCount ENDP
In fact, as it turns out, GCC knows to emit exactly this code for the __builtin_parityll intrinsic when you target a microarchitecture that supports POPCNT (otherwise, it uses the fallback implementation shown below). As you can see from the benchmarks, this is the fastest code yet. It isn't a major difference, so it's unlikely to matter unless you're doing this repeatedly within a tight loop, but it is a measurable difference and presumably you wouldn't be optimizing this so heavily unless your profiler indicated that this was a hot-spot.
But the POPCNT instruction does have the drawback of not being available on older processors, so I also measured a "fallback" version of the code that does a population count with a sequence of universally-supported instructions. That is the PopCount_Downlevel function, taken from my private library, originally adapted from this answer and other sources.
PopCount_Downlevel PROC
    mov     rax, rcx
    shr     rax, 1
    mov     rdx, 5555555555555555h
    and     rax, rdx
    sub     rcx, rax
    mov     rax, 3333333333333333h
    mov     rdx, rcx
    and     rcx, rax
    shr     rdx, 2
    and     rdx, rax
    add     rdx, rcx
    mov     rcx, 0FF0F0F0F0F0F0F0Fh
    mov     rax, rdx
    shr     rax, 4
    add     rax, rdx
    mov     rdx, 0FF01010101010101h
    and     rax, rcx
    imul    rax, rdx
    shr     rax, 56
    and     eax, 1
    ret
PopCount_Downlevel ENDP
As you can see from the benchmarks, all of the bit-twiddling instructions that are required here exact a cost in performance. It is slower than POPCNT, but supported on all systems and still reasonably quick. If you needed a bit count anyway, this would be the best solution, especially since it can be written in pure C without the need to resort to inline assembly, potentially yielding even more speed:
unsigned int PopCount_Downlevel(uint64 n)
{
    uint64 temp = n - ((n >> 1) & 0x5555555555555555ULL);
    temp = (temp & 0x3333333333333333ULL) + ((temp >> 2) & 0x3333333333333333ULL);
    temp = (temp + (temp >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    temp = (temp * 0x0101010101010101ULL) >> 56;
    return (temp & 1);
}
But run your own benchmarks to see if you wouldn't be better off with one of the other implementations, like OriginalCCode, which simplifies the operation and thus requires fewer total instructions. Fun fact: Intel's compiler (ICC) always uses a population count-based algorithm to implement __builtin_parityll; it emits a POPCNT instruction if the target architecture supports it, or otherwise, it simulates it using essentially the same code as I've shown here.
Or, better yet, just forget the whole complicated mess and let your compiler deal with it. That's what built-ins are for, and there's one for precisely this purpose.
Because C sucks when handling bit operations, I suggest using GCC's built-in functions, in this case __builtin_parityl(). See:
https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
You will have to use extended inline assembly (a GCC extension) to get a similar effect.
Your parity64 function can be changed as follows -
uint parity64_unsafe_and_broken(uint64 n){
    uint result = 0;
    __asm__("addq $0, %0" : : "r"(n) :);
    // editor's note: compiler-generated instructions here can destroy EFLAGS.
    // Don't depend on FLAGS / regs surviving between asm statements;
    // also, jumping out of an asm statement safely requires asm goto.
    __asm__("jnp 1f");
    __asm__("movl $1, %0" : "=r"(result) : : );
    __asm__("1:");
    return result;
}
But as commented by @MichaelPetch, the parity flag is computed only on the lower 8 bits. So this will work only if your n fits in one byte (n < 256). For bigger numbers you will have to use the code you mentioned in your question.
To get it working for 64 bits, you can collapse the parity of the 64-bit integer into a single byte by doing:
n = (n >> 32) ^ n;
n = (n >> 16) ^ n;
n = (n >> 8) ^ n;
This code goes right at the start of the function, before the assembly.
You will have to check how it affects the performance.
The most optimized version I could get is:
uint parity64(uint64 n){
    unsigned char result = 0;

    n = (n >> 32) ^ n;
    n = (n >> 16) ^ n;
    n = (n >> 8) ^ n;

    __asm__("test %1, %1 \n\t"
            "setp %0"    // note: PF=1 means an *even* number of set bits in the low
                         // byte, so setp returns 1 for even parity; use setnp to
                         // match the question's parity64 convention (1 for odd)
            : "+r"(result)
            : "r"(n)
            :
    );
    return result;
}
How can I include the above (or similar) code as inline assembly in my C source file, so that the parity64() function runs that instead?
This is an XY problem... You think you need to inline that assembly to gain from its benefits, so you asked about how to inline it... but you don't need to inline it.
You shouldn't include assembly into your C source code, because in this case you don't need to, and the better alternative (in terms of portability and maintainability) is to keep the two pieces of source code separate, compile them separately and use the linker to link them.
In parity64.c you should have your portable version (with a wrapper named bool CheckParity(size_t result)), which you can default to in non-x86/64 situations.
You can compile this to an object file like so: gcc -c parity64.c -o parity64.o
... and then link the object code generated from assembly, with the C code: gcc bindot.c parity64.o -o bindot
In parity64_x86.s you might have the following assembly code from your question:
.code
; bool CheckParity(size_t Result)
CheckParity PROC
    mov     rax, 0
    add     rcx, 0
    jnp     jmp_over
    mov     rax, 1
jmp_over:
    ret
CheckParity ENDP
END
You can compile this to an alternative parity64.o object file using gcc with this command: gcc -c parity64_x86.s -o parity64.o (note that the snippet above is MASM-style syntax; to assemble it with gcc you'd first need to translate it to GNU assembler syntax).
... and then link the object code generated like so: gcc bindot.c parity64.o -o bindot
Similarly, if you wanted to use __builtin_parityl instead (as suggested by hdante's answer), you could (and should) once again keep that code separate (in the same place you keep other gcc/x86 optimisations) from your portable code. In parity64_x86.c you might have:
#include <stdbool.h>
#include <stddef.h>

bool CheckParity(size_t result) {
    return __builtin_parityl(result);
}
To compile this, your command would be: gcc -c parity64_x86.c -o parity64.o
... and then link the object code generated like so: gcc bindot.c parity64.o -o bindot
On a side-note, if you'd like to inspect the assembly gcc would produce from this: gcc -S parity64_x86.c
Comments in your assembly indicate that the equivalent function prototype in C would be bool CheckParity(size_t Result), so with that in mind, here's what bindot.c might look like:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern bool CheckParity(size_t Result);

uint64_t bindot(uint64_t *a, uint64_t *b, size_t entries){
    uint64_t parity = 0;

    for(size_t i = 0; i < entries; ++i)
        parity ^= a[i] & b[i]; // Running sum!

    return CheckParity(parity);
}
You can build this and link it to any of the above parity64.o versions like so: gcc bindot.c parity64.o -o bindot...
I highly recommend reading the manual for your compiler, when you have the time...
In x86 assembly, the overflow flag is set when an add or sub operation on a signed integer overflows, and the carry flag is set when an operation on an unsigned integer overflows.
However, when it comes to the inc and dec instructions, the situation seems to be somewhat different. According to this website, the inc instruction does not affect the carry flag at all.
But I can't find any information about how inc and dec affect the overflow flag, if at all.
Do inc or dec set the overflow flag when an integer overflow occurs? And is this behavior the same for both signed and unsigned integers?
============================= EDIT =============================
Okay, so essentially the consensus here is that INC and DEC should behave the same as ADD and SUB, in terms of setting flags, with the exception of the carry flag. This is also what it says in the Intel manual.
The problem is I can't actually reproduce this behavior in practice, when it comes to unsigned integers.
Consider the following assembly code (using GCC inline assembly to make it easier to print out results.)
int8_t ovf = 0;

__asm__
(
    "movb $-128, %%bh;"
    "decb %%bh;"
    "seto %b0;"
    : "=g"(ovf)
    :
    : "%bh"
);
printf("Overflow flag: %d\n", ovf);
Here we decrement a signed 8-bit value of -128. Since -128 is the smallest possible value, an overflow is inevitable. As expected, this prints out: Overflow flag: 1
But when we do the same with an unsigned value, the behavior isn't as I expect:
int8_t ovf = 0;

__asm__
(
    "movb $255, %%bh;"
    "incb %%bh;"
    "seto %b0;"
    : "=g"(ovf)
    :
    : "%bh"
);
printf("Overflow flag: %d\n", ovf);
Here I increment an unsigned 8-bit value of 255. Since 255 is the largest possible value, an overflow is inevitable. However, this prints out: Overflow flag: 0.
Huh? Why didn't it set the overflow flag in this case?
The overflow flag is set when an operation would cause a sign change. Your code is very close. I was able to set the OF flag with the following (VC++) code:
char ovf = 0;

_asm {
    mov bh, 127
    inc bh
    seto ovf
}
cout << "ovf: " << int(ovf) << endl;
When BH is incremented the MSB changes from a 0 to a 1, causing the OF to be set.
This also sets the OF:
char ovf = 0;

_asm {
    mov bh, 128
    dec bh
    seto ovf
}
cout << "ovf: " << int(ovf) << endl;
Keep in mind that the processor does not distinguish between signed and unsigned numbers. When you use 2's complement arithmetic, you can have one set of instructions that handle both. If you want to test for unsigned overflow, you need to use the carry flag. Since INC/DEC don't affect the carry flag, you need to use ADD/SUB for that case.
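To see that in action, repeat the experiment above with ADD in place of INC and test CF instead of OF; this sketch follows the question's inline-asm style:

int8_t cf = 0;

__asm__
(
    "movb $255, %%bh;"
    "addb $1, %%bh;"   /* unlike incb, addb does update CF */
    "setc %b0;"
    : "=g"(cf)
    :
    : "%bh"
);
printf("Carry flag: %d\n", cf);   /* prints 1: 255 + 1 wrapped */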
Intel® 64 and IA-32 Architectures Software Developer's Manuals
Look at the appropriate manual Instruction Set Reference, A-M. Every instruction is precisely documented.
Here is the INC section on affected flags:
The CF flag is not affected. The OF, SF, ZF, AF, and PF flags are set according to the result.
Try changing your test to pass in the number rather than hard-coding it, then have a loop that tries all 256 numbers to find the one (if any) that affects the flag. Or have the asm perform the loop and exit when it hits the flag, or when it wraps around to the number it started with (start with something other than 0x00, 0x7F, 0x80, or 0xFF).
EDIT
.globl inc
inc:
    mov $33, %eax
top:
    inc %al
    jo done
    jmp top
done:
    ret

.globl dec
dec:
    mov $33, %eax
topx:
    dec %al
    jo donex
    jmp topx
donex:
    ret
inc overflows when it goes from 0x7F to 0x80; dec overflows when it goes from 0x80 to 0x7F. I suspect the problem is in the way you are using the inline assembler.
As many of the other answers have pointed out, INC and DEC do not affect the CF, whereas ADD and SUB do.
What has not been said yet, however, is that this might make a performance difference. Not that you'd usually be bothered by that unless you are trying to optimise the hell out of a routine, but essentially not setting the CF means that INC/DEC only write to part of the flags register, which can cause a partial flag register stall, see Intel 64 and IA-32 Architectures Optimization Reference Manual or Agner Fog's optimisation manuals.
Except for the carry flag, inc sets the flags the same way as add operand, 1 would.
The fact that inc does not affect the carry flag is very important.
http://oopweb.com/Assembly/Documents/ArtOfAssembly/Volume/Chapter_6/CH06-2.html#HEADING2-117
The CPU/ALU is only capable of handling unsigned binary numbers, and it then uses OF, CF, AF, SF, ZF, etc., to let you decide whether to treat the result as a signed number (OF), an unsigned number (CF) or a BCD number (AF).
About your problem, remember to consider the binary numbers themselves, as unsigned.
Also, remember that overflow and the OF flag involve 3 numbers: the input number, a second number used in the arithmetic, and the result number.
Overflow is activated only if the first and second numbers have the same value for the sign bit (the most significant bit) and the result has a different sign. As in, adding 2 negative numbers resulted in a positive number, or adding 2 positive numbers resulted in a negative number:
if( (Sign_Num1==Sign_Num2) && (Sign_Result!=Sign_Num1) ) OF=1;
else OF=0;
For your first problem, you are using -128 as the first number. The second number is implicitly -1, used by the DEC instruction. So we really have the binary numbers 0x80 and 0xFF. Both of them have the sign bit set to 1. The result is 0x7F, which is a number with the sign bit set to 0. We got 2 initial numbers with the same sign, and a result with a different sign, so we indicate an overflow. -128 - 1 resulted in 127, and thus the overflow flag is set to indicate a wrong signed result.
For your second problem, you are using 255 as the first number. The second number is implicitly 1, used by the INC instruction. So we really have the binary numbers 0xFF and 0x01. They have different sign bits, so it is not possible to get an overflow (it is only possible to overflow when adding 2 numbers of the same sign; 2 numbers with different signs can never go beyond the representable signed range). The result is 0x00, and it doesn't set the overflow flag because 255 + 1, or more exactly, -1 + 1 gives 0, which is obviously correct for signed arithmetic.
Remember that for the overflow flag to be set, the 2 numbers being added/subtracted need to have the sign bit with the same value, and then the result must have a sign bit with a value different from them.
What the processor does is set the appropriate flags for the results of these instructions (add, adc, dec, inc, sbb, sub) for both the signed and unsigned cases, i.e., two different flag results for every op. The alternative would be having two sets of instructions, where one sets the signed-related flags and the other the unsigned-related ones. If the issuing compiler is using unsigned variables in the operation, it will test carry and zero (jc, jnc, jb, jbe, etc.); if signed, it tests overflow, sign and zero (jo, jno, jg, jng, jl, jle, etc.).
What is the fastest way you know to convert a floating-point number to an int on an x86 CPU? Preferably in C or assembly (that can be inlined in C) for any combination of the following:
32/64/80-bit float -> 32/64-bit integer
I'm looking for some technique that is faster than to just let the compiler do it.
It depends on whether you want a truncating conversion or a rounding one, and at what precision. By default, C performs a truncating conversion when you go from float to int. There are FPU instructions that do it, but it's not an ANSI C conversion, and there are significant caveats to using it (such as knowing the FPU rounding state). Since the answer to your problem is quite complex and depends on some variables you haven't expressed, I recommend this article on the issue:
http://www.stereopsis.com/FPU.html
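The truncation C mandates is easy to demonstrate:

#include <stdio.h>

int main(void)
{
    /* The cast truncates toward zero, regardless of the FPU rounding mode. */
    printf("%d %d\n", (int)2.7f, (int)-2.7f);   /* prints: 2 -2 */
    return 0;
}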
Packed conversion using SSE is by far the fastest method, since you can convert multiple values in the same instruction. ffmpeg has a lot of assembly for this (mostly for converting the decoded output of audio to integer samples); check it for some examples.
A commonly used trick for plain x86/x87 code is to force the mantissa part of the float to represent the int. The 32-bit version follows.
The 64-bit version is analogous. The Lua version posted elsewhere on this page is faster, but relies on the truncation of double to a 32-bit result, therefore it requires the x87 unit to be set to double precision, and cannot be adapted for double to 64-bit int conversion.
The nice thing about this code is it is completely portable for all platforms conforming to IEEE 754, the only assumption made is the floating point rounding mode is set to nearest. Note: Portable in the sense it compiles and works. Platforms other than x86 usually do not benefit much from this technique, if at all.
static const float Snapper = 3 << 22;

union UFloatInt {
    int   i;
    float f;
};

/** by Vlad Kaipetsky
    portable assuming FP24 set to nearest rounding mode
    efficient on x86 platform
*/
inline int toInt( float fval )
{
    Assert( fabs(fval) <= 0x003fffff );  // only 23 bit values handled
    UFloatInt &fi = *(UFloatInt *)&fval;
    fi.f += Snapper;
    return ( (fi.i) & 0x007fffff ) - 0x00400000;
}
There is one instruction to convert a floating-point number to an int in assembly: use the FISTP instruction. It pops the value off the floating-point stack, converts it to an integer, and then stores it at the address specified. I don't think there would be a faster way (unless you use extended instruction sets like MMX or SSE, which I am not familiar with).
Another instruction, FIST, leaves the value on the FP stack but I'm not sure it works with quad-word sized destinations.
If you can guarantee the CPU running your code is SSE3 compatible (even Pentium 5 is, JBB), you can allow the compiler to use its FISTTP instruction (i.e. -msse3 for gcc). It seems to do the thing like it should always have been done:
http://software.intel.com/en-us/articles/how-to-implement-the-fisttp-streaming-simd-extensions-3-instruction/
Note that FISTTP is different from FISTP (that has its problems, causing the slowness). It comes as part of SSE3 but is actually (the only) X87-side refinement.
CPUs other than x86 would probably do the conversion just fine anyway. :)
Processors with SSE3 support
The Lua code base has the following snippet to do this (check in src/luaconf.h from www.lua.org).
If you find (SO finds) a faster way, I'm sure they'd be thrilled.
Oh, lua_Number means double. :)
/*
@@ lua_number2int is a macro to convert lua_Number to int.
@@ lua_number2integer is a macro to convert lua_Number to lua_Integer.
** CHANGE them if you know a faster way to convert a lua_Number to
** int (with any rounding method and without throwing errors) in your
** system. In Pentium machines, a naive typecast from double to int
** in C is extremely slow, so any alternative is worth trying.
*/

/* On a Pentium, resort to a trick */
#if defined(LUA_NUMBER_DOUBLE) && !defined(LUA_ANSI) && !defined(__SSE2__) && \
    (defined(__i386) || defined (_M_IX86) || defined(__i386__))

/* On a Microsoft compiler, use assembler */
#if defined(_MSC_VER)

#define lua_number2int(i,d)     __asm fld d   __asm fistp i
#define lua_number2integer(i,n) lua_number2int(i, n)

/* the next trick should work on any Pentium, but sometimes clashes
   with a DirectX idiosyncrasy */
#else

union luai_Cast { double l_d; long l_l; };
#define lua_number2int(i,d) \
    { volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }
#define lua_number2integer(i,n) lua_number2int(i, n)

#endif

/* this option always works, but may be slow */
#else
#define lua_number2int(i,d)     ((i)=(int)(d))
#define lua_number2integer(i,d) ((i)=(lua_Integer)(d))
#endif
I assume truncation is required, same as if one writes i = (int)f in "C".
If you have SSE3, you can use:
int convert(float x)
{
    int n;
    __asm {
        fld x
        fisttp n   // the extra 't' means truncate
    }
    return n;
}
Alternately, with SSE2 (or in x64, where inline assembly might not be available), you can use this, which is almost as fast:
#include <xmmintrin.h>

int convert(float x)
{
    return _mm_cvtt_ss2si(_mm_load_ss(&x));  // extra 't' means truncate
}
On older computers there is an option to set the rounding mode manually and perform conversion using the ordinary fistp instruction. That will probably only work for arrays of floats, otherwise care must be taken to not use any constructs that would make the compiler change rounding mode (such as casting). It is done like this:
void Set_Trunc()
{
    // cw is a 16-bit register [_ _ _ ic rc1 rc0 pc1 pc0 iem _ pm um om zm dm im]
    __asm {
        push ax                  // use stack to store the control word
        fnstcw word ptr [esp]
        fwait                    // needed to make sure the control word is there
        mov ax, word ptr [esp]   // or pop ax ...
        or ax, 0xc00             // set both rc bits (alternately "or ah, 0xc")
        mov word ptr [esp], ax   // ... and push ax
        fldcw word ptr [esp]
        pop ax
    }
}

void convertArray(int *dest, const float *src, int n)
{
    Set_Trunc();
    __asm {
        mov eax, src
        mov edx, dest
        mov ecx, n               // load loop variables
        cmp ecx, 0
        je bottom                // handle zero-length arrays
    top:
        fld dword ptr [eax]
        fistp dword ptr [edx]
        add eax, 4               // advance the source pointer
        add edx, 4               // advance the destination pointer
        loop top                 // decrement ecx, jump to top
    bottom:
    }
}
Note that the inline assembly only works with Microsoft's Visual Studio compilers (and maybe Borland); it would have to be rewritten in GNU assembly syntax in order to compile with gcc.
The SSE2 solution with intrinsics should be quite portable, however.
Other rounding modes are possible by different SSE2 intrinsics or by manually setting the FPU control word to a different rounding mode.
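For example, the non-truncating scalar convert intrinsic rounds according to the current MXCSR rounding mode (round-to-nearest by default) instead of truncating:

#include <xmmintrin.h>

int convert_rounded(float x)
{
    /* cvtss2si rounds per the current rounding mode, unlike the
       truncating cvttss2si used in convert() above */
    return _mm_cvt_ss2si(_mm_load_ss(&x));
}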
If you really care about the speed of this, make sure your compiler is generating the FIST instruction. In MSVC you can do this with /QIfist; see this MSDN overview.
You can also consider using SSE intrinsics to do the work for you, see this article from Intel: http://softwarecommunity.intel.com/articles/eng/2076.htm
Since MS screws us out of inline assembly in x64 and forces us to use intrinsics, I looked up which one to use. The MSDN doc gives _mm_cvtsd_si64x with an example.
The example works, but is horribly inefficient, using an unaligned load of 2 doubles, where we need just a single load, so getting rid of the additional alignment requirement. Then a lot of needless loads and reloads are produced, but they can be eliminated as follows:
#include <intrin.h>
#pragma intrinsic(_mm_cvtsd_si64x)

long long _inline double2int(const double &d)
{
    return _mm_cvtsd_si64x(*(__m128d*)&d);
}
Result:
    i = double2int(d);
000000013F651085  cvtsd2si    rax, mmword ptr [rsp+38h]
000000013F65108C  mov         qword ptr [rsp+28h], rax
The rounding mode can be set without inline assembly, e.g.
_control87(_RC_NEAR,_MCW_RC);
where rounding to nearest is default (anyway).
The question whether to set the rounding mode at each call or to assume it will be restored (third party libs) will have to be answered by experience, I guess.
You will have to include float.h for _control87() and related constants.
And, no, this will not work in 32 bits, so keep using the FISTP instruction:
_asm fld d
_asm fistp i
Generally, you can trust the compiler to be efficient and correct. There is usually nothing to be gained by rolling your own functions for something that already exists in the compiler.