I was reading Knuth's The Art of Computer Programming and I noticed that he indicates that the DIV command takes 6 times longer than the ADD command in his MIX assembly language.
To test the relevancy to modern architecture, I wrote the following code snippet:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    clock_t start;
    unsigned int ia=0,ib=0,ic=0;
    int i;
    float fa=0.0,fb=0.0,fc=0.0;
    int sample_size=100000;

    if (argc > 1)
        sample_size = atoi(argv[1]);

#define TEST(OP) \
    start = clock();\
    for (i = 0; i < sample_size; ++i)\
        ic += (ia++) OP ((ib--)+1);\
    printf("%d,", (int)(clock() - start))

    TEST(+);
    TEST(*);
    TEST(/);
    TEST(%);
    TEST(>>);
    TEST(<<);
    TEST(&);
    TEST(|);
    TEST(^);
#undef TEST

    // TEST must be redefined for floating point types
#define TEST(OP) \
    start = clock();\
    for (i = 0; i < sample_size; ++i)\
        fc += (fa+=0.5) OP ((fb-=0.5)+1);\
    printf("%d,", (int)(clock() - start))

    TEST(+);
    TEST(*);
    TEST(/);
#undef TEST

    printf("\n");
    return ic+fc; // to prevent optimization!
}
I then generated 4000 test samples (each containing a sample size of 100000 operations of each type) using this command line:
for i in {1..4000}; do ./test >> output.csv; done
Finally, I opened the results with Excel and graphed the averages. What I found was rather surprising.
The actual averages, in the order the tests run (integer +, *, /, %, >>, <<, &, |, ^, then floating-point +, *, /), were: 463.36475, 437.38475, 806.59725, 821.70975, 419.56525, 417.85725, 426.35975, 425.9445, 423.792, 549.91975, 544.11825, 543.11425
Overall this is what I expected (division and modulo are slow, as are the floating-point operations).
My question is: why do both integer and floating-point multiplication execute faster than their addition counterparts? It is a small factor, but it is consistent across numerous tests. In TAOCP Knuth lists ADD as taking 2 units of time while MUL takes 10. Did something change in CPU architecture since then?
Different instructions take different amounts of time on the same CPU; and the same instructions can take different amounts of time on different CPUs. For example, for Intel's original Pentium 4 shifting was relatively expensive and addition was quite fast, so adding a register to itself was faster than shifting a register left by 1; and for Intel's recent CPUs shifting and addition are roughly the same speed (shifting is faster than it was on the original Pentium 4 and addition slower, in terms of "cycles").
To complicate things more, different CPUs may be able to do more or less at the same time, and have other differences that affect performance.
In theory (and not necessarily in practice):
Shifting and boolean operations (AND, OR, XOR) should be fastest (each bit can be done in parallel). Addition and subtraction should be next (relatively simple, but all bits of the result can't be done in parallel because of the carry from one pair of bits to the next).
Multiplication should be much slower as it involves many additions, but some of those additions can be done in parallel. For a simple example (using decimal digits rather than binary), 12 * 34 can be broken down into "single digit" form as 2*4 + 2*3*10 + 1*4*10 + 1*3*100; all of the "single digit" multiplications can be done in parallel, then two additions can be done in parallel, then the last addition can be done.
Division is mostly "compare and subtract if larger, repeated". It's the slowest because it can't be done in parallel (the results of the subtraction are needed for the next comparison). Modulo is the remainder of a division and essentially identical to division (and for most CPUs it's actually the same instruction - e.g. a DIV instruction gives you a quotient and a remainder).
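To make the "compare and subtract if larger, repeated" structure concrete, here is a minimal restoring-division sketch (not how any particular CPU's divider is actually wired); note that each step depends on the remainder left by the previous one, which is why it resists parallelism:

#include <stdint.h>

/* Sketch of binary restoring division: produces quotient and remainder
 * together, like a DIV instruction does. Assumes d != 0. */
void udivmod32(uint32_t n, uint32_t d, uint32_t *q, uint32_t *r)
{
    uint32_t quot = 0, rem = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((n >> i) & 1);  /* bring down the next bit of n */
        if (rem >= d) {                     /* compare... */
            rem -= d;                       /* ...and subtract if not smaller */
            quot |= 1u << i;
        }
    }
    *q = quot;
    *r = rem;
}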
For floating point; each number has 2 parts (significand and exponent), so things get a little more complicated. A floating-point "shift" is really adding to or subtracting from the exponent (and should cost roughly the same as integer addition/subtraction). For floating-point addition and subtraction you need to equalise the exponents first, and only then do the operation on the significands alone (the "equalising" and "doing the operation" can't be done in parallel). Multiplication is multiplying the significands and adding the exponents (and adjusting the bias); both parts can be done in parallel, so the total cost is whichever is slowest (multiplying the significands), which makes it about as fast as integer multiplication. Division is dividing the significands and subtracting the exponents (and adjusting the bias); again both parts can be done in parallel, so the total cost is whichever is slowest (dividing the significands), which makes it comparable to integer division.
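As a rough illustration of that significand/exponent split for multiplication (just a sketch -- real hardware works on the raw bit fields and handles rounding, subnormals, infinities, and so on), frexp and ldexp from <math.h> expose the two parts:

#include <math.h>
#include <stdio.h>

/* Multiply by splitting each operand into significand and exponent:
 * multiply the significands, add the exponents, then recombine. */
double fp_mul_sketch(double a, double b)
{
    int ea, eb;
    double ma = frexp(a, &ea);        /* a == ma * 2^ea, with 0.5 <= |ma| < 1 */
    double mb = frexp(b, &eb);        /* b == mb * 2^eb */
    return ldexp(ma * mb, ea + eb);   /* significands multiplied, exponents added */
}

int main(void)
{
    printf("%g\n", fp_mul_sketch(3.5, -2.25));   /* prints -7.875 */
    return 0;
}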
Note: I've simplified in various places to make it much easier to understand.
To estimate the execution time, look at the instructions produced in the assembly listing, then look up those instructions in the processor's documentation, noting whether the operation is handled by the FPU or performed directly with integer instructions.
Then add up the documented execution time for each instruction.
However, if the CPU is pipelined or multithreaded, the operation could take MUCH less time than calculated.
It is true that division and modulo (which is a division operation) are slower than addition. The reason lies in the design of the ALU (Arithmetic Logic Unit), which is a combination of parallel adders and logic circuits. Division is performed by repeated subtraction, so it needs more levels of subtraction logic than addition does, and the accumulated propagation delays of the extra gates make it slower still.
I'm trying to implement finite field arithmetic to use in elliptic curve calculations. Since all that's ever used are arithmetic operations that commute with the modulo operator, I don't see a reason not to delay that operation until the very end. One thing that may happen is that the numbers involved might become (way) too big and impractical/inefficient to work with, but I was wondering if there is a way to determine the optimal conditions/frequency that should trigger a modulo operation in the calculations.
I'm coding in C.
To avoid the complexity of elliptic curve crypto (as I'm unfamiliar with its algorithms), let's assume you're doing temp = (a * b) % M; result = (temp * c) % M, and you're thinking about just doing result = (a * b * c) % M instead.
Let's also assume that you're doing this a lot with the same modulus M; so you've precomputed "multiples of M" lookup tables, so that your modulo code can use the table to find the highest multiple of "M shifted left by N" that is not greater than the dividend and subtract it from the dividend, repeating that with decreasing values of N until you're left with the remainder.
If your lookup table has 256 entries, the dividend is 4096 bits and the divisor is 2048 bits, then you'd reduce the size of the dividend by 8 bits per iteration, so the dividend would become smaller than the divisor (leaving the remainder) after no more than 256 "search and subtract" operations.
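As a single-word sketch of that "search and subtract" idea (a simplified byte-at-a-time version rather than real multi-limb big-number code, and assuming the modulus fits in 24 bits so nothing overflows a 32-bit word):

#include <stdint.h>
#include <stddef.h>

/* Reduce a big dividend (stored as bytes, most significant first) modulo m,
 * using a 256-entry table of multiples of m instead of a divide instruction.
 * Each step peels off 8 bits of the dividend. Assumes 0 < m < 2^24. */
uint32_t mod_by_table(const uint8_t *dividend, size_t nbytes, uint32_t m)
{
    uint32_t multiples[256];
    for (uint32_t k = 0; k < 256; k++)
        multiples[k] = k * m;                 /* k*m fits in 32 bits for m < 2^24 */

    uint32_t r = 0;                           /* running remainder, always < m */
    for (size_t i = 0; i < nbytes; i++) {
        uint32_t v = (r << 8) | dividend[i];  /* v < 256*m, so v/m fits in one byte */
        uint32_t k = 255;
        while (multiples[k] > v)              /* search: largest k with k*m <= v */
            k--;
        r = v - multiples[k];                 /* subtract that multiple */
    }
    return r;
}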
For multiplication; it's almost purely "multiply and add digits" for each pair of digits. E.g. using uint64_t as a digit, multiplying 2048 bit numbers is multiplying 32 digit numbers and involves 32 * 32 = 1024 of those "multiply and add digits" operations.
Now we can make comparisons. Specifically, assuming a, b, c, M are 2048-bit numbers:
a) the original temp = (a * b) % M; result = (temp * c) % M would be 1024 "multiply and add", then 256 "search and subtract", then 1024 "multiply and add", then 256 "search and subtract". For totals it'd be 2048 "multiply and add" and 512 "search and subtract".
b) the proposed result = (a * b * c) % M would be 1024 "multiply and add", then 2048 more "multiply and add" (as the result of a*b is a "twice as big" 4096-bit number), then 512 "search and subtract" (as the excess of a*b*c over M is twice the excess of a*b over M). For totals it'd be 3072 "multiply and add" and 512 "search and subtract".
In other words; (assuming lots of assumptions) the proposed result = (a * b * c) % M would be worse, with 50% more "multiply and add" and the exact same "search and subtract".
Of course none of this (the operations you need for elliptic curve crypto, the sizes of your variables, etc) can be assumed to apply for your specific case.
I was wondering if there was a way to determine the optimal conditions/frequency which should trigger a modulo operation in the calculations.
Yes; the way to determine the optimal conditions/frequency is to do something similar to what I did above - determine the true costs (in terms of lower-level operations, like my "search and subtract" and "multiply and add") and compare them.
In general (regardless of how modulo is implemented, etc.) I'd expect you'll find that doing modulo as often as possible is the fastest option (as it reduces the cost of multiplications and also reduces the cost of the later/final modulo) for all cases that don't involve addition or subtraction and that don't fit in simple integers.
If M is a constant, then an alternative to modulo is to multiply by the "inverse" of M. Picking up on Polk's comment about 256 bits being a common case, and assuming M is a polynomial of degree 256 with 1-bit coefficients, define the inverse of M to be x^512 / M, which results in a 256-bit "inverse". Call this inverse I. Then a multiply modulo M is:
C = A * B ; 512 bit product
Q = (upper 256 bits of C * I)>>256 ; Q = C / M = 256 bit quotient
P = M * Q ; 512 bit product
R = lower 256 bits of (C xor P) ; (A * B)% M
So this requires 3 extended-precision multiplies and one xor.
If the processor for this code has a carryless multiply, such as x86's PCLMULQDQ, which multiplies two 64-bit operands to produce a 128-bit result, then that could be used as the basis for an extended-precision multiply. A basic implementation would need 16 multiplies for a 256 bit by 256 bit multiply to produce a 512-bit product. This could be improved using something like Karatsuba:
https://en.wikipedia.org/wiki/Karatsuba_algorithm
but on current x86, PCLMULQDQ is fast, taking 1 to 3 cycles, so the main issue would be loading the data into the XMM registers, and I'm not sure Karatsuba would save much time.
optimal conditions/frequency which should trigger a modulo operation in the calculations
Standard practice is to replace all actual modulo operations with something else. So the frequency is never. There are different ways to accomplish that:
Choose the modulus to be a Mersenne prime or pseudo-Mersenne prime. There is a large repertoire of mathematical tricks to implement arithmetic modulo a (pseudo-)Mersenne prime efficiently, without doing any actual modulo operations. In the context of elliptic curves, the prime-modulus NIST curves are chosen this way and for this reason.
Use Barrett reduction. This has the same effect as a real modulo operation, but relies on some precomputation and a precondition on the range of the input to reduce the cost of a modulo-like operation to the cost of a couple of multiplications (plus some supporting operations). Also applicable to polynomial fields.
Do arithmetic in Montgomery form.
Additionally, and perhaps more in the spirit of your question, a common technique is to do various additions without reducing every time (addition does not significantly change the size of a number). It takes a lot of additions before you need an extra limb in your integers, so a lot of them can be done before it starts to make sense to reduce. For multiplications, unless it's by a small constant it almost always makes sense to reduce immediately afterwards to prevent the numbers from getting much physically larger than they need to be (which would be especially bad if the result was fed into another multiplication).
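As a word-sized sketch of that batching idea (my assumptions: every term is already reduced below m, and m is small enough that a machine word can absorb several additions before any reduction):

#include <stdint.h>

/* Sum eight already-reduced values modulo m with a single reduction at the end.
 * Assumes each x[i] < m and m < 2^61, so the accumulator cannot overflow. */
uint64_t sum8_mod(const uint64_t x[8], uint64_t m)
{
    uint64_t acc = 0;
    for (int i = 0; i < 8; i++)
        acc += x[i];          /* acc < 8*m < 2^64 -- no overflow, no reduction yet */
    return acc % m;           /* one reduction replaces seven */
}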
Another technique especially associated with Barrett reductions is to work, most of the time, in a slightly larger range than [0 .. N), eg [0 .. 2N). This enables skipping the conditional subtraction that Barrett reduction needs in order to fully reduce to the range [0 .. N), while still using the most important part, the reduction from the range [0 .. N²) to the range [0 .. 2N).
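For reference, here is a word-sized sketch of the Barrett reduction mentioned above; the real use is multi-precision, but the structure is the same. It assumes a GCC/Clang-style unsigned __int128 for the wide product and a modulus m > 1:

#include <stdint.h>

typedef struct { uint64_t m; uint64_t mu; } barrett_t;

/* Precompute mu = floor(2^64 / m). */
barrett_t barrett_setup(uint64_t m)
{
    barrett_t b;
    b.m = m;
    b.mu = (uint64_t)(((unsigned __int128)1 << 64) / m);
    return b;
}

/* Reduce a 64-bit value modulo m using two multiplications and at most one
 * correcting subtraction: the estimated quotient q is at most 1 too small. */
uint64_t barrett_reduce(uint64_t a, const barrett_t *b)
{
    uint64_t q = (uint64_t)(((unsigned __int128)a * b->mu) >> 64);
    uint64_t r = a - q * b->m;
    if (r >= b->m)
        r -= b->m;
    return r;
}

Skipping that final conditional subtraction and letting values live in [0 .. 2N) is exactly the relaxation described in the previous paragraph.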
I want to know which of these functions is easier for a CPU to calculate/run. I was told that direct multiplication (e.g. 4x3) is more difficult for a CPU to calculate than a series of summations (e.g. 4+4+4). Well, the first one has direct multiplication, but the second one has a for loop.
Algorithm 1
The first one is like x*y:
int multi_1(int x, int y)
{
    return x * y;
}
Algorithm 2
The second one is like x+x+x+...+x (as much as y):
int multi_2(int num1, int num2)
{
    int sum=0;
    for(int i=0; i<num2; i++)
    {
        sum += num1;
    }
    return sum;
}
Please don't respond with "Don't try to do micro-optimization" or something similar. How can I evaluate which of these codes runs better/faster? Does the C language automatically convert direct multiplication to summation?
You can generally expect the multiplication operator * to be implemented as efficiently as possible. Beating it with a custom multiplication algorithm is highly unlikely. If for any reason multi_2 is faster than multi_1 for all but some edge cases, consider writing a bug report against your compiler vendor.
On modern (i.e. made in this century) machines, multiplications by arbitrary integers are extremely fast and take four cycles at most, which is faster than initializing the loop in multi_2.
The more "high level" your code is, the more optimization paths your compiler will be able to use. So, I'd say that code #1 will have the most chances to produce a fast and optimized code.
In fact, for a simple CPU architecture that doesn't support direct multiplication operations, but does support addition and shifts, the second algorithm won't be used at all. The usual procedure is something similar to the following code:
unsigned int mult_3 (unsigned int x, unsigned int y)
{
    unsigned int res = 0;
    while (x)
    {
        res += (x&1)? y : 0;
        x>>=1;
        y<<=1;
    }
    return res;
}
Typical modern CPUs can do multiplication in hardware, often at the same speed as addition. So clearly #1 is better.
Even if multiplication is not available and you are stuck with addition there are algorithms much faster than #2.
You were misinformed. Multiplication is not "more difficult" than repeated addition. Multipliers are built into the ALU (Arithmetic & Logic Unit) of modern CPUs, and they work in constant time. In contrast, repeated addition takes time proportional to the value of one of the operands, which could be as large as one billion!
Actually, multiplies are rarely performed by straight addition; when you have to implement them in software, you do it by repeated shifts and adds, using a method similar to duplation, known to the ancient Egyptians.
This depends on the architecture you run it on, as well as the compiler and the values for x and y.
If y is small, the second version might be faster. However, when y is a very large number, the second version will certainly be much slower.
The only way to really find out is to measure the running time of your code, for example like this: https://stackoverflow.com/a/9085330/369009
Since you're dealing with int values, the multiplication operator (*) will be far more efficient. C will compile into the CPU-specific assembly language, which will have a multiplication instruction (e.g., x86's mul/imul). Virtually all modern CPUs can multiply integers within a few clock cycles. It doesn't get much faster than that. Years ago (and on some relatively uncommon embedded CPUs) it used to be the case that multiplication took more clock cycles than addition, but even then, the additional jump instructions to loop would result in more cycles being consumed, even if only looping once or twice.
The C language does not require that multiplications by integers be converted into series of additions. It permits implementations to do that, I suppose, but I would be surprised to find an implementation that did so, at least in the general context you present.
Note also that in your case #2 you have replaced one multiplication operation with not just num2 addition operations, but with at least 2 * num2 additions, num2 comparisons, and 2 * num2 storage operations. The storage operations probably end up being approximately free, as the values likely end up living in CPU registers, but they don't have to be.
Overall, I would expect alternative #1 to be much faster, but it is always best to answer performance questions by testing. You will see the largest difference for large values of num2. For instance, try with num1 == 1 and num2 == INT_MAX.
I'd like to start out by saying this isn't about optimizations so please refrain from dragging this topic down that path. My purpose for using fixed point arithmetic is because I want to control the precision of my calculations without using floating point.
With that being said let's move on. I wanted to have 17 bits for range and 15 bits for the fractional part. The extra bit is for the signed value. Here are some macros below.
const int scl = 18;
#define Double2Fix(x) ((x) * (double)(1 << scl))
#define Float2Fix(x) ((x) * (float)(1 << scl))
#define Fix2Double(x) ((double)(x) / (1 << scl))
#define Fix2Float(x) ((float)(x) / (1 << scl))
Addition and subtraction are fairly straightforward, but things get a bit tricky with mul and div.
I've seen two different ways to handle these two types of operations.
1) If I am using 32 bits, then use a temporary 64-bit variable to store intermediate multiplication steps, then scale at the end.
2) Right in the multiplication step, scale both operands down to a smaller bit range before multiplying. For example, if you have a 32-bit register with 16 bits for the whole number, you could shift like this:
(((a)>>8)*((b)>>6) >> 2) or some combination that makes sense for you app.
It seems to me that if you design your fixed-point math around 32 bits, it might be impractical to always depend on having a 64-bit variable able to store your intermediate values; on the other hand, shifting to a lower scale will seriously reduce your range and precision.
Questions
Since I'd like to avoid forcing the CPU to create a 64-bit type in the middle of my calculations, is shifting to lower bit values the only other alternative?
I've also noticed:
int b = Double2Fix(9.1234567890);
printf("double shift:%f\n",Fix2Double(b));
int c = Float2Fix(9.1234567890);
printf("float shift:%f\n",Fix2Float(c));
double shift:9.123444
float shift:9.123444
Is that precision loss just a part of using fixed point numbers?
Since I'd like to avoid forcing the CPU to create a 64-bit type in the middle of my calculations, is shifting to lower bit values the only other alternative?
You have to work with the hardware capabilities, and the only available operations you'll find are:
Multiply N x N => low N bits (native C multiplication)
Multiply N x N => high N bits (the C language has no operator for this)
Multiply N x N => all 2N bits (cast to wider type, then multiply)
If the instruction set has #3, and the CPU implements it efficiently, then there's no need to worry about the extra-wide result it produces. For x86, you can pretty much take these as a given. Anyway, you said this wasn't an optimization question :) .
Sticking to just #1, you'll need to break the operands into pieces of (N/2) bits and do long multiplication, which is likely to generate more work. There are still cases where it's the right thing to do, for instance implementing #3 (software extended arithmetic) on a CPU that has neither #3 nor #2.
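For reference, with option #3 the multiply and divide for your format would look something like this sketch (it mirrors the scl constant from your macros, and assumes the compiler's usual arithmetic right shift for negative values):

#include <stdint.h>

static const int scl = 18;   /* same scale factor as in the question's macros */

/* Widen to 64 bits for the intermediate result, then scale back down. */
int32_t fix_mul(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * b) >> scl);        /* 64-bit product, then rescale */
}

int32_t fix_div(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * (1 << scl)) / b);  /* pre-scale the dividend, then divide */
}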
Is that precision loss just a part of using fixed point numbers?
log2(9.1234567890 - 9.123444) ≈ -16.25, and you used 16 bits of precision, so yep, that's very typical.
Which operation takes more CPU clocks, modulo or comparison?
Will this code take more time:
for(j = i; j <= 10; j++)
{
    if(j == 10) printf("0");
    else printf("%d", j);
}
or this
for(j = i; j <= 10; j++)
    printf("%d", j % 10);
and why?
If measured in CPU cycles, probably the modulo operation takes more cycles; this may depend on CPU. However, CPU cycles aren't a great way to measure performance with modern processors which run more than one instruction at once (pipelining), have multiple layers of cache etc. In this case, putting an additional test in will mean an additional branch, which may be more significant in terms of timing (i.e. affect the instruction pipeline). The only way to know for sure is to compile it optimised, and time it.
I know your example is meant to be just that, an example, but this also illustrates premature optimisation. The call to printf will take orders of magnitude more time than the modulo or compare. If you want to optimise your example, you would write something like:
printf ("1234567890");
Comparison is a simple operation and is usually faster (the CPU can use logical operators on bits).
If you perform a modulo by a number that is not a power of two, the CPU has to perform a division, which can be quite an expensive operation (of course it depends on the size of the numbers you are using).
Speaking of CPU clocks, a comparison can be done on all bits in parallel, since it amounts to an XOR and a test for zero, so doing x==10 or x==200000 will take the same small number of CPU clocks. With a division this is not possible, and a bigger number will require more time.
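For illustration, an equality test really can be expressed as an XOR followed by a test for zero, and its cost does not depend on the magnitude of the constant:

/* Equality as "xor, then test whether any bit is set": the work is the same
 * whether the constant is 10 or 200000. */
int equals(unsigned x, unsigned k)
{
    return (x ^ k) == 0;
}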
In terms of assembly, a modulo operation implies a division (or, for constant divisors, a "never so easy" multiplication by a precomputed reciprocal); see the various division algorithms.
A branch operation is actually the second fastest instruction (a plain jump is the first), as the comparison it relies on takes at most one subtraction.
I am curious to understand the logic behind the mod operation, since I understand that bit-shifting operations can be used for other things, such as shifting to multiply.
One way I can see it being done is by a recursive algorithm that keeps dividing until you cannot divide anymore, but this does not seem efficient.
Any ideas will be helpful. Thanks in advance!
The quick version is: it depends on the hardware, the optimizer, whether it's division by a constant or not (pdf), whether there are exceptions to check for (e.g. modulo by 0), whether and how negative numbers are handled (this is a scary question for C++), etc...
R gave a nice, concise answer for unsigned integers, but it's difficult to understand unless you're well versed with C.
The crux of the technique illuminated by R is to strip away multiples of q until there's no more multiples of q left. We could naively do this with a simple loop:
while (p >= q) p -= q; // One liner, woohoo!
The code may be short, but for large values of p and small values of q this might take a very long time.
Better than stripping away one q at a time would be to strip away many q's at a time. Note that we actually want to strip away as many q's as possible -- that is, floor(p/q) many q's... And indeed, that's a valid technique. For unsigned integers, one would expect that p % q == p - (p / q) * q. (Note that unsigned integer division rounds down.)
But this almost feels like cheating because division and remainder operations are so intimately related. (In fact, often if hardware natively supports division, it supports a divide-and-compute-remainder operation because they're so strongly related.)
Assuming we've no access to division, how shall we find a multiple of q greater than 1 to strip away? In hardware, fixed shift operations are cheap (if not practically free) and conceptually represent multiplication by a non-negative power of two. For example, shifting a bit string left by 3 is equivalent to multiplying by 8 (that is, 2^3), e.g. 5 decimal is equivalent to '101' binary. Shift '101' in binary by adding three zeroes on the right (giving '101000') and the result is 40 in decimal -- five times eight.
Likewise, shift operations are very cheap as software operations, and you'll struggle to find a controller that doesn't support them quickly. (Some architectures such as ARM can even combine shifts with other instructions to make them 'free' a good deal of the time.)
ARMed (couldn't resist) with these shift operations, we can proceed as follows:
Find out the largest power of two we can multiply q by and still be less than p.
Working from the largest power of two down to the smallest, multiply q by each power of two, and if the result is not greater than what's left of p, subtract it from what's left of p.
Whatever you've got left is the remainder.
Why does this work? Because in the end you'll find that all the subtracted powers of two actually sum to floor(p / q)! Don't take my word for it, similar knowledge has been known for a very long time.
Breaking apart R's answer:
#define HI (-1U-(-1U/2))
This effectively gives you an unsigned integer with only the highest value bit set.
unsigned i;
for (i=0; !(HI & (q<<i)); i++);
This line finds the largest shift i such that q<<i still fits in an unsigned integer (its top bit is set, so one more shift would overflow). Starting from such a large shift isn't strictly necessary -- any shift big enough that q<<i exceeds p would do -- but it doesn't change the result, only the execution time.
In case you're not familiar with the C-isms in this line:
(q<<i) is a left bit shift by i. Recall this is equivalent to multiplying by 2^i.
HI & (q<<i) performs a bitwise-AND. Since HI only has its top bit populated this will only result in a non-zero value when (q<<i) is large enough to cause the top bit to be non-zero. One more shift over to the left and there'd be an integer overflow.
!(HI & (q<<i)) is 'true' when (HI & (q<<i)) is zero and 'false' otherwise.
do { if (p >= (q<<i)) p -= (q<<i); } while (i--);
This is a simple decreasing loop do { .... } while (i--);. Note that post-decrementing is used on i so the loop executes, then it checks to see if i is not zero, then it subtracts one from i, and then if its earlier check resulted in true it continues. This has the property that the loop executes its last time when i is 0. This is important because we may need to strip away an unmultiplied copy of q.
if (p >= (q<<i)) checks if the 2^i * q is less than or equal to p. If it is, p -= (q<<i) strips it away.
The remainder is left.
While most C implementations run on hardware that has a division instruction, the remainder operation can be performed roughly like this, for computing p%q, assuming unsigned values:
#define HI (-1U-(-1U/2))
unsigned i;
for (i=0; !(HI & (q<<i)); i++);
do { if (p >= (q<<i)) p -= (q<<i); } while (i--);
The resulting remainder is in p.
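Wrapped up as a self-contained function for experimenting (same code, with q != 0 assumed and a small check added):

#include <stdio.h>

#define HI (-1U-(-1U/2))

unsigned umod(unsigned p, unsigned q)      /* computes p % q without division */
{
    unsigned i;
    for (i=0; !(HI & (q<<i)); i++);
    do { if (p >= (q<<i)) p -= (q<<i); } while (i--);
    return p;
}

int main(void)
{
    printf("%u\n", umod(1000000007u, 97u));   /* 1000000007 % 97 == 41 */
    return 0;
}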
In addition to a hardware instruction and implementation using shifts, as R.. suggests, there's also reciprocal multiplication.
This technique can be used when the right-hand side of % is a constant, known at compile time.
Reciprocal multiplication is used to implement division, but using it for % is easy, based on the formula a%b == a-(a/b)*b.
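As a concrete (hedged) example for a constant divisor of 10: 0xCCCCCCCD is 2^35/10 rounded up, so a multiply and a shift recover the quotient, and the remainder follows from the formula above. Compilers perform this transformation automatically for constant divisors, so you rarely write it by hand:

#include <stdint.h>
#include <assert.h>

/* a % 10 via reciprocal multiplication: q = floor(a * ceil(2^35/10) / 2^35). */
uint32_t mod10(uint32_t a)
{
    uint32_t q = (uint32_t)(((uint64_t)a * 0xCCCCCCCDu) >> 35);  /* q == a / 10 */
    return a - q * 10u;                                          /* a - (a/10)*10 */
}

int main(void)
{
    for (uint32_t a = 0; a < 1000000u; a++)
        assert(mod10(a) == a % 10);
    return 0;
}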
Depending on the smarts of the optimizer, there is a shortcut for modulo by a power of two. For example, a % 32 can be implemented as a & 31. In general, a % (2^N) == a & (2^N - 1). This is lightning fast compared to division. Most dividers (even hardware ones) require at least 1 cycle for each bit of the result, while a logical AND is just a few-cycle operation (in the pipeline).
EDIT: this only works if a is unsigned!
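A quick demonstration of why the unsigned restriction matters (C's % truncates toward zero, so a negative left operand gives a negative remainder, which the mask does not):

#include <stdio.h>

int main(void)
{
    unsigned u = 1234;
    printf("%u %u\n", u % 32, u & 31);   /* 18 18 -- identical for unsigned */

    int s = -7;
    printf("%d %d\n", s % 4, s & 3);     /* -3 1  -- different for negative values */
    return 0;
}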