My question is about the performance (execution time / benchmarks) of binary operators: can we say, for example, that performing a + b is faster than a % b?
My question is not limited to those two operators (+ and %); it also covers:
Additive operators (+ and -)
Multiplicative operators (*, /, %...)
Comparative operators (<, >, <=...)
Bitwise and shift operators (<<, >>, &...)
...
A couple of additions to FUZxxl's answer:
On modern Intel and AMD CPUs, both + and * have roughly the same (very high) throughput, but * usually has higher latency. Throughput is how often you can issue an instruction; latency is how long you have to wait before the result is ready (while the CPU executes something else out of order).
Some RISC CPUs have fairly expensive shifts (notably the ones used in the Xbox 360 and PS3).
Division was "fixed" some time ago and is no longer as horribly slow as it used to be. I think FP division is around 16 clocks now (integer division may actually be slower).
While comparisons are all fast per se, conditional jumps can be very slow if they are mispredicted (since the CPU has to discard everything it has speculatively executed past the branch). Whether the CPU manages to predict the result of a comparison depends on how random the outcomes are (when the same check is executed many times). However, even if they tend to follow a pattern, each jump uses up a branch-prediction slot, so it may evict another jump from the predictor, and that other branch would then suffer the misprediction penalty instead. In other words, comparisons can be pretty expensive.
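As a rough illustration of that last point (my own sketch, not part of the original answer): a data-dependent select can sometimes be written without a branch at all, trading a possible misprediction for a couple of always-executed arithmetic operations; modern compilers often do this themselves by emitting a conditional move.

#include <stdio.h>

/* Branchy version: a mispredicted jump here costs many cycles. */
static int max_branchy(int a, int b)
{
    return (a > b) ? a : b;
}

/* Branchless version (assumes two's complement): (a > b) is 0 or 1,
 * so the mask m is either all zeros or all ones, and the bitwise
 * blend picks a or b. */
static int max_branchless(int a, int b)
{
    int m = -(a > b);
    return (a & m) | (b & ~m);
}

int main(void)
{
    printf("%d %d\n", max_branchy(3, 7), max_branchless(3, 7)); /* prints 7 7 */
    return 0;
}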
The performance of these operators depends on the platform. If an operation expressed with a "slow" operator can be implemented with a "fast" one, you can generally expect the compiler to notice and emit fast code. Do not use "faster" operators just because someone told you they are faster, without benchmarking.
Generally though, operators can be classified in speed roughly according to the following scale:
Zero cycles: Addition immediately preceding dereferencing such as in an array expression a[b] is usually free. Unary + is free, too.
One cycle: for integer operands: binary +, -, <<, >>, &, |, ^; unary -, ~; casts between integer types or pointers; and, if the result is not used numerically: !, <, >, <=, >=, !=, &&, ||
Three to four cycles: binary * on integer operands; on floating-point operands: binary +, -
20 cycles (?): integer binary /, %
50 cycles (?): floating point /, fmod
Your mileage may vary, do not rely on this table, benchmark when in doubt.
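To illustrate the point above that the compiler will pick the fast form for you, here is a small sketch of my own; with optimisation enabled, inspecting the generated assembly (e.g. gcc -O2 -S) typically shows a shift and an AND rather than divide instructions.

#include <stdint.h>

/* Division by a power of two on an unsigned operand is typically
 * compiled to a right shift. */
uint32_t div8(uint32_t x) { return x / 8; }

/* Remainder by a power of two becomes a bitwise AND with (8 - 1). */
uint32_t mod8(uint32_t x) { return x % 8; }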
Related
In my program, the operation n % 10 appears very frequently. I know that the modulo operation can be done much faster when we have n % m where m is a power of 2, since it can be replaced by n & (m - 1). However, is there any faster way to calculate the modulus if the operand is 10?
In some cases n is a uint8_t and in other cases it is a uint32_t.
Because most modern processors can do multiplication much, much faster than division, it is often possible to speed up division and modulus operations where the divisor is a known small constant by replacing the division with one or two multiplications and a few other fast operations (such as shifts and additions).
To do so requires computing at compile time some magic numbers dependent on the divisor; fortunately most modern compilers know how to do this, so you don't need to do anything to take advantage of it. Just let your compiler do the heavy lifting for you, as @chux suggests in an excellent answer.
You can help the compiler by using unsigned types; for some divisors, signed division and modulus are harder to replace.
The basic outline of the optimisation of modulus looks like this:
If you had exact arithmetic, you could replace x % p with p * ((x * (1/p)) % 1). For constant p, 1/p can be precomputed at compile time. The %1 operation simply consists of discarding the fraction part, which is just a right-shift. So that replaces a division with two multiplies, and if p only has a few bits set, the multiply by p might be further optimised into a few left-shifts.
We can do that computation with fixed-point arithmetic, taking advantage of the fact that most processors produce a double-sized result for integer multiplication. Since we don't care about the integer part of the inner multiplication and we know that the result of the outer multiplication must be less than p, we only need to reserve ceil(log2 p) bits for the integer part of the computation, leaving the rest of the bits for the fraction. And that might give us enough precision to correctly handle the possible range of values of x, particularly if x has a limited range (e.g. uint8_t or even uint16_t). The key is finding a position of the fixed point which minimises the error in the representation of 1/p.
For many small values of p, that works. For others, there is an alternative (but slower) solution which involves estimating q = x/p using multiplication by the inverse, and then computing x - q * p. If the estimate of q can be guaranteed to be either correct or off by one in a known direction, we only need to correct the final computation by conditionally adding or subtracting p; that can be accomplished without a branch on many modern CPUs. (The direction of the error is known because it depends only on whether the approximation we chose for the inverse of the divisor was too small or too big.)
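As a concrete sketch of the first approach described above (my own illustration; the helper name mod10_u8 is made up, and the 205/2048 approximation of 1/10 is only exact for 8-bit inputs):

#include <assert.h>
#include <stdint.h>

/* x % 10 for 8-bit x via multiply-by-reciprocal: 205/2048 is close
 * enough to 1/10 that the quotient is exact for every x in [0, 255]. */
static inline uint8_t mod10_u8(uint8_t x)
{
    uint8_t q = (uint8_t)(((uint16_t)x * 205u) >> 11); /* q == x / 10 */
    return (uint8_t)(x - q * 10u);                     /* remainder  */
}

int main(void)
{
    /* Exhaustive check over the whole 8-bit range. */
    for (unsigned x = 0; x < 256; ++x)
        assert(mod10_u8((uint8_t)x) == x % 10u);
    return 0;
}

A good compiler emits essentially this kind of multiply-and-shift sequence on its own for n % 10, so the hand-written version mainly serves to show what is happening.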
In the very specific case of x % 10 where x is a uint8_t, you might be able to do better than the above using a 256-byte lookup table. That would only be worthwhile if you were doing the modulus operation in a tight loop over a large number of values, and even then you'd want to profile carefully to verify that it is an improvement.
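For completeness, a minimal sketch of that table approach (hypothetical names; worthwhile only if profiling says so, as noted above):

#include <stdint.h>

/* 256-entry table: mod10_table[i] == i % 10, filled once at start-up. */
static uint8_t mod10_table[256];

static void init_mod10_table(void)
{
    for (unsigned i = 0; i < 256; ++i)
        mod10_table[i] = (uint8_t)(i % 10);
}

/* x % 10 becomes a single indexed load. */
static inline uint8_t mod10_lut(uint8_t x)
{
    return mod10_table[x];
}

int main(void)
{
    init_mod10_table();
    return mod10_lut(37) == 7 ? 0 : 1;
}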
I doubt whether that's the best expenditure of your time; there are probably much more fruitful optimisation opportunities in your application.
However, is there any faster way to calculate the modulus if the operand is 10?
With a good compiler, no. The compiler would have already emitted good code. You can explore different optimization settings with the compiler.
OTOH, if you know of some restrictions that the compiler cannot assume about n % 10, such as the values always being positive or within a sub-range, you might be able to out-optimize the compiler.
Such micro-optimisation is usually not an efficient use of a programmer's time.
In the "Introduction" section of K&R C (2E) there is this paragraph:
C, like any other language, has its blemishes. Some of the operators have the wrong precedence; ...
Which operators are these? How is their precedence wrong?
Is this one of these cases?
Yes, the situation discussed in the message you link to is the primary gripe with the precedence of operators in C.
Historically, C developed without &&. To perform a logical AND operation, people would use the bitwise AND, so a==b AND c==d would be expressed with a==b & c==d. To facilitate this, == had higher precedence than &. Although && was added to the language later, & was stuck with its precedence below ==.
In general, people might like to write expressions such as (x&y) == 1 much more often than x & (y==1). So it would be nicer if & had higher precedence than ==. Hence people are dissatisfied with this aspect of C operator precedence.
This applies generally to &, ^, and | having lower precedence than ==, !=, <, >, <=, and >=.
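A small example of the pitfall this causes (my own sketch; FLAG is a made-up macro):

#include <stdio.h>

#define FLAG 0x04u

int main(void)
{
    unsigned int x = 0x04u;

    /* == binds tighter than &, so this parses as x & (FLAG == 1),
     * i.e. x & 0u, which is always false. */
    if (x & FLAG == 1)
        puts("never printed");

    /* The intended test needs explicit parentheses. */
    if ((x & FLAG) == FLAG)
        puts("flag is set");

    return 0;
}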
There is a clear rule of precedence that is incontrovertible.
The rule is so clear that in a strongly typed system (think Pascal) the wrong precedence would give clear, unambiguous syntax errors at compile time. The problem with C is that since its type system is laissez-faire, the errors turn out to be logical errors resulting in bugs rather than errors catchable at compile time.
The Rule
Let ○ and □ be two operators with types
○ : α × α → β
□ : β × β → γ
where α and γ are distinct types.
Then
x ○ y □ z can only mean (x ○ y) □ z, with type assignment
x : α, y : α, z : β
whereas x ○ (y □ z) would be a type error, because ○ can only take an α, while the right sub-expression can only produce a γ, which is not an α.
Now let's
Apply this to C
For the most part C gets it right
(==) : number × number → boolean
(&&) : boolean × boolean → boolean
so && should be below ==, and it is.
Likewise
(+) : number × number → number
(==) : number × number → boolean
and so (+) must be above (==), which is again the case.
However, in the case of the bitwise operators,
the & / | of two bit patterns (aka numbers) produces a number,
i.e.
(&), (|) : number × number → number
(==) : number × number → boolean
And so a typical mask query, e.g. x & 0x777 == 0x777,
can only make sense if (&) is treated as an arithmetic operator, i.e. above (==).
C puts it below, which in light of the above type rules is wrong.
Of course, I've expressed the above in terms of math/type inference.
In more pragmatic C terms, x & 0x777 == 0x777 naturally groups as
x & (0x777 == 0x777) (in the absence of explicit parentheses).
When can such a grouping have a legitimate use?
I (personally) don't believe there is any.
In other words, Dennis Ritchie's informal statement that these precedences are wrong can be given a more formal justification.
"Wrong" may sound a bit too harsh. Most people only care about the basic operators like + - * / ^, and if those don't work the way they are written in math, that may be called wrong. Fortunately those are "in order" in C (except the power operator, which doesn't exist).
However, there are some other operators that might not work as many people expect. For example, the bitwise operators have lower precedence than the comparison operators, as Eric Postpischil already mentioned. That's less convenient, but still not quite "wrong", because there wasn't any established convention for them before; they were only invented in the last century, with the advent of computers.
Another example is the shift operators << and >>, which have lower precedence than + and -. Shifting is thought of as multiplication and division by powers of two, so people may expect it to bind more tightly than + and -. Writing x << a + b may make many people think it means x*2^a + b until they look at the precedence table; it actually parses as x << (a + b), i.e. x*2^(a+b). Besides, (x << 2) + (x << 4) + (y << 6) is also less convenient than simple additions without parentheses. Go is one of the languages that fixed this by giving << and >> higher precedence than + and -.
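A quick C demonstration of that grouping (my own sketch):

#include <stdio.h>

int main(void)
{
    unsigned int x = 1;

    /* + binds tighter than <<, so this is x << (2 + 4) == 64,
     * not (x << 2) + 4 == 8. */
    printf("%u\n", x << 2 + 4);   /* prints 64 */
    printf("%u\n", (x << 2) + 4); /* prints 8  */
    return 0;
}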
In other languages there are many real examples of "wrong" precedence
One example is T-SQL where -100/-100*10 = 0
PHP with the wrong associativity of ternary operators
Excel with wrong precedence (lower than unary minus) and associativity (left-to-right instead of right-to-left) of ^:
According to Excel, 4^3^2 = (4^3)^2. Is this really the standard mathematical convention for the order of exponentiation?
Why does =-x^2+x for x=3 in Excel result in 12 instead of -6?
Why is it that Microsoft Excel says that 8^(-1^(-8^7)) = 8 instead of 1/8?
It depends which precedence convention is considered "correct". There's no law of physics (or of the land) requiring precedence to be a certain way; it's evolved through practice over time.
In mathematics, operator precedence is usually taken as "BODMAS" (Brackets, Order, Division, Multiplication, Addition, Subtraction): brackets come first and subtraction comes last.
Operator precedence in programming requires more rules as there are more operators, but you can distil out how it compares to BODMAS.
In the ANSI C precedence table, unary plus and minus are at level 2, ABOVE multiplication and division at level 3. This can be confusing to a mathematician on a superficial reading, as can the precedence of the suffix/postfix increment and decrement operators.
To that extent, it is ALWAYS worth considering adding brackets in your mathematical code - even where syntactically unnecessary - to make your intention clear to a HUMAN reader. You lose nothing by doing it (although you might get flamed a bit by an uptight code reviewer, in which case you can flame back about coding risk management). You might lose a little readability, but intention is always more important when debugging.
And yes, the link you provide is a good example. Countless expensive production errors have resulted from this.
While at least from a hand wave point of view I believe I know what an "arithmetic operator" is, I'm looking for a formal definition. I've examined the C17 standard document and I can't find such a definition, although it uses the term "arithmetic operator" in several places.
The closest I've been able to find is in the index of C17, where page numbers are provided for additive, bitwise, increment and decrement, multiplicative, shift, and unary under the common heading "arithmetic operators". I've looked online at various sources and the most common thing I've found only says that binary +, -, *, /, and % are the C arithmetic operators. Some also throw in ++ and --.
I'm pretty sure I'm simply missing something since I do find the standard quite daunting. However, I also find the various online sources somewhat dubious since they often seem to differ.
Thanks!
Update: Since some readers objected to my references to both C and C++ in the same posting, I've removed the references to C++ in the modified version above and will do an entirely separate posting for it later if I can first get the issue resolved for C.
The C standard does not explicitly define the term arithmetic operator, though it defines what an arithmetic operand is. If you read carefully, nothing in C is defined by using the term arithmetic operator; it exists only as a grouping in the index and in the title of one section. The term arithmetic operator by itself does not appear in any paragraph.
From the index, we indeed can get a list
arithmetic operators
additive, 6.2.6.2, 6.5.6, G.5.2
bitwise, 6.2.6.2, 6.5.3.3, 6.5.10, 6.5.11, 6.5.12
increment and decrement, 6.5.2.4, 6.5.3.1
multiplicative, 6.2.6.2, 6.5.5, G.5.1
shift, 6.2.6.2, 6.5.7
unary, 6.5.3.3
From this we could infer that the arithmetic operators are those that require their operands to be arithmetic operands, i.e. of arithmetic type (except in special cases such as pointer addition and subtraction):
additive + and -
bitwise &, | and ^
increment and decrement ++ and --
multiplicative *, / and %
shift << and >>
unary -, ~ and +. It is debatable whether ! is an arithmetic operator or not, even though it is listed in section 6.5.3.3.
Another notable thing about these operators is that their operands might undergo the usual arithmetic conversions.
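A short illustration of those conversions (my own sketch): with uint8_t operands, the integer promotions, which are part of the usual arithmetic conversions, widen both sides to int before the arithmetic happens.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 200, b = 100;

    /* Both operands are promoted to int before +, so the sum is 300;
     * the wrap-around to 44 happens only when narrowing back to uint8_t. */
    int     wide   = a + b;
    uint8_t narrow = (uint8_t)(a + b);

    printf("%d %u\n", wide, (unsigned)narrow); /* prints 300 44 */
    return 0;
}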
Arithmetic operators are operators used to perform mathematical operations like addition, subtraction, multiplication and division. As simple as that.
e.g. a + b = c
The Cortex MCU I'm using doesn't have hardware support for floating-point division. The GCC compiler solves this by doing it in software, but warns that it can be very slow.
Now I was wondering how I could avoid it altogether. For example, I could scale the value up by a factor of 10000 (integer multiplication), then divide by another large factor (integer division), and get the exact same result.
But would these two operations actually be faster in general than a single floating-point operation? For example, does it make sense to replace:
int result = 100 * 0.95f;
by
int result = (100 * 9500) / 10000;
to get 95%?
It's better to get rid of division altogether if you can. This is relatively easy if the divisor is a compile-time constant. Typically you arrange things so that any division operation can be replaced by a bitwise shift. So for your example:
unsigned int x = 100;
unsigned int y = (x * (unsigned int)(0.95 * 1024)) >> 10; // y ≈ x * 0.95 (the constant truncates to 972/1024 ≈ 0.949)
Obviously you need to be very aware of the range of x so that you can avoid overflow in the intermediate result.
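A possible refinement (my own sketch, not part of this answer): rounding the scaled constant and adding half of the divisor before the shift reduces the truncation error, so that 100 * 0.95 comes out as 95 rather than 94.

#include <stdio.h>

int main(void)
{
    unsigned int x = 100;

    /* 0.95 in Q10 fixed point, rounded: 0.95 * 1024 + 0.5 -> 973 */
    const unsigned int scale = (unsigned int)(0.95 * 1024 + 0.5);

    /* Adding half of 2^10 before the shift rounds to nearest. */
    unsigned int y = (x * scale + (1u << 9)) >> 10;

    printf("%u\n", y); /* prints 95 */
    return 0;
}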
And as always, remember that premature optimisation is evil - only use fixed point optimisations such as this if you have identified a performance bottleneck.
Yes, the integer expression will be faster: the integer divide and multiply instructions are single machine instructions, whereas the floating-point operations will be either function calls (for direct software floating point) or exception handlers (for FPU instruction emulation). Either way, each operation will comprise multiple instructions.
However, while integer expressions and ad hoc fixed-point (scaled integer) expressions may be adequate for simple operations, for math-intensive applications involving trigonometric functions, logarithms, etc., it can become complex. For that you might employ a common fixed-point representation and library. This is easier in C++ than in C, as exemplified by Anthony Williams' fixed-point library: thanks to extensive operator and function overloading, in most cases you can simply replace the float or double keywords with fixed, and existing expressions and algorithms will work with performance comparable to an FPU-equipped ARM for many operations. If you are not comfortable using C++, the rest of your code need not use any C++-specific features and can essentially be C code compiled as C++.
As far as I know, C uses short-circuit (lazy) evaluation for logical expressions, e.g. in the expression
f(x) && g(x)
g(x) will not be called if f(x) is false.
But what about arithmetic expressions like
f(x)*g(x)
Will g(x) be called if f(x) is zero?
Yes, arithmetic operations are eager, not lazy.
So in f(x)*g(x) both f and g are always called (pedantically, the compiler transforms this into some A-normal form and could even avoid some calls if the difference is not observable), but there is no guarantee about whether f is called before or after g. And evaluating x*1/x or y*1/x is undefined behavior when x is 0.
This is not true in Haskell AFAIU
Yes, g(x) will still be called.
Generally, it would be quite slow to conditionally elide the evaluation of the right-hand side just because the left-hand side is zero. Perhaps not when the right-hand side is an expensive function call, but the compiler wouldn't presume to know that.
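A tiny program that makes the difference visible (my own sketch; f and g just announce when they run):

#include <stdio.h>

static int f(int x) { (void)x; puts("f called"); return 0; }
static int g(int x) { (void)x; puts("g called"); return 1; }

int main(void)
{
    int x = 42;

    /* && short-circuits: f returns 0, so g is never called. */
    int a = f(x) && g(x);

    /* * is eager: both f and g are called (in an unspecified order). */
    int b = f(x) * g(x);

    printf("%d %d\n", a, b); /* prints 0 0 */
    return 0;
}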
It's called "short-circuit" evaluation rather than lazy. And, at least as far as the standard cares, yes -- i.e., it doesn't specify short-circuit evaluation for *.
A compiler might be able to do short-circuit evaluation if it can be certain g() has no side effects, but only under the as-if rule (i.e., it can do so only by finding that there's no externally observable difference, not because the standard gives it any direct permission to do so).
In the case of the logical operators && and ||, evaluation is bound to take place from left to right, and short-circuiting takes place.
There is a sequence point between the evaluation of the left and right operands of && (logical AND) and || (logical OR) as part of short-circuit evaluation. For example, in the expression *p++ != 0 && *q++ != 0, all side effects of the sub-expression *p++ != 0 are completed before any attempt to access q. This is not the case for arithmetic operators.
While that optimization would be possible, there are a few arguments against it:
You might pay more for the optimization than you get back from it: Unlike with logical operators, the optimization is likely to be beneficial in only a small percentage of all cases with arithmetic operators, but at the same time requires an additional check for 0 for every operation.
Because boolean truth values only have two possible values, there is a theoretical 50 % chance (1 ÷ 2) with short-circuiting boolean expressions that the second operand will not have to be evaluated. (This assumes uniform distribution, which is perhaps not realistic, but bear with me.) That is, you are likely to profit from the optimization in a relatively large percentage of cases.
Contrast this with integral numbers, where 0 is only one out of billions of possible values. The probability that the first operand is 0 is much lower: 1 ÷ 2^32 (for 32-bit integers, again assuming uniform distribution). Even if 0 were in fact somewhat more probable to occur than that (i.e. with a non-uniform distribution), it's still unlikely that we're dealing with the same order of magnitude as with truth values.
Floating point math further aggravates that issue. Here you need to deal with the possibility of rounding errors and denormalization. The probability that some calculation yields exactly 0 is likely to be even lower than with integral numbers.
Therefore the optimization is relatively unlikely to result in the remaining operand not being evaluated. But it will result in an added check for zero, 100 % of the time!
If you want evaluation rules to remain reasonably consistent, you would have to redefine the short-circuit evaluation order of && and ||: division has one important corner case, namely division by 0. Even if the first operand is 0, the quotient is not necessarily 0. Division by 0 is to be treated as an error (except perhaps in IEEE floating-point math); therefore, you always have to evaluate the second operand in order to determine whether the calculation is valid.
There is one alternative optimization for /: division by 1. In that case, you wouldn't have to divide at all, but simply return the first operand. / would therefore be better optimised by starting with the second operand (divisor).
Now, unless you want &&, ||, and * to start evaluation with the first operand, but / to start with the second (which might seem unintuitive), you would have to generally re-define short-circuiting behavior such that the second operand always gets evaluated first, which would be a departure from the status quo.
This is not per se a problem, but might break a lot of existing code if the C language were thus changed.
The optimization might break "compatibility" with C++ code where operators can be overloaded. Would the optimizations still apply to overloaded * and / operators? Or would there have to be two different forms of these operators, one short-circuiting, and one with eager evaluation?
Again, this is not a deficiency inherent in short-circuit arithmetic operators, but an issue that would arise if such short-circuiting were introduced into the C (and C++) language as a breaking change.