I am trying to implement a VERY efficient 2x2 matrix multiplication in C code for operation on an ARM Cortex-M4. The function accepts 3 pointers to 2x2 arrays: 2 for the inputs to be multiplied, and an output buffer passed in by the calling function. Here is what I have so far...
static inline void multiply_2x2_2x2(int16_t a[2][2], int16_t b[2][2], int32_t c[2][2])
{
int32_t a00a01, a10a11, b00b10, b01b11;
a00a01 = a[0][0] | a[0][1]<<16;
b00b10 = b[0][0] | b[1][0]<<16;
b01b11 = b[0][1] | b[1][1]<<16;
c[0][0] = __SMUAD(a00a01, b00b10);
c[0][1] = __SMUAD(a00a01, b01b11);
a10a11 = a[1][0] | a[1][1]<<16;
c[1][0] = __SMUAD(a10a11, b00b10);
c[1][1] = __SMUAD(a10a11, b01b11);
}
Basically, my strategy is to use the ARM Cortex-M4 __SMUAD() intrinsic to do the actual multiply-accumulates. But this requires me to build the inputs a00a01, a10a11, b00b10, and b01b11 ahead of time. My question is, given that the C array should be contiguous in memory, is there a more efficient way to pass the data into the function directly, without the intermediate variables? Secondary question: am I overthinking this, and should I just let the compiler do its job, as it is smarter than I am? I tend to do that a lot.
Thanks!
You could break the strict aliasing rules and load each matrix row directly into a 32-bit register, using an int16_t* to int32_t* cast. An expression such as a00a01 = a[0][0] | a[0][1]<<16 just takes some consecutive bits from RAM and arranges them into other consecutive bits in a register. Consult your compiler manual for the flag that disables its strict aliasing assumptions and makes such a cast safe to use.
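For illustration only, such a load might look like the lines below; this assumes each row is 4-byte aligned, a little-endian target (as the Cortex-M4 usually is, so a[0][1] lands in the upper halfword), and that strict aliasing checks are disabled (e.g. -fno-strict-aliasing on GCC/Clang):
uint32_t a00a01 = *(uint32_t *)&a[0][0];  /* packs a[0][0] | a[0][1] << 16 */
uint32_t a10a11 = *(uint32_t *)&a[1][0];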
You could also perhaps avoid transposing matrix columns into registers, by generating b in transposed format in the first place.
The best way to learn about the compiler, and get a sense of the cases for which it's smarter than you, is to disassemble its results and compare the instruction sequence to your intentions.
The first main concern is that some_signed_int << 16 invokes undefined behavior for negative numbers. So you have bugs all over. And then bitwise OR of two int16_t where either is negative does not necessarily form a valid int32_t either. Do you actually need the sign or can you drop it?
ARM examples use unsigned int, which in turn supposedly contains 2x int16_t in raw binary form. This is what you actually want too.
Also it would seem that it shouldn't matter for SMUAD which 16 bit word you place where. So the a[0][0] | a[0][1]<<16; just serves to needlessly swap data around in memory. It will confuse the compiler which can't optimize such code well. Sure, shifts etc are always very fast, but this is pointless overhead.
(As someone noted, this whole thing is likely much easier to write in pure assembler without concern of all the C type rules and undefined behavior.)
To avoid all these issues you could define your own union type:
typedef union
{
int16_t i16 [2][2];
uint32_t u32 [2];
} mat2x2_t;
u32[0] corresponds to i16[0][0] and i16[0][1]
u32[1] corresponds to i16[1][0] and i16[1][1]
C actually lets you "type pun" between these types pretty wildly (unlike C++). Unions also dodge the brittle strict aliasing rules.
The function can then become something along the lines of this pseudo code:
static uint32_t mat_mul16 (mat2x2_t a, mat2x2_t b)
{
uint32_t c0 = __SMUAD(a.u32[0], b.u32[0]);
...
}
Supposedly each such line should give 2x signed 16 multiplications as per the SMUAD instruction.
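For concreteness, a filled-in version of that pseudo code might look like the sketch below. It assumes the CMSIS __SMUAD intrinsic is available (via the device header), that b is supplied already transposed as suggested earlier (so bT.u32[0] packs the first column of the original b), and that the names are illustrative only:
static inline void mat_mul_2x2(const mat2x2_t *a, const mat2x2_t *bT,
                               int32_t c[2][2])
{
    /* Each __SMUAD does two 16x16 multiplies and adds the products:
       packed row of a "dot" packed column of b. */
    c[0][0] = __SMUAD(a->u32[0], bT->u32[0]);
    c[0][1] = __SMUAD(a->u32[0], bT->u32[1]);
    c[1][0] = __SMUAD(a->u32[1], bT->u32[0]);
    c[1][1] = __SMUAD(a->u32[1], bT->u32[1]);
}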
As for whether this actually gives some revolutionary performance increase compared to a plain MUL, I kind of doubt it. Disassemble and count CPU ticks.
am I overthinking this and I should just let the compiler do its job as it is smarter than I am?
Most likely :) The old rule of thumb: benchmark and then only manually optimize at the point when you've actually found a performance bottleneck.
I want to know which of these functions is easier for the CPU to calculate/run. I was told that direct multiplication (e.g. 4x3) is more difficult for the CPU to calculate than a series of additions (e.g. 4+4+4). Well, the first one uses direct multiplication, but the second one uses a for loop.
Algorithm 1
The first one is like x*y:
int multi_1(int x, int y)
{
return x * y;
}
Algorithm 2
The second one is like x+x+x+...+x (as much as y):
int multi_2(int num1, int num2)
{
int sum=0;
for(int i=0; i<num2; i++)
{
sum += num1;
}
return sum;
}
Please don't respond with "Don't try to do micro-optimization" or something similar. How can I evaluate which of these codes run better/faster? Does C language automatically convert direct multiplication to summation?
You can generally expect the multiplication operator * to be implemented as efficiently as possible. Beating it with a custom multiplication algorithm is highly unlikely. If for any reason multi_2 is faster than multi_1 for all but some edge cases, consider writing a bug report against your compiler vendor.
On modern (i.e. made in this century) machines, multiplications by arbitrary integers are extremely fast and take four cycles at most, which is faster than initializing the loop in multi_2.
The more "high level" your code is, the more optimization paths your compiler will be able to use. So, I'd say that code #1 will have the most chances to produce a fast and optimized code.
In fact, for a simple CPU architecture that doesn't support direct multiplication operations, but does support addition and shifts, the second algorithm won't be used at all. The usual procedure is something similar to the following code:
unsigned int mult_3 (unsigned int x, unsigned int y)
{
unsigned int res = 0;
while (x)
{
res += (x&1)? y : 0;
x>>=1;
y<<=1;
}
return res;
}
Typical modern CPUs can do multiplication in hardware, often at the same speed as addition. So clearly #1 is better.
Even if multiplication is not available and you are stuck with addition there are algorithms much faster than #2.
You were misinformed. Multiplication is not "more difficult" than repeated addition. Multipliers are built into the ALU (Arithmetic & Logic Unit) of modern CPUs, and they work in constant time. By contrast, repeated additions take time proportional to the value of one of the operands, which could be as large as one billion!
Actually, multiplies are rarely performed by straight additions; when you have to implement multiplication in software, you do it with repeated shifts and adds, using a method similar to duplation, known to the Ancient Egyptians.
This depends on the architecture you run it on, as well as the compiler and the values for x and y.
If x and y are small, the second version might be faster. However, when x and y are very large numbers, the second version will certainly be much slower.
The only way to really find out is to measure the running time of your code, for example like this: https://stackoverflow.com/a/9085330/369009
Since you're dealing with int values, the multiplication operator (*) will be far more efficient. C will compile into the CPU-specific assembly language, which will have a multiplication instruction (e.g., x86's mul/imul). Virtually all modern CPUs can multiply integers within a few clock cycles. It doesn't get much faster than that. Years ago (and on some relatively uncommon embedded CPUs) it used to be the case that multiplication took more clock cycles than addition, but even then, the additional jump instructions to loop would result in more cycles being consumed, even if only looping once or twice.
The C language does not require that multiplications by integers be converted into series of additions. It permits implementations to do that, I suppose, but I would be surprised to find an implementation that did so, at least in the general context you present.
Note also that in your case #2 you have replaced one multiplication operation with not just num2 addition operations, but with at least 2 * num2 additions, num2 comparisons, and 2 * num2 storage operations. The storage operations probably end up being approximately free, as the values likely end up living in CPU registers, but they don't have to be.
Overall, I would expect alternative #1 to be much faster, but it is always best to answer performance questions by testing. You will see the largest difference for large values of num2. For instance, try with num1 == 1 and num2 == INT_MAX.
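If you want a starting point for such a test, a rough harness (illustrative only; a real benchmark should repeat the measurement many times and keep the compiler from optimizing the calls away) could look like this:
#include <limits.h>
#include <stdio.h>
#include <time.h>

/* multi_1 and multi_2 are the two functions from the question. */
int main(void)
{
    clock_t t0 = clock();
    volatile int r1 = multi_1(1, INT_MAX);
    clock_t t1 = clock();
    volatile int r2 = multi_2(1, INT_MAX);
    clock_t t2 = clock();
    printf("multi_1: %ld ticks (result %d)\n", (long)(t1 - t0), r1);
    printf("multi_2: %ld ticks (result %d)\n", (long)(t2 - t1), r2);
    return 0;
}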
I know UIKit uses CGFloat because of the resolution independent coordinate system.
But every time I want to check if for example frame.origin.x is 0 it makes me feel sick:
if (theView.frame.origin.x == 0) {
// do important operation
}
Isn't CGFloat vulnerable to false positives when comparing with ==, <=, >=, <, >?
It is a floating-point type, and floating point has precision problems: 0.0000000000041, for example.
Is Objective-C handling this internally when comparing, or can it happen that an origin.x which reads as zero does not compare to 0 as true?
First of all, floating point values are not "random" in their behavior. Exact comparison can and does make sense in plenty of real-world usages. But if you're going to use floating point you need to be aware of how it works. Erring on the side of assuming floating point works like real numbers will get you code that quickly breaks. Erring on the side of assuming floating point results have large random fuzz associated with them (like most of the answers here suggest) will get you code that appears to work at first but ends up having large-magnitude errors and broken corner cases.
First of all, if you want to program with floating point, you should read this:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Yes, read all of it. If that's too much of a burden, you should use integers/fixed point for your calculations until you have time to read it. :-)
Now, with that said, the biggest issues with exact floating point comparisons come down to:
The fact that lots of values you may write in the source, or read in with scanf or strtod, do not exist as floating point values and get silently converted to the nearest approximation. This is what demon9733's answer was talking about.
The fact that many results get rounded due to not having enough precision to represent the actual result. An easy example where you can see this is adding x = 0x1fffffe and y = 1 as floats. Here, x has 24 bits of precision in the mantissa (ok) and y has just 1 bit, but when you add them, their bits are not in overlapping places, and the result would need 25 bits of precision. Instead, it gets rounded (to 0x2000000 in the default rounding mode); see the sketch after this list.
The fact that many results get rounded due to needing infinitely many places for the correct value. This includes both rational results like 1/3 (which you're familiar with from decimal where it takes infinitely many places) but also 1/10 (which also takes infinitely many places in binary, since 5 is not a power of 2), as well as irrational results like the square root of anything that's not a perfect square.
Double rounding. On some systems (particularly x86), floating point expressions are evaluated in higher precision than their nominal types. This means that when one of the above types of rounding happens, you'll get two rounding steps, first a rounding of the result to the higher-precision type, then a rounding to the final type. As an example, consider what happens in decimal if you round 1.49 to an integer (1), versus what happens if you first round it to one decimal place (1.5) then round that result to an integer (2). This is actually one of the nastiest areas to deal with in floating point, since the behaviour of the compiler (especially for buggy, non-conforming compilers like GCC) is unpredictable.
Transcendental functions (trig, exp, log, etc.) are not specified to have correctly rounded results; the result is just specified to be correct within one unit in the last place of precision (usually referred to as 1ulp).
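To make the second point above concrete, here is a minimal sketch of the 0x1fffffe + 1 example; the exact behaviour can vary with excess-precision evaluation, which is the double-rounding issue just mentioned:
#include <stdio.h>

int main(void)
{
    float x = 0x1fffffe;   /* 24 significant bits: exactly representable */
    float y = 1.0f;
    float sum = x + y;     /* exact result 0x1ffffff needs 25 bits */
    printf("%.1f\n", sum); /* typically prints 33554432.0, i.e. 0x2000000 */
    return 0;
}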
When you're writing floating point code, you need to keep in mind what you're doing with the numbers that could cause the results to be inexact, and make comparisons accordingly. Often times it will make sense to compare with an "epsilon", but that epsilon should be based on the magnitude of the numbers you are comparing, not an absolute constant. (In cases where an absolute constant epsilon would work, that's strongly indicative that fixed point, not floating point, is the right tool for the job!)
Edit: In particular, a magnitude-relative epsilon check should look something like:
if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y))
Where FLT_EPSILON is the constant from float.h (replace it with DBL_EPSILON for doubles or LDBL_EPSILON for long doubles) and K is a constant you choose such that the accumulated error of your computations is definitely bounded by K units in the last place (and if you're not sure you got the error bound calculation right, make K a few times bigger than what your calculations say it should be).
Finally, note that if you use this, some special care may be needed near zero, since FLT_EPSILON does not make sense for denormals. A quick fix would be to make it:
if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y) || fabs(x-y) < FLT_MIN)
and likewise substitute DBL_MIN if using doubles.
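Wrapped up as a helper (a sketch only; nearly_equal is an illustrative name, and K must still come from your own error-bound analysis):
#include <float.h>
#include <math.h>

static int nearly_equal(double x, double y, double K)
{
    /* Magnitude-relative check, with a denormal guard near zero. */
    return fabs(x - y) < K * DBL_EPSILON * fabs(x + y)
        || fabs(x - y) < DBL_MIN;
}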
Since 0 is exactly representable as an IEEE754 floating-point number (or using any other implementation of f-p numbers I've ever worked with) comparison with 0 is probably safe. You might get bitten, however, if your program computes a value (such as theView.frame.origin.x) which you have reason to believe ought to be 0 but which your computation cannot guarantee to be 0.
To clarify a little, a computation such as :
areal = 0.0
will (unless your language or system is broken) create a value such that (areal==0.0) returns true but another computation such as
areal = 1.386 - 2.1*(0.66)
may not.
If you can assure yourself that your computations produce values which are 0 (and not just that they produce values which ought to be 0) then you can go ahead and compare f-p values with 0. If you can't assure yourself to the required degree, best stick to the usual approach of 'toleranced equality'.
In the worst cases the careless comparison of f-p values can be extremely dangerous: think avionics, weapons-guidance, power-plant operations, vehicle navigation, almost any application in which computation meets the real world.
For Angry Birds, not so dangerous.
I want to give a bit of a different answer than the others. They are great for answering your question as stated but probably not for what you need to know or what your real problem is.
Floating point in graphics is fine! But there is almost no need to ever compare floats directly. Why would you need to do that? Graphics uses floats to define intervals. And comparing if a float is within an interval also defined by floats is always well defined and merely needs to be consistent, not accurate or precise! As long as a pixel (which is also an interval!) can be assigned that's all graphics needs.
So if you want to test if your point is outside a [0..width[ range this is just fine. Just make sure you define inclusion consistently. For example always define inside is (x>=0 && x < width). The same goes for intersection or hit tests.
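As a tiny illustration (names hypothetical), such a consistent half-open range test needs no epsilon at all:
#include <stdbool.h>

/* Consistent, epsilon-free containment test for [0, width) x [0, height). */
static bool point_in_view(float x, float y, float width, float height)
{
    return x >= 0.0f && x < width
        && y >= 0.0f && y < height;
}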
However, if you are abusing a graphics coordinate as some kind of flag, like for example to see if a window is docked or not, you should not do this. Use a boolean flag that is separate from the graphics presentation layer instead.
Comparing to zero can be a safe operation, as long as the zero wasn't a calculated value (as noted in an above answer). The reason for this is that zero is a perfectly representable number in floating point.
Talking perfectly representable values, you get 24 bits of range in a power-of-two notion (single precision). So 1, 2, 4 are perfectly representable, as are .5, .25, and .125. As long as all your important bits are in 24 bits, you are golden. So 10.625 can be represented precisely.
This is great, but will quickly fall apart under pressure. Two scenarios spring to mind:
1) When a calculation is involved. Don't trust that sqrt(3)*sqrt(3) == 3. It just won't be that way. And it probably won't be within an epsilon, as some of the other answers suggest.
2) When any non-power-of-2 (NPOT) is involved. So it may sound odd, but 0.1 is an infinite series in binary and therefore any calculation involving a number like this will be imprecise from the start.
(Oh and the original question mentioned comparisons to zero. Don't forget that -0.0 is also a perfectly valid floating-point value.)
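Both scenarios (and the -0.0 remark) are easy to check for yourself; the comments show what IEEE-754 double arithmetic typically gives:
#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", sqrt(3.0) * sqrt(3.0) == 3.0); /* typically 0: a calculation is involved */
    printf("%d\n", 0.1 + 0.2 == 0.3);             /* typically 0: 0.1 is not exact in binary */
    printf("%d\n", 0.0 == -0.0);                  /* 1: -0.0 compares equal to 0.0 */
    return 0;
}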
[The 'right answer' glosses over selecting K. Selecting K ends up being just as ad-hoc as selecting VISIBLE_SHIFT but selecting K is less obvious because unlike VISIBLE_SHIFT it is not grounded on any display property. Thus pick your poison - select K or select VISIBLE_SHIFT. This answer advocates selecting VISIBLE_SHIFT and then demonstrates the difficulty in selecting K]
Precisely because of rounding errors, you should not use comparison of 'exact' values for logical operations. In your specific case of a position on a visual display, it can't possibly matter if the position is 0.0 or 0.0000000003 - the difference is invisible to the eye. So your logic should be something like:
#define VISIBLE_SHIFT 0.0001 // for example
if (fabs(theView.frame.origin.x) < VISIBLE_SHIFT) { /* ... */ }
However, in the end, 'invisible to the eye' will depend on your display properties. If you can upper bound the display (you should be able to); then choose VISIBLE_SHIFT to be a fraction of that upper bound.
Now, the 'right answer' rests upon K so let's explore picking K. The 'right answer' above says:
K is a constant you choose such that the accumulated error of your
computations is definitely bounded by K units in the last place (and
if you're not sure you got the error bound calculation right, make K a
few times bigger than what your calculations say it should be)
So we need K. If getting K is more difficult, less intuitive than selecting my VISIBLE_SHIFT then you'll decide what works for you. To find K we are going to write a test program that looks at a bunch of K values so we can see how it behaves. Ought to be obvious how to choose K, if the 'right answer' is usable. No?
We are going to use, as the 'right answer' details:
if (fabs(x-y) < K * DBL_EPSILON * fabs(x+y) || fabs(x-y) < DBL_MIN)
Let's just try all values of K:
#include <math.h>
#include <float.h>
#include <stdio.h>
int main (void)
{
double x = 1e-13;
double y = 0.0;
double K = 1e22;
int i = 0;
for (; i < 32; i++, K = K/10.0)
{
printf ("K:%40.16lf -> ", K);
if (fabs(x-y) < K * DBL_EPSILON * fabs(x+y) || fabs(x-y) < DBL_MIN)
printf ("YES\n");
else
printf ("NO\n");
}
}
ebg#ebg$ gcc -o test test.c
ebg#ebg$ ./test
K:10000000000000000000000.0000000000000000 -> YES
K: 1000000000000000000000.0000000000000000 -> YES
K: 100000000000000000000.0000000000000000 -> YES
K: 10000000000000000000.0000000000000000 -> YES
K: 1000000000000000000.0000000000000000 -> YES
K: 100000000000000000.0000000000000000 -> YES
K: 10000000000000000.0000000000000000 -> YES
K: 1000000000000000.0000000000000000 -> NO
K: 100000000000000.0000000000000000 -> NO
K: 10000000000000.0000000000000000 -> NO
K: 1000000000000.0000000000000000 -> NO
K: 100000000000.0000000000000000 -> NO
K: 10000000000.0000000000000000 -> NO
K: 1000000000.0000000000000000 -> NO
K: 100000000.0000000000000000 -> NO
K: 10000000.0000000000000000 -> NO
K: 1000000.0000000000000000 -> NO
K: 100000.0000000000000000 -> NO
K: 10000.0000000000000000 -> NO
K: 1000.0000000000000000 -> NO
K: 100.0000000000000000 -> NO
K: 10.0000000000000000 -> NO
K: 1.0000000000000000 -> NO
K: 0.1000000000000000 -> NO
K: 0.0100000000000000 -> NO
K: 0.0010000000000000 -> NO
K: 0.0001000000000000 -> NO
K: 0.0000100000000000 -> NO
K: 0.0000010000000000 -> NO
K: 0.0000001000000000 -> NO
K: 0.0000000100000000 -> NO
K: 0.0000000010000000 -> NO
Ah, so K should be 1e16 or larger if I want 1e-13 to be 'zero'.
So, I'd say you have two options:
Do a simple epsilon computation using your engineering judgement for the value of 'epsilon', as I've suggested. If you are doing graphics and 'zero' is meant to be a 'visible change' than examine your visual assets (images, etc) and judge what epsilon can be.
Don't attempt any floating point computations until you've read the non-cargo-cult answer's reference (and gotten your Ph.D in the process) and then use your non-intuitive judgement to select K.
The correct question: how does one compare points in Cocoa Touch?
The correct answer: CGPointEqualToPoint().
A different question: Are two calculated values are the same?
The answer posted here: They are not.
How to check if they are close? If you want to check if they are close, then don't use CGPointEqualToPoint(). But, don't check to see if they are close. Do something that makes sense in the real world, like checking to see if a point is beyond a line or if a point is inside a sphere.
The last time I checked the C standard, there was no requirement for floating point operations on doubles (64 bits total, 53 bit mantissa) to be accurate to more than that precision. However, some hardware might do the operations in registers of greater precision, and the requirement was interpreted to mean no requirement to clear lower order bits (beyond the precision of the numbers being loaded into the registers). So you could get unexpected results of comparisons like this depending on what was left over in the registers from whoever slept there last.
That said, and despite my efforts to expunge it whenever I see it, the outfit where I work has lots of C code that is compiled using gcc and run on linux, and we have not noticed any of these unexpected results in a very long time. I have no idea whether this is because gcc is clearing the low-order bits for us, the 80-bit registers are not used for these operations on modern computers, the standard has been changed, or what. I'd like to know if anyone can quote chapter and verse.
You can use code like this to compare a float with zero:
if ((int)(theView.frame.origin.x * 100) == 0) {
// do important operation
}
This compares with 0.01 accuracy, which is enough for CGFloat in this case.
Another issue that may need to be kept in mind is that different implementations do things differently. One example of this that I am very familiar with is the FP units on the Sony Playstation 2. They have significant discrepancies when compared to the IEEE FP hardware in any X86 device. The cited article mentions the complete lack of support for inf and NaN, and it gets worse.
Less well known is what I came to know as the "one bit multiply" error. For certain values of float x:
y = x * 1.0;
assert(y == x);
would fail the assert. In the general case, sometimes, but not always, the result of a FP multiply on the Playstation 2 had a mantissa that was a single bit less than the equivalent IEEE mantissa.
My point being that you should not assume that porting FP code from one platform to another will produce the same results. Any given platform is internally consistent, in that results don't change on that platform; it's just that they may not agree with a different platform. E.g. CPython on X86 uses 64 bit doubles to represent floats, while CircuitPython on a Cortex M0 has to use software FP, and only uses 32 bit floats. Needless to say that will introduce discrepancies.
A quote I learned over 40 years ago is as true today as the day I learned it. "Doing floating point maths on a computer is like moving a pile of sand. Every time you do anything, you leave a little sand behind and pick up a little dirt."
Playstation is a registered trademark of Sony Corporation.
-(BOOL)isFloatEqual:(CGFloat)firstValue secondValue:(CGFloat)secondValue {
    BOOL isEqual = NO;
    NSNumber *firstValueNumber = [NSNumber numberWithDouble:firstValue];
    NSNumber *secondValueNumber = [NSNumber numberWithDouble:secondValue];
    isEqual = [firstValueNumber isEqualToNumber:secondValueNumber];
    return isEqual;
}
I am using the following comparison function to compare values to a given number of decimal places:
bool compare(const double value1, const double value2, const int precision)
{
int64_t magnitude = static_cast<int64_t>(std::pow(10, precision));
int64_t intValue1 = static_cast<int64_t>(value1 * magnitude);
int64_t intValue2 = static_cast<int64_t>(value2 * magnitude);
return intValue1 == intValue2;
}
// Compare 9 decimal places:
if (compare(theView.frame.origin.x, 0, 9)) {
// do important operation
}
I'd say the right thing is to declare each number as an object, and then define three things in that object: 1) an equality operator, 2) a setAcceptableDifference method, and 3) the value itself. The equality operator returns true if the absolute difference of two values is less than the value set as acceptable.
You can subclass the object to suit the problem. For example, round bars of metal between 1 and 2 inches might be considered of equal diameter if their diameters differed by less than 0.0001 inches. So you'd call setAcceptableDifference with parameter 0.0001, and then use the equality operator with confidence.
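A plain-C sketch of that idea (the names, and the choice to use the stricter of the two tolerances, are mine and purely illustrative):
#include <math.h>
#include <stdbool.h>

typedef struct {
    double value;
    double acceptable_difference;   /* set per problem, e.g. 0.0001 inches */
} toleranced_double;

static bool toleranced_equal(toleranced_double a, toleranced_double b)
{
    /* Equal if the absolute difference is within the stricter tolerance. */
    double tol = fmin(a.acceptable_difference, b.acceptable_difference);
    return fabs(a.value - b.value) < tol;
}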
Let's say we have these inequalities:
if (a*a+b*b>0) {
...
}
if (a*b+c*d>0) {
...
}
Obviously, both of them require 2 multiplications to evaluate.
The thing is, do we really need to calculate 2 full-precision products just to check whether these expressions are positive or not?
Is there any mathematical trickery that allows me to write those if statements without the need to evaluate 2 products?
Will it be faster?
Or perhaps the compiler takes care of making it as fast as possible?
Am I overthinking?
EDIT:
Well, that escalated quickly.
I just want to point out that I am speaking in general terms. I don't need such a micro-optimization in any project of mine anyway.
Also, yes, I could have omitted the first one for being too trivial. Possibly the second one is more interesting.
Your "am I overthinking" question suggests me that you haven't found this to be an actual bottleneck by really profiling your code. So I'd say yes, you're just trying to do premature optimization.
However, if this really is a major performance-critical part of your application, then the only improvement I can think of right now is the following. Since squares of real numbers can never be negative, "a squared is greater than zero" is equivalent to "a is not zero". So if comparisons are fast (well, that's relative -- faster than multiplication) on your architecture, then
if (a*a+b*b>0) {
...
}
can be written as
if (a || b) {
...
}
(provided that no corner cases arise. If the variables are signed integers or floating-point numbers representing real numbers, then this should be fine. If, however, there are some unsigned integer overflow or complex numbers involved, then you will have to perform additional checks, and at that point, it's hard to reason about the relative performance without true profiling.)
I don't have such a "clever" "optimization" for the second case in my mind, but perhaps someone else can come up with something similar -- if and only if it is absolutely necessary. Not otherwise -- code readability is preferred over performance when performance is not critical.
I'm assuming none of these expressions will overflow either because the types don't have a concept of overflow or because the values are in range. When overflow and potential wrap-around enters the picture (e.g. if a and b are unsigned int) different rules apply.
The first statement is obviously equivalent to
if (a != 0 || b != 0)
or
if (a || b)
which trades an extra branch for two multiplications and an addition.
The second statement is a bit more interesting: I'd think it could be reasonable to determine the signs of the operands and only do the actual math when a*b and c*d have opposite signs; see the sketch below. In all other cases the condition can be determined without knowing the actual values. Whether the resulting logic is faster than the computations will depend on the types, I'd guess.
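A sketch of that idea for signed integers (assuming a*b + c*d cannot overflow; the names are illustrative):
static int sign(int v) { return (v > 0) - (v < 0); }

/* Returns nonzero iff a*b + c*d > 0, multiplying only when necessary. */
static int sum_of_products_is_positive(int a, int b, int c, int d)
{
    int s1 = sign(a) * sign(b);   /* sign of a*b without computing it */
    int s2 = sign(c) * sign(d);   /* sign of c*d */

    if (s1 >= 0 && s2 >= 0)       /* both products are >= 0 */
        return s1 > 0 || s2 > 0;  /* positive unless both products are zero */
    if (s1 <= 0 && s2 <= 0)       /* both products are <= 0 */
        return 0;
    return a * b + c * d > 0;     /* opposite signs: must do the math */
}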
The first one will always be >= 0. It will be 0 if and only if a and b are 0, so it's equivalent to:
if (a || b) {
...
}
About the second one: if sign of a is equal to sign of b, and sign of c is equal to sign of d, then it's the same situation as above:
if (sign(a)==sign(b) && sign(c)==sign(d))
{
if ((a && b) || (c && d))
{
... > 0
}
else
{
... = 0
}
}
else
{
if (sign(a)*sign(b)==sign(c)*sign(d))
{
... <= 0
}
else
{
/* must do the actual product to find out */
}
}
For an IEEE-754-compliant floating-point number, the sign is at the MSb of each number.
For environments in which FP is emulated, there's one thing you can do to optimize the comparison a bit: you can avoid the addition by comparing one product against the negation of the other (negating a floating-point number is just a sign-bit flip), like this:
if (a*b > -(c*d)) {
...
}
This is a bit faster because comparing two floating-point numbers essentially needs only integer operations on their bit patterns, and an FP-less CPU surely has the resources to do that faster than the time it spends doing an FP software addition.
Another rewrite (assuming you are using floats, they are 32 bits wide and IEEE 754 compliant, and the same size as int; yes, this is hacky and platform-dependent).
For the first case, you can use a single bitwise 'or' and 'and' (the 'and' masks off the sign bits so that -0.0 still counts as zero; you can remove it if there cannot be any -0s):
if ((*(int *)&a | *(int *)&b) & 0x7FFFFFFF) { // a*a + b*b > 0
...
}
I really doubt that there is any similarly branch-less magic for the second case.
I'm looking for an existing implementation for C or D, or advice in implementing, signed and/or unsigned integer types with floating point semantics.
That is to say, an integer type that behaves as floating point types do when doing arithmetic: Overflow produces infinity (-infinity for signed underflow) rather than wrapping around or having undefined behavior, undefined operations produce NaN, etc.
In essence, a version of floating point where the distribution of representable numbers falls evenly on the number line, instead of conglomerating around 0.
In addition, all operations should be deterministic; any given two's complement 32-bit architecture should produce the exact same result for the same computation, regardless of its implementation (whereas floating point may, and often will, produce slightly differing results).
Finally, performance is a concern, which has me worried about potential "bignum" (arbitrary-precision) solutions.
See also: Fixed-point and saturation arithmetic.
I do not know of any existing implementations of this.
But I would imagine implementing it would be a matter of (in D):
enum CheckedIntState : ubyte
{
ok,
overflow,
underflow,
nan,
}
struct CheckedInt(T)
if (isIntegral!T)
{
private T _value;
private CheckedIntState _state;
// Constructors, getters, conversion helper methods, etc.
// And a bunch of operator overloads that check the
// result on every operation and yield a CheckedInt!T
// with an appropriate state.
// You'll also want to overload opEquals and opCmp and
// make them check the state of the operands so that
// NaNs compare equal and so on.
}
Saturating arithmetic does what you want except for the part where undefined operations produce NaN; this is going to turn out to be problematic, because most saturating implementations use the full number range, and so there are no values left over to reserve for NaN. Thus, you probably can't easily build this on the back of saturating hardware instructions unless you have an additional "is this value NaN" field, which is rather wasteful.
Assuming that you're wedded to the idea of NaN values, all of the edge case detection will probably need to happen in software. For most integer operations, this is pretty straightforward, especially if you have a wider type available (let's assume long long is strictly larger than whatever integer type underlies myType):
myType add(myType x, myType y) {
if ((x == positiveInfinity && y == negativeInfinity) ||
    (x == negativeInfinity && y == positiveInfinity))
    return notANumber;
long long wideResult = (long long)x + (long long)y;  /* widen before adding */
if (wideResult >= positiveInfinity) return positiveInfinity;
if (wideResult <= negativeInfinity) return negativeInfinity;
return (myType)wideResult;
}
One solution might be to implement multiple-precision arithmetic with abstract data types. The book C Interfaces and Implementations by David Hanson has a chapter (interface and implementation) of MP arithmetic.
Doing calculations using scaled integers is also a possibility. You might be able to use his arbitrary-precision arithmetic, although I believe this implementation can't overflow. You could run out of memory, but that's a different problem.
In either case, you might need to tweak the code to return exactly what you want on overflow and such.
Source code (MIT license)
That page also has a link to buy the book from amazon.com.
Half of the requirements are satisfied by saturating arithmetic, which is implemented in e.g. ARM instructions, MMX and SSE.
As also pointed out by Stephen Canon, one needs additional elements to check for overflow / NaN. Some instruction sets (Atmel at least), by the way, have a sticky flag to test for overflows (it could be used to differentiate inf from max_int). And perhaps "Q" + 0 could mark NaN.