float FastInvSqrt(float x) {
    float xhalf = 0.5f * x;
    int i = *(int*)&x;            // evil bit-level floating-point hacking
    i = 0x5f3759df - (i >> 1);    // what the...?
    x = *(float*)&i;
    x = x * (1.5f - (xhalf * x * x));
    return x;
}
There are many places to read about this on the internet, but they all skip over the line:
int i = *(int*)&x;
Could somebody please explain it to me?
It means: take the address of the variable x (whatever type it may be), cast that address to an int pointer, then dereference it to read an int from that address.
Technically it's undefined behaviour (it violates the strict aliasing rule), but it works as intended on many implementations. I suspect the author didn't really care that much, based on the non-readability of the code :-) They could at least have documented the method, even if with only a URL.
The cast after the integer manipulation line (the one you've so elegantly described with WTF) is similar: it goes from int back to float.
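If you want the same bit-reinterpretation without undefined behaviour, memcpy does it in a well-defined way (a minimal sketch, assuming int and float are both 32 bits, as the original code does):
#include <string.h> /* for memcpy */

int i;
memcpy(&i, &x, sizeof i); /* copy x's bytes into i: same bits, no aliasing violation */
/* ... integer bit twiddling on i ... */
memcpy(&x, &i, sizeof x); /* and back to float again */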
Related
If I have a variable such as float var1 = 157.1; and want to convert it to an int, I would do int var2 = (int)var1;
I want to know about the other types of data, such as long int, short int, unsigned short int, long double and so on.
I tried long int var2 = (long int)var1; and it seemed to work, but I'm not sure if it is syntactically correct. If it is, I assume it'd be the same for all the other types, i.e., just the type name and its qualifiers separated by spaces. If it isn't, I'd like to know whether there's a list of them somewhere.
This is the C cast operator; the operation is more generally called "type casting", "casting", or "recasting". It is a directive to the compiler requesting a specific conversion.
When casting, any valid C type can be specified, so:
int x = 10;
unsigned long long y = (unsigned long long) x;
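For instance, the types you listed all work the same way (a quick sketch with arbitrary values):
float var1 = 157.1f;
long int var2 = (long int) var1;                     /* 157 */
short int var3 = (short int) var1;                   /* 157 */
unsigned short int var4 = (unsigned short int) var1; /* 157 */
long double var5 = (long double) var1;               /* 157.1 (value unchanged, just widened) */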
In many cases this conversion happens implicitly and automatically, so a cast isn't always necessary, but in other cases you must force it. For example:
int x = 10;
float y = x; // Valid, int -> float happens automatically.
You can get caught by surprise though:
int x = 10;
float y = x / 3; // y = 3.0, not 3.333, since it does integer division before casting
Where you need to cast to get the right result:
int x = 10;
float y = (float) x / 3; // 3.33333...
Note that when using pointers this is a whole different game:
int x = 10;
int* px = &x;
float* y = (float*) px; // compiles, but dereferencing y is undefined behaviour: it reinterprets the int's bits as a float
Generally C trusts you to know what you're doing, so you can easily shoot yourself in the foot. What "compiles" is syntactically valid by definition, but executing properly without crashing is a whole other concern. Anything not specified by the C "rule book" (C standard) is termed undefined behaviour, so you'll need to be aware of when you're breaking the rules, like in that last example.
Sometimes breaking the rules is necessary, like the Fast Inverse Square Root which relies on the ability of C to arbitrarily recast values.
I saw the following code here.
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                     // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );             // what the heck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
//  y  = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

    return y;
}
I don't understand the following line.
i = * ( long * ) &y;
Generally, we use * and & with pointers, but here both are used with a variable. So what does this line do?
The line is taking a float, looking at the memory holding that float, reinterpreting that memory as memory holding a long, and getting that long. Basically, it's reinterpreting the bit-pattern of a floating point number as that of an integer, in order to mess around with its bits.
Unfortunately, that code is also wrong. You are not allowed to dereference that cast pointer, for reasons described here. In C, the one-and-only way of reinterpreting a bit pattern is through memcpy. (Depending on C variant and implementation, going through a union may be acceptable as well.)
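For illustration, the union route for that one line might look like this (a sketch; C99 and later permit reading a union member other than the one last written, though C++ does not, and it still assumes float and long are the same size):
union { float f; long l; } pun;
pun.f = y;  /* store the float member... */
i = pun.l;  /* ...then read the same bytes back as a long */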
First, a disclaimer: This is technically undefined behavior because it violates the strict aliasing rule, but most compilers will do the below, and I don’t know what the standards situation was when this was first written.
When you look at the expression, there are four main parts:
y is the float variable we want to convert. Simple enough.
& is the usual address-of operator, so &y is a pointer to y.
(long *) is a cast to a pointer to long, so (long *) &y is a pointer-to-long pointing to the same location in memory as y is at. There is no real long there, just a float, but if both float and long are 32 bits (like the code assumes), this will give you a pointer to a long with the same bit pattern as a float.
Finally, * dereferences the pointer. Thus, the full expression, * ( long * ) &y, gives you a long with the same bit pattern as y.
Usually, a long with the same bit pattern as a float would be useless, because they store numbers in completely different ways. However, it's easier to do bit manipulation on a long, and the program later converts it back to a float.
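Putting the pieces together, a version of the whole function without the aliasing problem could look like this (a sketch, not the original code: it uses uint32_t from <stdint.h> and assumes float is 32-bit IEEE 754):
#include <stdint.h>
#include <string.h>

float Q_rsqrt(float number)
{
    const float threehalfs = 1.5F;
    float x2 = number * 0.5F;
    float y = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);            /* reinterpret the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);           /* magic-constant initial guess */
    memcpy(&y, &i, sizeof y);            /* back to float */
    y = y * (threehalfs - (x2 * y * y)); /* one Newton-Raphson refinement step */
    return y;
}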
It means: take the address of y (yielding a pointer), cast it to a pointer to long, dereference it, and assign the result to i.
Here's the code:
#include <stdio.h>

union
{
    unsigned u;
    double d;
} a, b;

int main(void)
{
    printf("Enter a, b:");
    scanf("%lf %lf", &a.d, &b.d);
    if (a.d > b.d)
    {
        a.u ^= b.u ^= a.u ^= b.u;
    }
    printf("a=%g, b=%g\n", a.d, b.d);
    return 0;
}
The a.u^=b.u^=a.u^=b.u; statement should have swapped a and b if a>b, but it seems that whatever I enter, the output will always be exactly my input.
a.u^=b.u^=a.u^=b.u; causes undefined behaviour by writing to a.u twice without a sequence point. See here for discussion of this code.
You could write:
unsigned tmp;
tmp = a.u;
a.u = b.u;
b.u = tmp;
which will swap a.u and b.u. However this may not achieve the goal of swapping the two doubles, if double is a larger type than unsigned on your system (a common scenario).
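If the goal is simply to swap the two doubles, the safest route is to skip the bit tricks entirely and swap the d members with a temporary (a minimal sketch):
double tmp = a.d;
a.d = b.d;
b.d = tmp;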
It's likely that double is 64 bits, while unsigned is only 32 bits. When you swap the unsigned members of the unions, you're only getting half of the doubles.
If you change d to float, or change u to unsigned long long, it will probably work, since they're likely to be the same size.
You're also causing UB by writing to the variables twice without a sequence point. The proper way to write the XOR swap is with multiple statements.
b.u ^= a.u;
a.u ^= b.u;
b.u ^= a.u;
For more about why not to use XOR for swapping, see Why don't people use xor swaps?
In a typical environment, the data types unsigned and double have different sizes.
That is why the variables appear unchanged.
Also, you cannot apply the XOR swap to floating-point variables directly,
because they are represented completely differently in memory.
I found a very complex function; it is an implementation of the fast inverse square root. I honestly do not understand how this function works, but the following conversion between a long and a float caught my eye:
i = *(long *) &y;
Here is the full code:
inline float Q_rsqrt(float number)
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y = number;
    i = *(long *) &y;
    i = 0x5f3759df - (i >> 1);
    y = *(float *) &i;
    y = y * (threehalfs - (x2 * y * y));
    return y;
}
The cast simply reinterprets the bits of y as a long so that it can perform integer arithmetic on them.
See Wikipedia for an explanation of the algorithm: Fast inverse square root.
The code makes use of the knowledge that, on the target platform, sizeof(long) == sizeof(float).
#R.. also helpfully adds the following in a comment:
It's also invalid C -- it's an aliasing violation. A correct version of this program needs use either memcpy or possibly (this is less clear that it's correct, but real compilers document support for it) union-based type punning. The version in OP's code will definitely be "miscompiled" (i.e. in a way different than the author's intent) by real compilers though.
This means that the code is not only architecture-specific, it is also compiler-specific.
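If you do rely on that size assumption, you can at least make it explicit with a compile-time check (a sketch using C11's _Static_assert; this is an addition, not part of the original code):
_Static_assert(sizeof(long) == sizeof(float),
               "Q_rsqrt assumes long and float are the same size");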
I've written an iPhone app and encountered a problem concerning typecasting between float and int. I've rewritten the code in C and the results were always the same, no matter whether I compiled it under OS X (console and Xcode), Linux (console) or Windows (Visual Studio):
// Calculation of a page index depending on the amount of pages, its x-offset on the view
// and the total width of the view
#include <stdio.h>

int main()
{
    int result = 0;
    int pagesCnt = 23;
    float offsetX = 2142.0f;
    float width = 7038.0f;

    offsetX = offsetX / width;
    offsetX = (float)pagesCnt * offsetX;
    result = (int)offsetX;

    printf("%f (%d)\n", offsetX, result);
    // The console should show "7.000000 (7)" now
    // But actually I read "7.000000 (6)"
    return 0;
}
Of course, we have a loss of precision here. Doing the math on a calculator, the result is 7.00000000000008 and not just 7.000000.
Nevertheless, as far as I understand C, this shouldn't be a problem: C truncates the fractional part of a floating-point number when converting it to an integer, which in this case is exactly what I want.
When using double instead of float, the result would be as expected, but I don't need double precision here and it shouldn't make any difference.
Am I getting something wrong here? Are there situations when I should avoid conversion to int by using (int)? Why is C decrementing result anyway? Am I supposed to always use double?
Thanks a lot!
Edit: I misformulated something: of course it makes a difference when using double instead of float. I meant that it shouldn't make a difference whether I cast (int)7.000000f or (int)7.00000000000008.
Edit 2: Thanks to you, I understand my mistake now. Using printf() without specifying the floating-point precision hid the real value. Truncating 6.99... to 6 with (int) is correct, of course. So I'll use double in the future when encountering problems like this.
When I increase the output precision, the result I get from your program is 6.99999952316284179688 (6), so 6 seems to be correct.
If you change printf("%f (%d)\n", offsetX, result); to printf("%.8f (%d)\n", offsetX, result); you will see the problem. After making that change, the output is:
6.99999952 (6)
You can correct this by rounding instead of casting to int. That is, change result = (int)offsetX; to result = roundf(offsetX);. Don't forget to #include <math.h>. Now, the output is:
6.99999952 (7)
The compiler is not free to change the order of floating-point operations in many cases, since finite-precision arithmetic is not, in general, associative. For example:
(23 * 2142) / 7038 = 7.00000000000000000000
(2142 / 7038) * 23 = 6.99999999999999999979
These aren't single-precision results, but they show that double precision will not address your problem either.
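To see this on your own machine, a small demo along these lines prints both orderings (a sketch; the exact digits depend on your platform):
#include <stdio.h>

int main(void)
{
    double a = (23.0 * 2142.0) / 7038.0; /* multiply first: exactly 7 */
    double b = (2142.0 / 7038.0) * 23.0; /* divide first: slightly below 7 */
    printf("%.20f\n%.20f\n", a, b);
    return 0;
}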