This question already has answers here:
Floating point numbers do not work as expected
(6 answers)
Closed 8 years ago.
In the following code
#include <stdio.h>

int main()
{
    union myUnion
    {
        int intVar;
        char charVar;
        float floatVar;
    };

    union myUnion localVar;
    localVar.intVar = 10;
    localVar.charVar = 'A';
    localVar.floatVar = 20.2;

    printf("%d ", localVar.intVar);
    printf("%c ", localVar.charVar);
    printf("%f ", localVar.floatVar);
}
I understand that a union can hold only one value at a time. So when I assign the char value, the int is overwritten, and then when I assign the float value, the char is overwritten. So I was expecting garbage values for the int and char members and 20.200000 for the float member, as it was the last value assigned. But the following is the output I get on both VS Express and gcc:
1101109658 Ü 20.200001
I am unable to understand why the float value changed.
This has nothing to do with union, and the float value was not changed.
It simply doesn't have enough bits to represent 20.2 exactly as a binary float: the binary expansion of 20.2 repeats forever, so the nearest representable value gets stored instead. But that's okay, nobody has that many bits.
You should read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
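A minimal sketch to see this for yourself, assuming the common 32-bit IEEE 754 float: print the same value with more digits than %f's default six.

#include <stdio.h>

int main(void)
{
    float f = 20.2f;

    printf("%f\n", f);   /* 20.200001 -- rounded to 6 decimal places */
    printf("%.9f\n", f); /* 20.200000763 -- the nearest 32-bit float to 20.2 */
    return 0;
}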
This question already has answers here:
What does a colon in a struct declaration mean, such as :1, :7, :16, or :32?
(3 answers)
Closed 2 years ago.
While going through some C code, I encountered statements like
char var1 : num1, char var2: num2;
From the context, it seems that the number, i.e. num1, is the size in bytes.
I am unable to find any explanation.
This could be part of what is called a bit-field in the C programming language.
Bit-fields can only be declared inside a struct (or union), e.g.
struct {
    unsigned int flag  : 1; /* A one-bit flag */
    unsigned int value : 5; /* A 5-bit value  */
} option;

if (option.flag == 1)
    option.value = 7;
Almost everything about bit-fields is implementation-defined. The intention is to let the compiler arrange bit-fields as compactly as possible; e.g. the above could well fit in one byte.
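You can ask the compiler how it actually packed a bit-field with sizeof; a small sketch (the struct tag opts is illustrative, and the printed size is implementation-defined):

#include <stdio.h>

struct opts {
    unsigned int flag  : 1;
    unsigned int value : 5;
};

int main(void)
{
    /* only 6 bits are requested, but the compiler picks the storage
       unit; many implementations report sizeof(unsigned int) here */
    printf("sizeof(struct opts) = %zu\n", sizeof(struct opts));
    return 0;
}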
This question already has answers here:
for (unsigned char i = 0; i<=0xff; i++) produces infinite loop
(4 answers)
Closed 5 years ago.
#include <stdio.h>

int main() {
    unsigned char var = -100;

    for (var = 0; var <= 255; var++) {
        printf("%d ", var);
    }
}
The output (run on Code::Blocks IDE version 16.01) is attached below. Why is the output an infinite loop?
This condition var <= 255 is always true for an unsigned char, assuming CHAR_BIT is 8 on your platform. So the loop is infinite since the increment will cause the variable to wrap (that's what unsigned arithmetic does in C).
This initialization:
unsigned char var = -100;
is not an error, it's simply annoying code. The compiler will convert the int -100 to unsigned char, following the rules in the language specification.
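A quick sketch of what that conversion does (assuming an 8-bit unsigned char with range 0..255):

#include <stdio.h>

int main(void)
{
    unsigned char var = -100; /* converted modulo 256: -100 + 256 == 156 */

    printf("%d\n", var); /* prints 156 */
    return 0;
}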
You are using an unsigned char, whose possible range is 0-255.
You are running your loop from 0 to 255 (inclusive). The moment your variable would reach 256, it wraps back to 0. Also, the initial value -100 is treated as +156, because of this range.
So this leads to an infinite loop.
This is because of unsigned char wrap-around. One fix is to remove the = from the loop condition (note that the loop then stops after 254, so 255 itself is never printed):

for (var = 0; var < 255; var++) {
}
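If you do want to print 255 as well, use a loop counter wide enough for the comparison to eventually become false; a minimal sketch:

#include <stdio.h>

int main(void)
{
    /* an int can hold 256, so i <= 255 can eventually be false */
    for (int i = 0; i <= 255; i++) {
        printf("%d ", (unsigned char)i);
    }
}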
For more information, see this Stack Overflow question.
unsigned char
Its range is 0 to 255. When var is 255 and is incremented, we get the value 256, which cannot be stored in an unsigned char; it wraps around to 0, and that is why it ends in an infinite loop. And when you initialize var as -100, it will not show any error, because the compiler converts -100 to binary and takes the first (lowest) eight bits; the corresponding value, 156, becomes the value of var.
I was trying out a few examples of do's and don'ts of typecasting. I could not understand why the following code snippets fail to output the correct result.
/* int to float */
#include <stdio.h>

int main() {
    int i = 37;
    float f = *(float*)&i;
    printf("\n %f \n", f);
    return 0;
}
This prints 0.000000
/* float to short */
#include <stdio.h>

int main() {
    float f = 7.0;
    short s = *(float*)&f;
    printf("\n s: %d \n", s);
    return 0;
}
This prints 7
/* From double to char */
#include <stdio.h>

int main() {
    double d = 3.14;
    char ch = *(char*)&d;
    printf("\n ch : %c \n", ch);
    return 0;
}
This prints garbage
/* From short to double */
#include <stdio.h>

int main() {
    short s = 45;
    double d = *(double*)&s;
    printf("\n d : %f \n", d);
    return 0;
}
This prints 0.000000
Why does the cast from float to short give the correct result, while all the other conversions give wrong results when the type is cast explicitly?

I also couldn't clearly understand why the typecast (float*) is needed instead of float:

int i = 10;
float f = (float) i; // gives the correct output: 10.000000

But:

int i = 10;
float f = *(float*)&i; // gives 0.000000

What is the difference between the above two type casts? And why can't we use

float f = (float**)&i;

instead of

float f = *(float*)&i;
In this example:
char ch = *(char*)&d;
You are not casting from double to a char. You are casting from a double* to a char*; that is, you are casting from a double pointer to a char pointer.
C will convert floating point types to integer types when you cast the values, but since you are casting pointers to those values instead, no conversion is done. You get garbage because floating point numbers are stored very differently from integers.
Read about how floating point numbers are represented in memory; it's not the way you're expecting it to be. In your first snippet, the bit pattern of the int 37 (0x00000025), reinterpreted as a float, is a tiny subnormal number (about 5.2e-44), and %f rounds that to 0.000000.
If you need to convert an int to a float, the conversion is straightforward, because of C's arithmetic conversion rules.
So, it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address held by the "pointer to int" &i is the same as that held by the "pointer to float" (float *)(&i). However, the object pointed to by this last pointer is a float whose bits are the same as those of the object i, which is an int.
Now, the main point in this discussion is that the bit-representation of objects in memory is very different for integers and for floats.
A positive integer is represented explicitly, as its binary mathematical expression dictates.
Floating point numbers, however, have a different representation, consisting of a sign, an exponent, and a mantissa.
So, the bits of an object, when interpreted as an integer, have one meaning, but the same bits, interpreted as a float, have another very different meaning.
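A small sketch of that last point (assuming a 32-bit int and IEEE 754 floats; memcpy is used to inspect the bytes without the undefined behavior of the pointer cast):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int          i = 37;
    float        f = 37.0f;
    unsigned int bits_i, bits_f;

    memcpy(&bits_i, &i, sizeof bits_i); /* raw bytes of the int   */
    memcpy(&bits_f, &f, sizeof bits_f); /* raw bytes of the float */

    printf("int 37     bits: 0x%08x\n", bits_i); /* 0x00000025 */
    printf("float 37.0 bits: 0x%08x\n", bits_f); /* 0x42140000 */
    return 0;
}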
The better question is: why does it EVER work? You see, when you do

typedef int T;    /* replace with whatever */
typedef double J; /* replace with whatever */

T s = 45;
J d = *(J*)(&s);

you are basically telling the compiler: take the T* address of s, reinterpret what it points to as a J, and then fetch that value. No conversion of the value (no changing of the bytes) actually happens. Sometimes, by luck, the result is the same (the all-zero pattern, for instance, means 0 both as an int and as a float), but usually this yields garbage, or worse: if the sizes are not the same (like reading a double through the address of a short), you can read past the object into unallocated memory (undefined behavior, sometimes even heap corruption!).
I'm using a function (borrowing code from http://www.exploringbinary.com/converting-floating-point-numbers-to-binary-strings-in-c/) to convert a float into binary, stored in a char array. I need to be able to perform bitwise operations on the result, though, so I've been trying to find a way to take the string and convert it to an integer so that I can shift the bits around as needed. I've tried atoi(), but that seems to return -1.
Thus far, I have:
char binStringRaw[FP2BIN_STRING_MAX];
float myfloat;
printf("Enter a floating point number: ");
scanf("%f", &myfloat);
int castedFloat = (*((int*)&myfloat));
fp2bin(castedFloat, binStringRaw);
Where the input is "12.125", the output of binStringRaw is "10000010100001000000000000000000". However, attempting to perform a bitwise operation on this gives an error: "Invalid operands to binary expression ('char[1077]' and 'int')".
P.S. - I apologize if this is a simple question or if there are some general problems with my code. I'm very new to C programming coming from Python.
"castedFloat already is the binary representation of the float, as the cast-operation tells it to interpret the bits of myfloat as bits of an integer instead of a float. "
EDIT: Thanks to Eric Postpischil:
Eric Postpischil in Comments:
"the above is not guaranteed by the C standard. Dereferencing a
converted pointer is not fully specified by the standard. A proper way
to do this is to use a union: int x = (union { float f; int i; }) {
myfloat } .i;. (And one must still ensure that int and float are the
same size in the C implementation being used.)"
Bitwise operations are only defined for integer-type values, such as char, int, long, ...; that's why they fail when used on the string (char array).

By the way, atoi(char*) returns the integer value of a decimal number written inside that string, e.g. atoi("12") returns an integer with value 12. It parses decimal digits, not binary ones, and a 32-digit decimal number would overflow an int anyway, so it cannot help here.
If you want to convert the binary representation stored in a string, you have to set the integer's bits one by one, corresponding to the chars. A function to do this could look like this:

long intFromBinString(char* str){
    long ret = 0;           /* initialize return value with zero */
    int i = 0;              /* current position in the string */
    while(str[i] != 0){     /* in C, strings are NUL-terminated, so end of string is 0 */
        ret <<= 1;          /* another bit in the string, so shift the result left */
        if(str[i] == '1')   /* if the new bit is 1, set it with a bitwise OR */
            ret |= 0x01;
        i++;                /* advance position in the string */
    }
    return ret;             /* return result */
}
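Hypothetical usage with the question's 32-character string (assuming long is wider than 32 bits, so the leading 1 bit fits without overflow):

long bits = intFromBinString(binStringRaw); /* string of '0'/'1' -> integer */
bits >>= 1;                                 /* ordinary bitwise operations now work */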
The function fp2bin expects a double as its parameter. If you call it with castedFloat, that value (now interpreted as an integer) will be implicitly converted to double and passed on.
I assume you want to get the binary representation of the float, do some bitwise magic on it, and then pass it on.
In order to do that, you have to cast it back to float, the reverse of what you did before:

int castedFloat = (*((int*)&myfloat));
{/*** some bitwise magic ***/}
float backcastedFloat = (*(float*)&castedFloat);
fp2bin(backcastedFloat, binStringRaw);
EDIT (thanks again, Eric):

union bothType { float f; int i; } both;
both.f = myfloat;
{/*** some bitwise magic on both.i ***/}
fp2bin(both.f, binStringRaw);

should work.
This question already has answers here:
John Carmack’s Unusual Fast Inverse Square Root (Quake III)
Closed 12 years ago.
I came across this piece of code on a blog recently; it is from the Quake III engine. It is meant to calculate the inverse square root quickly using the Newton-Raphson method.
float InvSqrt (float x){
    float xhalf = 0.5f*x;
    int i = *(int*)&x;
    i = 0x5f3759df - (i>>1);
    x = *(float*)&i;
    x = x*(1.5f - xhalf*x*x);
    return x;
}
What is the reason for doing int i = *(int*)&x;? Doing int i = (int) x; instead gives a completely different result.
int i = *(int*)&x; doesn't convert x to an int -- what it does is take the actual bits of the float x and reinterpret them as an int, which usually yields a completely different 4-byte value than you'd expect.
For reference, doing this is a really bad idea unless you know exactly how float values are represented in memory.
int i = *(int*)&x; says "take the four bytes which make up the float value x, and treat them as if they were an int." float values and int values are stored using completely different methods (e.g. int 4 and float 4.0 have completely different bit patterns).
The number that ends up in i is the binary value of the IEEE floating point representation of the number in x. The link explains what that looks like. This is not a common C idiom, it's a clever trick from before the SSE instructions got added to commercially available x86 processors.
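A small sketch of the difference (assuming a 32-bit int and IEEE 754 floats; memcpy stands in for the pointer cast to keep the behavior well defined):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float x = 2.0f;
    unsigned int bits;

    int converted = (int)x;         /* value conversion: 2 */
    memcpy(&bits, &x, sizeof bits); /* bit reinterpretation */

    printf("(int)x    = %d\n", converted); /* 2 */
    printf("bits of x = 0x%08x\n", bits);  /* 0x40000000 */
    return 0;
}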