I'm trying to interface a board with a Raspberry Pi.
I have to read/write values to the board via Modbus, but I can't get floating point values to come out the way the board encodes them.
I'm using C, and the Eclipse debug perspective to inspect the variables' values directly.
The board sends me 0x46C35000, which should be 25000 decimal, but Eclipse shows me 1.18720512e+009...
When I try it on this website http://www.binaryconvert.com/convert_float.html?hexadecimal=46C35000 I obtain 25,000.
What's the problem?
For testing purposes I'm using this:
#include <stdio.h>

int main(){
    while(1){ // To view the value easily in the debug perspective
        float test = 0x46C35000;
        printf("%f\n", test);
    }
    return 0;
}
Thanks!
When you do this:
float test = 0x46C35000;
You're setting the value to 0x46C35000 (decimal 1187205120), not the representation.
You can do what you want as follows:
union {
    uint32_t i;
    float f;
} u = { 0x46C35000 };

printf("f=%f\n", u.f);
This safely allows an unsigned 32-bit value to be interpreted as a float.
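If it helps, here is a fuller sketch of that idea for your Modbus case. It assumes the two 16-bit registers have already been combined into a single 32-bit word in the right order (byte/word order is something you'll have to check against your board's documentation), and the 12345.678f value is just a made-up example. The same union works in both directions, so you can use it when writing a float back to the board:
#include <stdio.h>
#include <stdint.h>

typedef union {
    uint32_t i;
    float    f;
} FloatWord;

int main(void) {
    /* reading: raw 32-bit word received from the board -> float */
    FloatWord rx = { .i = 0x46C35000 };
    printf("received: %f\n", rx.f);      /* 25000.000000 */

    /* writing: float -> raw 32-bit word to send back */
    FloatWord tx;
    tx.f = 12345.678f;
    printf("to send : 0x%08X\n", tx.i);
    return 0;
}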
You’re confusing logical value and internal representation. Your assignment sets the value, which is thereafter 0x46C35000, i.e. 1187205120.
To set the internal representation of the floating point number you need to make a few assumptions about how floating point numbers are represented in memory. The assumptions on the website you’re using (IEEE 754, 32 bit) are fair on a general purpose computer though.
To change the internal representation, use memcpy to copy the raw bytes into the float:
// Ensure our assumptions are correct:
#if !defined(__STDC_IEC_559__) && !defined(__GCC_IEC_559)
# error Floating points might not be in IEEE 754/IEC 559 format!
#endif
_Static_assert(sizeof(float) == sizeof(uint32_t), "Floats are not 32 bit numbers");
float f;
uint32_t rep = 0x46C35000;
memcpy(&f, &rep, sizeof f);
printf("%f\n", f);
Output: 25000.000000.
(This requires the header stdint.h for uint32_t, string.h for memcpy, and stdio.h for printf.)
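For the write side of your Modbus exchange, the same memcpy works in reverse; a minimal sketch, under the same IEEE 754 assumption as above:
float value = 25000.0f;
uint32_t raw;

memcpy(&raw, &value, sizeof raw);  /* copy the float's bytes into the integer */
printf("0x%08X\n", raw);           /* prints 0x46C35000 on an IEEE 754 system */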
Assigning the constant 0x46C35000 to a float implicitly converts the int value 1187205120 into a float, rather than overlaying the bits directly as an IEEE 754 representation.
I normally use a union for this sort of thing:
#include <stdio.h>
#include <stdint.h>

typedef union
{
    float f;
    uint32_t i;
} FU;

int main()
{
    FU foo;

    foo.f = 25000.0;
    printf("%.8X\n", foo.i);

    foo.i = 0x46C35000;
    printf("%f\n", foo.f);

    return 0;
}
Output:
46C35000
25000.000000
You can see how data is represented in memory when you access it through its address:
#include <stdio.h>

int main()
{
    float f25000;       // totally unused, has exactly the same size as `int'
    int i = 0x46C35000; // put the binary value 0x46C35000 into `int' (4-byte representation of an integer)
    float *faddr;       // pointer (address) to float

    faddr = (float*)&i;       // put the address of `i' into `faddr' so `faddr' points to `i' in memory
    printf("f=%f\n", *faddr); // print the value pointed to by `faddr'
    return 0;
}
and the result:
$ gcc -o f25000 f25000.c; ./f25000
f=25000.000000
What it does is:
put 0x46C35000 into int i
copy the address of i into faddr, which is then an address pointing at that data in memory, now treated as float type
print the value pointed to by faddr, treating it as a float
you get your 25000.0.
#include <stdio.h>

int main(void) {
    int m = 2, a = 5, b = 4;
    float c = 3.0, d = 4.0;

    printf("%.2f,%.2f\n", (a/b)*m, (a/d)*m);
    printf("%.2f,%.2f\n", (a/d)*m, (a/b)*m);
    return 0;
}
The result is:
2.50,0.00
2.50,-5487459522906928958771870404376799406808566324353377030104786519743796498661129086808599726405487030183023928761546165866809436788166721199470577627133198744209879004896284033606071946689658593354711574682628407789000148729336462084532657713450945423953627239707603534923756075420253339949731915621203968.00
I want to know what causes this difference.
However, if I change int to float, the answer is what I expect.
The result is:
2.50,2.50
2.50,2.50
You are using the wrong format specifiers; try this:
#include <stdio.h>

int main(void)
{
    int m = 2, a = 5, b = 4;
    float fm = 2, fa = 5, fb = 4;
    float c = 3.0, d = 4.0;

    // First expression in this printf is int and second is float due to d
    printf("%d , %.2f\n\n", (a/b)*m, (a/d)*m);

    // Second expression in this printf is int and first is float due to d
    printf("%.2f , %d\n\n", (a/d)*m, (a/b)*m);

    printf("%.2f , %.2f\n\n", (fa/b)*fm, (fa/d)*fm);
    printf("%.2f , %.2f\n\n", (fa/d)*fm, (fa/b)*fm);
    return 0;
}
Output:
2 , 2.50
2.50 , 2
2.50 , 2.50
2.50 , 2.50
Section 7.19.6.1 p9 of the C99 standard says:
If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
Note that a/b is an int if both operands are ints and is a float if at least one is a float, and similarly for the other arithmetic operators. Thus for a/b, if both are ints then 5/4 = 1; if at least one is a float, then 5/4.0 = 5.0/4.0 = 1.25, because the compiler automatically converts an int into a float before any arithmetic with another float. So your results were expected to be different.
But in your case you seem to use the %.2f format even when you output ints. So printf takes the four bytes that hold your int and tries to decode those four bytes as if they encoded some float. Floating point numbers are encoded very differently in memory from ints -- it's like taking a Hungarian text and trying to interpret it as if it were written in English, even if the letters are the same -- the resulting "interpretation" will be just garbage.
What you need to do: any int should be output with %d and any float with %f or similar formats.
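As a small illustration, here is a sketch of the corrected calls from the question, with the cast placed before the division so that the division itself happens in floating point:
#include <stdio.h>

int main(void) {
    int m = 2, a = 5, b = 4;
    float d = 4.0f;

    printf("%d\n", (a / b) * m);           /* int division: 5/4 == 1, prints 2 */
    printf("%.2f\n", (a / d) * m);         /* float division because d is float: prints 2.50 */
    printf("%.2f\n", ((float)a / b) * m);  /* cast forces float division: prints 2.50 */
    return 0;
}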
If you want the result to be a float, cast the operands:
#include <stdio.h>

int main(void) {
    int m = 2, a = 5, b = 4;
    float c = 3.0, d = 4.0;

    printf("%.2f,%.2f\n", ((float)a/b)*m, (a/d)*m);
    printf("%.2f,%.2f\n", (a/d)*m, ((float)a/b)*m);
    return 0;
}
Hope this helps..:)
You are trying to print integer values with a floating point format specifier. The calls should look like this:
printf("%d,%.2f\n",(a/b)*m,(a/d)*m);
printf("%.2f , %d\n\n",(a/d)*m,(a/b)*m);
In order to print integer values you should use %d; using the wrong format specifier leads to undefined behavior.
I was trying out a few examples on the do's and don'ts of typecasting. I could not understand why the following code snippets fail to output the correct result.
/* int to float */
#include <stdio.h>

int main(){
    int i = 37;
    float f = *(float*)&i;
    printf("\n %f \n", f);
    return 0;
}
This prints 0.000000
/* float to short */
#include <stdio.h>

int main(){
    float f = 7.0;
    short s = *(float*)&f;
    printf("\n s: %d \n", s);
    return 0;
}
This prints 7
/* From double to char */
#include <stdio.h>

int main(){
    double d = 3.14;
    char ch = *(char*)&d;
    printf("\n ch : %c \n", ch);
    return 0;
}
This prints garbage
/* From short to double */
#include <stdio.h>

int main(){
    short s = 45;
    double d = *(double*)&s;
    printf("\n d : %f \n", d);
    return 0;
}
This prints 0.000000
Why does the cast from float to short give the correct result while all the other conversions give wrong results when the type is cast explicitly?
I also couldn't clearly understand why casting through (float*) behaves differently from a plain (float) cast:
int i = 10;
float f = (float) i; // gives the correct op as : 10.000
But,
int i = 10;
float f = *(float*)&i; // gives a 0.0000
What is the difference between the above two type casts?
Why can't we use:
float f = (float**)&i;
float f = *(float*)&i;
In this example:
char ch = *(char*)&d;
You are not casting from double to a char. You are casting from a double* to a char*; that is, you are casting from a double pointer to a char pointer.
C will convert floating point types to integer types when casting the values, but since you are casting pointers to those values instead, there is no conversion done. You get garbage because floating point numbers are stored very differently from fixed point numbers.
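Here is a small sketch of that difference using the double-to-char case (the values are hypothetical; the second result depends on how your platform lays out a double in memory):
#include <stdio.h>

int main(void) {
    double d = 65.9;
    char by_value   = (char)d;      /* value conversion: 65.9 is truncated to 65 ('A' on ASCII systems) */
    char by_pointer = *(char*)&d;   /* reinterpretation: just the first byte of d's object representation */

    printf("by value: %c, by pointer: %d\n", by_value, by_pointer);
    return 0;
}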
Read about how floating point numbers are represented in memory; it's not what you're expecting. The cast through (float *) in your first snippet reinterprets the int's bits as a float: for a small value like 37, the bits that land in the float's exponent field are all zero, so the result is a tiny denormal value that prints as 0.000000.
If you need to convert an int to a float, the conversion is straightforward, because of the implicit conversion rules of C.
So, it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address held by the "pointer to int" &i is the same as that held by the "pointer to float" (float *)(&i). However, the object pointed to by this last pointer is a float whose bits are the same as those of the object i, which is an int.
Now, the main point in this discussion is that the bit-representation of objects in memory is very different for integers and for floats.
A positive integer is represented in explicit form, as its binary mathematical expression dictates.
However, floating point numbers have a different representation, consisting of a sign, a mantissa and an exponent.
So, the bits of an object, when interpreted as an integer, have one meaning, but the same bits, interpreted as a float, have another very different meaning.
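To make that concrete, here is a small sketch that prints the bit pattern of the int 37 next to the bit pattern of the float 37.0 (assuming 32-bit ints and IEEE 754 floats):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    int i = 37;
    float f = 37.0f;
    uint32_t bits_i, bits_f;

    memcpy(&bits_i, &i, sizeof bits_i);   /* raw bits of the int */
    memcpy(&bits_f, &f, sizeof bits_f);   /* raw bits of the float */

    printf("int 37     : 0x%08X\n", bits_i);   /* 0x00000025 */
    printf("float 37.0 : 0x%08X\n", bits_f);   /* 0x42140000 */
    return 0;
}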
The better question is, why does it EVER work. You see, when you do
typedef int T;//replace with whatever
typedef double J;//replace with whatever
T s = 45;
J d = *(J*)(&s);
You are basically telling the compiler: take the address of s as a T*, reinterpret what it points to as a J, and then read that value. No conversion of the value (no changing of the bytes) actually happens. Sometimes, by luck, the result is the same (a value of 0, for example, has an all-zero representation in both types), but often it will be garbage, or worse: if the sizes are not the same (like reading a double through the address of a short or char) you can read beyond the allocated object (possible memory corruption!).
The output of the following c program is: 0.000000
Is there a logic behind the output or is the answer compiler dependent or I am just getting a garbage value?
#include <stdio.h>

int main()
{
    int x = 10;
    printf("%f", x);
    return 0;
}
PS: I know that trying to print an integer value using %f is stupid. I am just asking this from a theoretical point of view.
From the latest C11 draft — §7.16 Variable arguments <stdarg.h>:
§7.16.1.1/2
...if type is not compatible with the type of the actual next argument
(as promoted according to the default argument promotions), the behavior
is undefined, except for the following cases:
— one type is a signed integer type, the other type is the corresponding
unsigned integer type, and the value is representable in both types;
— one type is pointer to void and the other is a pointer to a character type.
The most important thing to remember is that, as chris points out, the behavior is undefined. If this were in a real program, the only sensible thing to do would be to fix the code.
On the other hand, looking at the behavior of code whose behavior is not defined by the language standard can be instructive (as long as you're careful not to generalize the behavior too much).
printf's "%f" format expects an argument of type double, and prints it in decimal form with no exponent. Very small values will be printed as 0.000000.
When you do this:
int x=10;
printf("%f", x);
we can explain the visible behavior given a few assumptions about the platform you're on:
int is 4 bytes
double is 8 bytes
int and double arguments are passed to printf using the same mechanism, probably on the stack
So the call will (plausibly) push the int value 10 onto the stack as a 4-byte quantity, and printf will grab 8 bytes of data off the stack and treat it as the representation of a double. 4 bytes will be the representation of 10 (in hex, 0x0000000a); the other 4 bytes will be garbage, quite likely zero. The garbage could be either the high-order or low-order 4 bytes of the 8-byte quantity. (Or anything else; remember that the behavior is undefined.)
Here's a demo program I just threw together. Rather than abusing printf, it copies the representation of an int object into a double object using memcpy().
#include <stdio.h>
#include <string.h>

void print_hex(char *name, void *addr, size_t size) {
    unsigned char *buf = addr;
    printf("%s = ", name);
    for (size_t i = 0; i < size; i ++) {
        printf("%02x", buf[i]);
    }
    putchar('\n');
}

int main(void) {
    int i = 10;
    double x = 0.0;
    print_hex("i (set to 10)", &i, sizeof i);
    print_hex("x (set to 0.0)", &x, sizeof x);
    memcpy(&x, &i, sizeof (int));
    print_hex("x (copied from i)", &x, sizeof x);
    printf("x (%%f format) = %f\n", x);
    printf("x (%%g format) = %g\n", x);
    return 0;
}
The output on my x86 system is:
i (set to 10) = 0a000000
x (set to 0.0) = 0000000000000000
x (copied from i) = 0a00000000000000
x (%f format) = 0.000000
x (%g format) = 4.94066e-323
As you can see, the value of the double is very small (you can consult a reference on the IEEE floating-point format for the details), close enough to zero that "%f" prints it as 0.000000.
Let me emphasize once again that the behavior is undefined, which means specifically that the language standard "imposes no requirements" on the program's behavior. Variations in byte order, in floating-point representation, and in argument-passing conventions can dramatically change the results. Even compiler optimization can affect it; compilers are permitted to assume that a program's behavior is well defined, and to perform transformations based on that assumption.
So please feel free to ignore everything I've written here (other than the first and last paragraphs).
Because an integer 10 in binary looks like this:
00000000 00000000 00000000 00001010
All printf's %f does is take the bytes it finds where it expects a double and interpret them as an IEEE 754 floating point number.
There are three parts to such a number (from MSB to LSB):
The sign: 1 bit
The exponent: 11 bits for a double (8 for a 32-bit float)
The mantissa: 52 bits for a double (23 for a 32-bit float)
Since the bits of the integer 10 end up in the low mantissa bits while the exponent bits are (most likely) zero, the result is a tiny subnormal number, much smaller than the default precision of printf's floating point format.
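For illustration, here is a toy decoder that splits a double's bit pattern into those three fields (assuming 64-bit IEEE 754 doubles; feeding it a pattern whose low bits are 10 shows why the value is so tiny):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t bits = 10;   /* imagine the int 10 sitting in the low bits of a double, upper bytes zero */

    uint64_t sign     = bits >> 63;
    uint64_t exponent = (bits >> 52) & 0x7FF;
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;

    printf("sign=%llu exponent=%llu mantissa=%llu\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (unsigned long long)mantissa);
    /* exponent 0 with a nonzero mantissa means a subnormal: about 10 * 2^-1074, i.e. 4.94e-323 */
    return 0;
}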
The result is not defined.
I am just asking this from a theoretical point of view.
To complete chris's excellent answer:
What happens in your printf is undefined, but it could be quite similar to the code below (it depends on the actual implementation of the varargs, IIRC).
Disclaimer: The following is more of an "as-if-it-worked-that-way" explanation of what could happen in an undefined behaviour case on one platform than a true/valid description of what always happens on all platforms.
Define "undefined" ?
Imagine the following code:
int main()
{
    int i = 10 ;
    void * pi = &i ;
    double * pf = (double *) pi ; /* oranges are apples ! */
    double f = *pf ;

    /* what is the value inside f ? */
    return 0;
}
Here, as your pointer to double (i.e. pf) points to an address hosting an integer value (i.e. i), what you'll get is undefined, and most probably garbage.
I want to see what's inside that memory !
If you really want to see what's possibly behind that garbage (when debugging on some platforms), try the following code, where we will use a union to simulate a piece of memory where we will write either double or int data:
#include <stdio.h>
#include <string.h>

typedef union
{
    char c[8] ;  /* char is expected to be 1 byte wide */
    double f ;   /* double is expected to be 8 bytes wide */
    int i ;      /* int is expected to be 4 bytes wide */
} MyUnion ;
The f and i field are used to set the value, and the c field is used to look at (or modify) the memory, byte by byte.
void printMyUnion(MyUnion * p)
{
    printf("[%i %i %i %i %i %i %i %i]\n",
           p->c[0], p->c[1], p->c[2], p->c[3], p->c[4], p->c[5], p->c[6], p->c[7]) ;
}
The function above will print the memory layout, byte by byte.
The function below will print the memory layout of different types of values:
int main()
{
    MyUnion myUnion ;

    /* this will zero all the fields in the union */
    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    printMyUnion(&myUnion) ; /* this should print only zeroes */
                             /* eg. [0 0 0 0 0 0 0 0] */

    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    myUnion.i = 10 ;
    printMyUnion(&myUnion) ; /* the representation of the int 10 in the union */
                             /* eg. [10 0 0 0 0 0 0 0] */

    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    myUnion.f = 10 ;
    printMyUnion(&myUnion) ; /* the representation of the double 10 in the union */
                             /* eg. [0 0 0 0 0 0 36 64] */

    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    myUnion.f = 3.1415 ;
    printMyUnion(&myUnion) ; /* the representation of the double 3.1415 in the union */
                             /* eg. [111 18 -125 -64 -54 33 9 64] */

    return 0 ;
}
Note: This code was tested on Visual C++ 2010.
It doesn't mean it will work that way (or at all) on your platform, but usually, you should get results similar to what happens above.
In the end, the garbage is just the raw data sitting in the memory you're looking at, seen as some other type.
As most types have different memory representations, looking at data as any type other than its original one is bound to give garbage (or not-so-garbage) results.
Your printf could well behave like that, and thus, try to interpret a raw piece of memory as a double when it was initially set as an int.
P.S.: Note that as the int and the double have different sizes in bytes, the garbage gets even more complicated, but it is mostly what I described above.
But I want to print an int as a double!
Seriously?
Helios proposed a solution.
int main()
{
    int x = 10;
    printf("%f", (double)(x));
    return 0;
}
Let's look at the pseudo code to see what's being fed to the printf:
/* printf("...", [[10 0 0 0]]) ; */
printf("%i",x);
/* printf("...", [[10 0 0 0 ?? ?? ?? ??]]) ; */
printf("%f",x);
/* printf("...", [[0 0 0 0 0 0 36 64]]) ; */
printf("%f",(double)(x));
The cast offers a different memory layout, effectively changing the integer "10" data into double "10.0" data.
Thus, when using "%i", it will expect something like [[?? ?? ?? ??]], and for the first printf, receive [[10 0 0 0]] and interpret it correctly as an integer.
When using "%f", it will expect something like [[?? ?? ?? ?? ?? ?? ?? ??]], and receive on the second printf something like [[10 0 0 0]], missing 4 bytes. So the 4 last bytes will be random data (probably the bytes "after" the [[10 0 0 0]], that is, something like [[10 0 0 0 ?? ?? ?? ??]]
In the last printf, the cast changed the type, and thus the memory representation into [[0 0 0 0 0 0 36 64]] and the printf will interpret it correctly as a double.
Essentially it's garbage. A small integer's bit pattern looks like a denormalized floating point number, far too small to show up with %f's default precision.
You could cast the int variable like this:
int i = 3;
printf("%f",(float)(i));
I have a question rather than a problem (which is maybe a memory question). I've written this simple program:
#include <stdio.h>
#include <stdlib.h>

int multi(int x, int y);

int main(){
    int x;
    int y;
    printf("Enter the first number x: \n");
    scanf("%d", &x);
    printf("Enter the second number y: \n");
    scanf("%d", &y);
    int z = multi(x, y);
    printf("The Result of the multiplication is : %d\n", z);
    printf("The Memory adresse of x is : %d\n", &x);
    printf("The Memory adresse of y is : %d\n", &y);
    printf("The Memory adresse of z is : %d\n", &z);
    getchar();
    return 0;
}

int multi(int x, int y){
    int c = x * y;
    printf("The Memory adresse of c is : %d\n", &c);
    return c;
}
As you can see (if you develop in C), this program inputs 2 int variables, then multiplies them with the multi function:
After getting the result, it displays the location of each variable in memory (c, x, y and z).
I've tested this simple example; these are the results (in my case):
The Memory adresse of c is : 2293556
The Result of the multiplication is : 12
The Memory adresse of x is : 2293620
The Memory adresse of y is : 2293616
The Memory adresse of z is : 2293612
As you can see, the three variables x, y, z that are declared in the main function have close memory addresses (22936xx), while the variable c that's declared in the multi function has a different address (22935xx).
Looking at the x, y and z variables, we can see that there's a difference of 4 bytes between each pair of variables (i.e. &x - &y = 4, &y - &z = 4).
My question is: why does the difference between every two variables equal 4?
x, y, and z are integer variables that will be created on the call stack (but see below). sizeof(int) is 4 bytes here, so that is how much space the compiler will allocate on the stack for each of those variables. The variables are adjacent to one another, so they are 4 bytes apart.
You can read about how memory is allocated for local variables by looking for information on calling conventions.
In some cases (where you do not use the address-of operator), the compiler may optimize local variables into registers.
In your situation the three variables were allocated in contiguous memory blocks. On x86 systems, int types are 32-bits wide, i.e. sizeof(int) == 4. So each variable is placed 4 bytes apart from the last.
The machine word on your system is 4 bytes, so for speed of access the compiler places each variable on a 4-byte boundary.
Local variables are allocated on the "stack". Often the compiler will put them in sequential order since there's really no reason not to. An integer in C is 4 bytes. Therefore, it makes sense that y comes in 4 bytes after x, and z comes in 4 bytes after y.
It appears that you are running on a 32-bit machine. The size of each int is 32 bits, and with 8 bits in a byte, the size of an int is 4 bytes. Each memory address corresponds to one byte, so there is a difference of 4 between the address of each local variable.
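If you want to check this yourself, here is a small sketch with well-defined output (using %p for addresses, since printing pointers with %d is not well defined; the exact addresses and their ordering are up to the compiler):
#include <stdio.h>

int main(void) {
    int x = 3, y = 4, z = x * y;

    printf("sizeof(int) = %zu\n", sizeof(int));
    printf("&x = %p\n", (void*)&x);
    printf("&y = %p\n", (void*)&y);
    printf("&z = %p\n", (void*)&z);
    return 0;
}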