I'm trying to interface a board with a Raspberry Pi. I have to read/write values to the board via Modbus, but I can't handle floating-point values the way the board encodes them.
I'm using C, and Eclipse debug perspective to see the variable's value directly.
The board sends me 0x46C35000, which should be 25000 decimal, but Eclipse shows me 1.18720512e+009...
When I try on this website http://www.binaryconvert.com/convert_float.html?hexadecimal=46C35000 I obtain 25,000.
What's the problem?
For testing purposes I'm using this:
#include <stdio.h>

int main(){
    while(1){ // To view the value easily in the debug perspective
        float test = 0x46C35000;
        printf("%f\n", test);
    }
    return 0;
}
Thanks!
When you do this:
float test = 0x46C35000;
You're setting the value to 0x46C35000 (decimal 1187205120), not the representation.
You can do what you want as follows:
union {
    uint32_t i;
    float f;
} u = { 0x46C35000 };
printf("f=%f\n", u.f);
This safely allows an unsigned 32-bit value to be interpreted as a float.
You’re confusing the logical value with the internal representation. Your assignment sets the value, which is thereafter 0x46C35000, i.e. 1187205120.
To set the internal representation of the floating point number you need to make a few assumptions about how floating point numbers are represented in memory. The assumptions on the website you’re using (IEEE 754, 32 bit) are fair on a general purpose computer though.
To change the internal representation, use memcpy to copy the raw bytes into the float:
// Ensure our assumptions are correct:
#if !defined(__STDC_IEC_559__) && !defined(__GCC_IEC_559)
# error Floating points might not be in IEEE 754/IEC 559 format!
#endif
_Static_assert(sizeof(float) == sizeof(uint32_t), "Floats are not 32 bit numbers");
float f;
uint32_t rep = 0x46C35000;
memcpy(&f, &rep, sizeof f);
printf("%f\n", f);
Output: 25000.000000.
(This requires the header stdint.h for uint32_t, and string.h for memcpy.)
The constant 0x46C35000 being assigned to a float will implicitly convert the int value 1187205120 into a float, rather than directly overlay the bits into the IEEE-754 floating point format.
I normally use a union for this sort of thing:
#include <stdio.h>
#include <stdint.h>

typedef union
{
    float f;
    uint32_t i;
} FU;

int main()
{
    FU foo;
    foo.f = 25000.0;
    printf("%.8X\n", foo.i);
    foo.i = 0x46C35000;
    printf("%f\n", foo.f);
    return 0;
}
Output:
46C35000
25000.000000
You can understand how data are represented in memory when you access them through their address:
#include <stdio.h>

int main()
{
    float f25000;       // totally unused, has exactly the same size as `int'
    int i = 0x46C35000; // put binary value 0x46C35000 into `int' (4-byte integer representation)
    float *faddr;       // pointer (address) to float
    faddr = (float*)&i; // put address of `i' into `faddr' so `faddr' points to `i' in memory
    printf("f=%f\n", *faddr); // print value pointed to by `faddr'
    return 0;
}
and the result:
$ gcc -of25000 f25000.c; ./f25000
f=25000.000000
What it does is:
put 0x46C35000 into int i
copy the address of i into faddr, which is an address pointing at data in memory, here treated as float type
print the value pointed to by faddr, treating it as a float
and you get your 25000.0.
Related
I have to encode the electron charge, which is -1.602×10⁻¹⁹ C, using IEEE-754. I did it manually and verified my result using this site. So I know my representation is good. My problem is that, if I try to build a C program showing my number in scientific notation, I get the wrong number.
Here is my code:
#include <stdio.h>
int main(int argc, char const *argv[])
{
float q = 0xa03d217b;
printf("q = %e", q);
return 0;
}
Here is the result:
$ ./test.exe
q = 2.688361e+09
My question: Is there another representation that my CPU might be using internally for floating point other than IEEE-754?
The line float q = 0xa03d217b; converts the integer (hex) literal into a float value representing that number (or an approximation thereof); thus, the value assigned to your q will be the (decimal) value 2,688,360,827 (which is what 0xa03d217b equates to), as you have noted.
If you must initialize a float variable with its internal IEEE-754 (HEX) representation, then your best option is to use type punning via the members of a union (legal in C but not in C++):
#include <stdio.h>
typedef union {
float f;
unsigned int h;
} hexfloat;
int main()
{
hexfloat hf;
hf.h = 0xa03d217b;
float q = hf.f;
printf("%lg\n", q);
return 0;
}
There are also some 'quick tricks' using pointer casting, like:
unsigned iee = 0xa03d217b;
float q = *(float*)(&iee);
But, be aware, there are numerous issues with such approaches, like potential endianness conflicts and the fact that you're breaking strict aliasing requirements.
Hence, q doesn't contain the value you expect. The hex value is converted to a float with (approximately) the same numerical value, not with the same bit representation.
When compiled with g++ and the option -Wall, there is a warning:
warning: implicit conversion from 'unsigned int' to 'float' changes value from 2688360827 to 2688360704 [-Wimplicit-const-int-float-conversion]
Can be tested on Compiler Explorer.
This warning is apparently not supported by gcc. Instead, you can use the option -Wfloat-conversion (which is not part of -Wall -Wextra):
warning: conversion from 'unsigned int' to 'float' changes value from '2688360827' to '2.6883607e+9f' [-Wfloat-conversion]
Again on Compiler Explorer.
My problem is that, if I try to build a C program showing my number in scientific notation, I get the wrong number.
What if your target machine might or might not use IEEE754 encoding? Copying the bit pattern may fail.
If starting with a binary32 constant 0xa03d217b, code could examine it and then build up the best float available for that implementation.
#include <math.h>
#include <stdbool.h>
#include <stdint.h>

#define BINARY32_MASK_SIGN   0x80000000
#define BINARY32_MASK_EXPO   0x7F800000
#define BINARY32_MASK_SNCD   0x007FFFFF
#define BINARY32_IMPLIED_BIT 0x800000
#define BINARY32_SHIFT_EXPO  23
float binary32_to_float(uint32_t x) {
// Break up into 3 parts
bool sign = x & BINARY32_MASK_SIGN;
int biased_expo = (x & BINARY32_MASK_EXPO) >> BINARY32_SHIFT_EXPO;
int32_t significand = x & BINARY32_MASK_SNCD;
float y;
if (biased_expo == 0xFF) {
y = significand ? NAN : INFINITY; // For simplicity, NaN payload not copied
} else {
int expo;
if (biased_expo > 0) {
significand |= BINARY32_IMPLIED_BIT;
expo = biased_expo - 127;
} else {
expo = -126; // subnormal: fixed exponent, no implied bit
}
y = ldexpf((float)significand, expo - BINARY32_SHIFT_EXPO);
}
if (sign) {
y = -y;
}
return y;
}
Sample usage and output
#include <float.h>
#include <stdio.h>
int main() {
float e = -1.602e-19;
printf("%.*e\n", FLT_DECIMAL_DIG, e);
uint32_t e_as_binary32 = 0xa03d217b;
printf("%.*e\n", FLT_DECIMAL_DIG, binary32_to_float(e_as_binary32));
}
-1.602000046e-19
-1.602000046e-19
Note that C supports hexadecimal floating-point literals. See https://en.cppreference.com/w/cpp/language/floating_literal for details. This notation is useful for writing the number in a portable way, without any concern for rounding issues, as there would be if you wrote it in regular decimal/scientific notation. Here's the number you're interested in:
#include <stdio.h>
int main(void) {
float f = -0x1.7a42f6p-63;
printf("%e\n", f);
return 0;
}
When I run this program, I get:
$ make a
cc a.c -o a
$ ./a
-1.602000e-19
So long as your compiler supports this notation, you need not worry about how the underlying machine represents floats, so long as this particular number fits into its float representation.
I cannot figure out how to convert the value of a referenced float pointer when it is referenced from an integer casted into a float pointer. I'm sorry if I'm wording this incorrectly. Here is an example of what I mean:
#include <stdio.h>
main() {
int i;
float *f;
i = 1092616192;
f = (float *)&i;
printf("i is %d and f is %f\n", i, *f);
}
the output for f is 10. How did I get that result?
Normally, the value of 1092616192 in hexadecimal is 0x41200000.
In floating-point, that will give you:
sign = positive (0b)
exponent = 130, 2^3 (10000010b)
significand = 2097152, 1.25 (01000000000000000000000b)
2^3*1.25
= 8 *1.25
= 10
To explain the exponent part: it uses an offset (biased) encoding, so you have to subtract 127 from it to get the real value. 130 - 127 = 3. And since this is a binary encoding, we use 2 as the base: 2^3 = 8.
To explain the significand part: you start with an invisible 'whole' value of 1. The uppermost (leftmost) bit is half of that, 0.5. The next bit is half of 0.5, 0.25. Because only the 0.25 bit and the implicit '1' bit are set, the significand represents 1 + 0.25 = 1.25.
What you are trying to do is called type-punning. It should be done via a union, or using memcpy() and is only meaningful on an architecture where sizeof(int) == sizeof(float) without padding bits. The result is highly dependent on the architecture: byte ordering and floating point representation will affect the reinterpreted value. The presence of padding bits would invoke undefined behavior as the representation of float 15.0 could be a trap value for type int.
Here is how you get the number corresponding to 15.0:
#include <stdio.h>
int main(void) {
union {
float f;
int i;
unsigned int u;
} u;
u.f = 15;
printf("re-interpreting the bits of float %.1f as int gives %d (%#x in hex)\n",
u.f, u.i, u.u);
return 0;
}
output on an Intel PC:
re-interpreting the bits of float 15.0 as int gives 1097859072 (0x41700000 in hex)
You are trying to predict the consequence of undefined behavior - it depends on a lot of random things, and on the hardware and OS you are using.
Basically, what you are doing is throwing a glass against the wall and getting a certain shard. Now you are asking how to get a differently shaped shard. Well, you need to throw the glass differently against the wall...
I saw the following piece of code in an opensource AAC decoder,
static void flt_round(float32_t *pf)
{
int32_t flg;
uint32_t tmp, tmp1, tmp2;
tmp = *(uint32_t*)pf;
flg = tmp & (uint32_t)0x00008000;
tmp &= (uint32_t)0xffff0000;
tmp1 = tmp;
/* round 1/2 lsb toward infinity */
if (flg)
{
tmp &= (uint32_t)0xff800000; /* extract exponent and sign */
tmp |= (uint32_t)0x00010000; /* insert 1 lsb */
tmp2 = tmp; /* add 1 lsb and elided one */
tmp &= (uint32_t)0xff800000; /* extract exponent and sign */
*pf = *(float32_t*)&tmp1 + *(float32_t*)&tmp2 - *(float32_t*)&tmp;
} else {
*pf = *(float32_t*)&tmp;
}
}
In that the line,
*pf = *(float32_t*)&tmp;
is same as,
*pf = (float32_t)tmp;
Isn't it?
Or is there a difference? Maybe in performance?
Thank you.
No, they're completely different. Say the value of tmp is 1. Their code will give *pf the value of whatever floating point number has the same binary representation as the integer 1. Your code would give it the floating point value 1.0!
This code is editing the value of a float knowing it is formatted using the standard IEEE 754 floating representation.
*(float32_t*)&tmp;
means reinterpret the address of temp as being a pointer on a 32 bit float, extract the value pointed.
(float32_t)tmp;
means cast the integer to float 32. Which means 32.1111f may well produce 32.
Very different.
The first causes the bit pattern of tmp to be reinterpreted as a float.
The second causes the numerical value of tmp to be converted to float (within the accuracy that it can be represented including rounding).
Try this (float32_t is assumed to be a typedef for a 32-bit float, as in the decoder's headers):
#include <stdio.h>
#include <stdint.h>

typedef float float32_t; // assumption: matches the decoder's typedef

int main(void) {
    int32_t n = 1078530011;
    float32_t f;
    f = *(float32_t*)(&n);
    printf("reinterpret the bit pattern of %d as float - f==%f\n", n, f);
    f = (float32_t)n;
    printf("cast the numerical value of %d as float - f==%f\n", n, f);
    return 0;
}
Example output:
reinterpret the bit pattern of 1078530011 as float - f==3.141593
cast the numerical value of 1078530011 as float - f==1078530048.000000
It's like thinking that
const char* str="3568";
int a=*(int*)str;
int b=atoi(str);
Will assign a and b the same values.
First to answer the question, my_float = (float)my_int safely converts the integer to a float according to the rules of the standard (6.3.1.4).
When a value of integer type is converted to a real floating type, if
the value being converted can be represented exactly in the new type,
it is unchanged. If the value being converted is in the range of
values that can be represented but cannot be represented exactly, the
result is either the nearest higher or nearest lower representable
value, chosen in an implementation-defined manner. If the value being
converted is outside the range of values that can be represented, the
behavior is undefined.
my_float = *(float*)&my_int on the other hand, is a dirty trick, telling the program that the binary contents of the integer should be treated as if they were a float variable, with no concerns at all.
However, the person who wrote the dirty trick was probably not aware of it leading to undefined behavior for another reason: it violates the strict aliasing rule.
To fix this bug, you either have to tell your compiler to behave in a non-standard, non-portable manner (for example gcc -fno-strict-aliasing), which I don't recommend.
Or preferably, you rewrite the code so that it doesn't rely on undefined behavior. Best way is to use unions, for which strict aliasing doesn't apply, in the following manner:
typedef union
{
uint32_t as_int;
float32_t as_float;
} converter_t;
uint32_t value1, value2, value3; // do something with these variables
*pf = (converter_t){value1}.as_float +
(converter_t){value2}.as_float -
(converter_t){value3}.as_float;
Also it is good practice to add the following sanity check:
static_assert(sizeof(converter_t) == sizeof(uint32_t),
"Unexpected padding or wrong type sizes!");
I was trying out few examples on do's and dont's of typecasting. I could not understand why the following code snippets failed to output the correct result.
/* int to float */
#include<stdio.h>
int main(){
int i = 37;
float f = *(float*)&i;
printf("\n %f \n",f);
return 0;
}
This prints 0.000000
/* float to short */
#include<stdio.h>
int main(){
float f = 7.0;
short s = *(float*)&f;
printf("\n s: %d \n",s);
return 0;
}
This prints 7
/* From double to char */
#include<stdio.h>
int main(){
double d = 3.14;
char ch = *(char*)&d;
printf("\n ch : %c \n",ch);
return 0;
}
This prints garbage
/* From short to double */
#include<stdio.h>
int main(){
short s = 45;
double d = *(double*)&s;
printf("\n d : %f \n",d);
return 0;
}
This prints 0.000000
Why does the cast from float to int give the correct result and all the other conversions give wrong results when type is cast explicitly?
I couldn't clearly understand why this typecasting of (float*) is needed instead of float
int i = 10;
float f = (float) i; // gives the correct op as : 10.000
But,
int i = 10;
float f = *(float*)&i; // gives a 0.0000
What is the difference between the above two type casts?
Why cant we use:
float f = (float**)&i;
float f = *(float*)&i;
In this example:
char ch = *(char*)&d;
You are not casting from double to a char. You are casting from a double* to a char*; that is, you are casting from a double pointer to a char pointer.
C will convert floating point types to integer types when casting the values, but since you are casting pointers to those values instead, there is no conversion done. You get garbage because floating point numbers are stored very differently from fixed point numbers.
Read about how floating-point numbers are represented in memory; it's not the way you're expecting. The cast through (float *) in your first snippet reinterprets the int's bit pattern as a float: the small integer 37 has an all-zero exponent field, so it becomes a denormal float so close to zero that %f prints 0.000000.
If you need to convert int to float, the conversion is straightforward, because of the implicit conversion rules of C.
So, it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address in the "pointer to integer" &i is the same as the "pointer to float" (float *)(&i). However, the object pointed to by this last pointer is a float whose bits are the same as those of the object i, which is an integer.
Now, the main point in this discussion is that the bit-representation of objects in memory is very different for integers and for floats.
A positive integer is represented in explicit form, as its binary mathematical expression dictates.
However, the floating point numbers have other representation, consisting of mantissa and exponent.
So, the bits of an object, when interpreted as an integer, have one meaning, but the same bits, interpreted as a float, have another very different meaning.
The better question is: why does it EVER work? You see, when you do
typedef int T;    // replace with whatever
typedef double J; // replace with whatever
T s = 45;
J d = *(J*)(&s);
you are basically telling the compiler: take the T* address of s, reinterpret what it points to as J, and then read that value. No conversion of the value (no changing of the bytes) actually happens. Sometimes, by luck, the result looks meaningful, but often it will be garbage; worse, if the sizes are not the same (like reading a double through the address of a short) you can read past the object into unallocated data (heap corruption, sometimes!).
I'm using a function (Borrowing code from: http://www.exploringbinary.com/converting-floating-point-numbers-to-binary-strings-in-c/) to convert a float into binary; stored in a char. I need to be able to perform bitwise operations on the result though, so I've been trying to find a way to take the string and convert it to an integer so that I can shift the bits around as needed. I've tried atoi() but that seems to return -1.
Thus far, I have:
char binStringRaw[FP2BIN_STRING_MAX];
float myfloat;
printf("Enter a floating point number: ");
scanf("%f", &myfloat);
int castedFloat = (*((int*)&myfloat));
fp2bin(castedFloat, binStringRaw);
Where the input is "12.125", the output of binStringRaw is "10000010100001000000000000000000". However, attempting to perform a bitwise operation on this give an error: "Invalid operands to binary expression ('char[1077]' and 'int')".
P.S. - I apologize if this is a simple question or if there are some general problems with my code. I'm very new to C programming coming from Python.
"castedFloat already is the binary representation of the float, as the cast-operation tells it to interpret the bits of myfloat as bits of an integer instead of a float. "
EDIT: Thanks to Eric Postpischil:
Eric Postpischil in Comments:
"the above is not guaranteed by the C standard. Dereferencing a converted pointer is not fully specified by the standard. A proper way to do this is to use a union: int x = (union { float f; int i; }){ myfloat }.i;. (And one must still ensure that int and float are the same size in the C implementation being used.)"
Bitwise operations are only defined for integer-type values, such as char, int, long, ...; that's why it fails when used on the string (char array).
btw,
int atoi(char*)
returns the integer-value of a number written inside that string, eg.
atoi("12")
will return an integer with value 12
If you want to convert the binary representation stored in a string, you have to set the integer bit by bit according to the chars; a function to do this could look like this:
long intFromBinString(char* str){
    long ret = 0;           // initialize return value with zero
    int i = 0;              // stores the current position in the string
    while(str[i] != 0){     // in C, strings are NUL-terminated, so end of string is 0
        ret <<= 1;          // another bit in the string, so shift the result left
        if(str[i] == '1')   // if the new bit was 1, set it with a binary OR
            ret |= 0x01;
        i++;                // increment position in string
    }
    return ret;             // return result
}
The function fp2bin needs a double as its parameter. If you call it with castedFloat, the value (now interpreted as an integer) will be implicitly converted to double and passed on.
I assume you want to get a binary representation of the float, play some bitwise ops on it, and then pass it on.
In order to do that you have to cast it back to float, the reverse way you did before, so
int castedFloat = (*((int*)&myfloat));
{/*** some bitwise magic ***/}
float backcastedFloat = (*(float*)&castedFloat);
fp2bin(backcastedFloat, binStringRaw);
EDIT:(Thanks again, Eric):
union bothType { float f; int i; } both;
both.f = myfloat;
{/*** some bitwise magic on both.i ***/}
fp2bin(both.f, binStringRaw);
should work