A small program for understanding unions in C [duplicate]

Suppose I define a union like this:
#include <stdio.h>

int main() {
    union u {
        int i;
        float f;
    };
    union u tst;
    tst.f = 23.45;
    printf("%d\n", tst.i);
    return 0;
}
Can somebody tell me what the memory where tst is stored will look like?
I am trying to understand the output 1102813594 that this program produces.

It depends on the implementation (compiler, OS, etc.) but you can use the debugger to actually see the memory contents if you want.
For example, in my MSVC 2008:
0x00415748 9a 99 bb 41
is the memory contents. Read with the least significant byte on the left (Intel, a little-endian machine), this is 0x41bb999a, or indeed 1102813594.
Generally, however, the integer and float are stored in the same bytes. Depending on how you access the union, you get the integer or floating point interpretation of those bytes. The size of the memory space, again, depends on the implementation, although it's usually the largest of its constituents aligned to some fixed boundary.
Why is the value what it is in your (or my) case? You should read about floating-point number representation for that (look up IEEE 754).
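If you don't want to fire up a debugger, a minimal sketch along these lines dumps the same bytes (byte-wise access through unsigned char is always allowed; the "9a 99 bb 41" output assumes a little-endian IEEE 754 machine):
#include <stdio.h>

int main(void) {
    union { int i; float f; } tst;
    tst.f = 23.45f;
    unsigned char *p = (unsigned char *)&tst;   /* byte-wise view of the union */
    for (size_t k = 0; k < sizeof tst; k++)
        printf("%02x ", p[k]);                  /* prints "9a 99 bb 41 " here */
    printf("\n");
    return 0;
}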

The result depends on the compiler implementation, but for most x86 compilers float and int are the same size (32 bits). Wikipedia has a pretty good diagram of the layout of a 32-bit float, http://en.wikipedia.org/wiki/Single_precision_floating-point_format, which can help explain 1102813594.
If you print out the int as a hex value, it will be easier to figure out.
printf("%x\n", tst.i);

With a union, both members are stored starting at the same memory location. A float is stored in an IEEE format [edit: as pointed out by others, IEEE 754]: a sign bit, an exponent, and a normalized mantissa (the significand is always between 1 and 2, so the leading 1 bit is implied); it is not two's complement.
You are taking the 4 bytes of that representation and reading them as an int (you can look up exactly which bits go where in the 32 bits that a float takes up). The resulting integer has no arithmetic relationship to the original value, so it basically means nothing and isn't useful as an int. That is, unless you know why you would want to do something like that, but usually a float and int combo isn't very useful.
And, strictly speaking, it is implementation-defined: the C standard does not dictate a particular float format (IEEE 754 support is optional, via Annex F), although virtually every current platform uses it.

In a union, the members share the same memory, so we can read a float value back as an integer value.
The floating-point format differs from the integer representation, and the union lets us see that difference.
For example:
If I store the integer value 12 in the union (32 bits) and then read it as a float, I get those same bits interpreted under the float layout.
It is stored as sign (1 bit), exponent (8 bits) and significand precision (23 bits).
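A hedged sketch of that idea (assuming IEEE 754 single precision; the exact values printed are implementation-dependent):
#include <stdio.h>

int main(void) {
    union { unsigned int i; float f; } u;
    u.i = 12;
    /* 12 is 0x0000000c: read as a float pattern, the exponent field is 0, so it is a tiny denormal */
    printf("integer 12 read as float: %g\n", u.f);            /* about 1.7e-44 */
    u.f = 12.0f;
    /* 12.0f is 0x41400000: sign 0, biased exponent 130 (2^3), significand 1.5 */
    printf("float 12.0 read as integer bits: 0x%x\n", u.i);   /* 0x41400000 */
    return 0;
}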

I wrote a little program that shows what happens when you reinterpret the bit pattern of a 32-bit float as a 32-bit integer. It gives you the exact same output you are experiencing:
#include <iostream>

int main()
{
    float f = 23.45;
    int x = *reinterpret_cast<int*>(&f);  // reinterpret the float's bytes as an int
    std::cout << x;                       // 1102813594
}

Related

Can someone explain what maxBit is?

I am trying to understand what maxBit is in the following code and what it represents.
When I print min and max, I get numbers that make no sense to me.
Thank you.
#include <stdio.h>
#include <math.h>

int main() {
    union { double a; size_t b; } u;
    u.a = 12345;
    size_t max = u.b;
    u.a = 6;
    size_t min = u.b;
    int maxBit = floor(log(max - min) / log(2));
    printf("%d", maxBit);
    return 0;
}
This code appears to be using a horrible kludge. I am one of the more welcoming participants here regarding tolerating code that uses compiler extensions or other things beyond the C standard, but this code does simply unnecessary things for no apparent good purpose. It relies on size_t being 64 bits. It may be 64 bits in some specific C implementation this was written for, but that is not portable. C implementations in which size_t is 64 bits are generally modern, and modern implementations ought to support the uint64_t of <stdint.h>, which would be an appropriate type for this. So better code would have used uint64_t.
Unless there is some quite surprising motivation for this and other issues in the code, it is low quality, bad code. Do not use it, and regard any code from the same source with skepticism.
That said, the code likely assumes the IEEE-754 binary64 format is used for double, and max-min gives the difference between the representations of 12345 and 6. log(max-min) / log(2) finds the base-two logarithm of max-min, and the integer portion of that will be the index of the highest bit that changed.
For 12345, the exponent field is 1036. For 6, the exponent field is 1025. The difference is 11 (binary 1011), in which the highest set bit is bit 3 of the exponent field. The field runs from bits 62 to 52 in the binary64 format, so bit 3 in the exponent field is bit 55 (52+3) in the whole 64 bits of the representation. So maxBit will be 55.
However, there is no apparent significance to this. There is no great value in knowing that bit 55 is the highest bit set in the difference between the representations of 12345 and 6. I am familiar with a variety of IEEE-754 bit-twiddling hacks, and I do not recognize this. I expect nobody can tell you much more about this without context, such as where the code came from or how it is used.
From the C17 standard, 6.5.2.3 Structure and union members, footnote 97:
If the member used to read the contents of a union object is not the
same as the member last used to store a value in the object, the
appropriate part of the object representation of the value is
reinterpreted as an object representation in the new type as described
in 6.2.6 (a process sometimes called “type punning”). This might be a
trap representation.
Therefore, when you store u.a = 12345 and then access size_t max = u.b, the bit pattern in the memory of u.a is reinterpreted as a size_t. Since u.a is a double, it is represented in IEEE 754 format.
The values stored in max and min are:
4668012349850910720 (IEEE 754 bits: 0100000011001000000111001000000000000000000000000000000000000000)
4618441417868443648 (IEEE 754 bits: 0100000000011000000000000000000000000000000000000000000000000000)
Then max - min = 49570931982467072, log(max-min)/log(2) = 55.460344, and floor(55.460344) = 55. That is the reason the output is 55.
PS: There are two common IEEE 754 formats: single precision (32 bits) and double precision (64 bits). Please visit this website on IEEE 754 for more details.
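A small sketch that reproduces those numbers (it assumes IEEE 754 binary64 for double and uses uint64_t, as recommended above):
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    union { double a; uint64_t b; } u;
    u.a = 12345;
    uint64_t max = u.b;   /* 4668012349850910720 */
    u.a = 6;
    uint64_t min = u.b;   /* 4618441417868443648 */
    printf("max - min = %llu\n", (unsigned long long)(max - min));                /* 49570931982467072 */
    printf("maxBit    = %d\n", (int)floor(log((double)(max - min)) / log(2.0)));  /* 55 */
    return 0;
}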

getting exponent of a floating number in c

Sorry if this has already been asked, and I've seen other ways of extracting the exponent of a floating-point number, however this is what is given to me:
unsigned f2i(float f)
{
    union {
        unsigned i;
        float f;
    } x;
    x.i = 0;
    x.f = f;
    return x.i;
}
I'm having trouble understanding this union datatype, because shouldn't the return x.i at the end always make f2i return a 0?
Also, what application could this data type even be useful for? For example, say I have a function:
int getexponent(float f){
}
This function is supposed to get the exponent value of the floating point number with bias of 127. I've found many ways to make this possible, however how could I manipulate the f2i function to serve this purpose?
I appreciate any pointers!
Update!!
Wow, years later and this just seems trivial.
For those who may be interested, here is the function!
int getexponent(float f) {
    unsigned f2u(float f);
    unsigned int ui = (f2u(f) >> 23) & 0xff; // shift right by 23 and mask with 0xff to isolate the biased exponent
    int bias = 127;                          // exponent bias
    if (ui == 0) return 1 - bias;            // special case: zero and denormals
    else if (ui == 255) return 11111111;     // special case: infinity/NaN
    return ui - bias;
}
I'm having trouble understanding this union datatype
The union data type is a way for a programmer to indicate that some variable can be one of a number of different types. The wording of the C11 standard is something like "a union contains at most one of its members". It is used for things like parameters that may be logically one thing or another. For example, an IP address might be an IPv4 address or an IPv6 address so you might define an address type as follows:
struct IpAddress
{
    bool isIPv6;
    union
    {
        uint8_t v4[4];
        uint8_t v6[16];
    } bytes;
};
And you would use it like this:
struct IpAddress address = // Something
if (address.isIPv6)
{
    doSomeV6ThingWith(address.bytes.v6);
}
else
{
    doSomeV4ThingWith(address.bytes.v4);
}
Historically, unions have also been used to get the bits of one type into an object of another type. This is because, in a union, the members all start at the same memory address. If I just do this:
float f = 3.0;
int i = f;
The compiler will insert code to convert a float to an integer, so the exponent will be lost. However, in
union
{
    unsigned int i;
    float f;
} x;

x.f = 3.0;
int i = x.i;
i now contains the exact bits that represent 3.0 in a float. Or at least you hope it does. There's nothing in the C standard that says float and unsigned int have to be the same size. There's also nothing in the C standard that mandates a particular representation for float (well, Annex F says floats conform to IEC 60559, but supporting Annex F is optional). So the above code is, at best, non-portable.
To get the exponent of a float, the portable way is to use the frexpf() function defined in math.h.
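For example, a minimal sketch (note that frexpf returns a significand in [0.5, 1), so its exponent is one greater than the IEEE unbiased exponent):
#include <stdio.h>
#include <math.h>

int main(void) {
    int exp;
    float m = frexpf(23.45f, &exp);                        /* m = 0.7328125, exp = 5: 23.45 = 0.7328125 * 2^5 */
    printf("%f * 2^%d\n", m, exp);
    printf("biased IEEE exponent: %d\n", exp - 1 + 127);   /* 131 */
    return 0;
}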
how could I manipulate the f2i function to serve this purpose?
Let's make the assumption that a float is stored in IEC 60559 format in 32 bits, which Wikipedia thinks is the same as IEEE 754. Let's also assume that integers are stored in little-endian format.
union
{
    uint32_t i;
    float f;
} x;

x.f = someFloat;
uint32_t bits = x.i;
bits now contains the bit pattern of the floating point number. A single precision floating point number looks like this
SEEEEEEEEMMMMMMMMMMMMMMMMMMMMMMM
^        ^                      ^
bit 31   bit 22                 bit 0
Where S is the sign bit, E is an exponent bit, M is a mantissa bit.
So having got your uint32_t, you just need to do some shifting and masking:
uint32_t exponentWithBias = (bits >> 23) & 0xff;
Because it's a union, x.i and x.f have the same address; this allows you to reinterpret one data type as another. In this scenario the union is first zeroed out by x.i = 0; and then filled with f. Then x.i is returned, which is the integer representation of the float f. If you then shift and mask that value, you get the exponent of the original f, because of the way a float is laid out in memory.
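Putting the pieces together, one possible getexponent along those lines looks like this (only a sketch: it assumes IEEE 754 single precision and a 32-bit unsigned, the same assumptions as the f2i in the question, and it ignores zeros, denormals, infinities and NaNs):
#include <stdio.h>

int getexponent(float f) {
    union { unsigned i; float f; } x;
    x.f = f;
    return (int)((x.i >> 23) & 0xff) - 127;  /* biased exponent field minus the bias */
}

int main(void) {
    printf("%d\n", getexponent(8.0f));    /* 3 */
    printf("%d\n", getexponent(0.25f));   /* -2 */
    return 0;
}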
I'm having trouble understanding this union datatype, because shouldn't the return x.i at the end always make f2i return a 0?
The line x.i = 0; is a bit paranoid and shouldn't be necessary. Given that unsigned int and float are both 32 bits, the union creates a single chunk of 32 bits in memory, which you can access either as a float or as the pure binary representation of that float, which is what the unsigned is for. (It would have been better to use uint32_t.)
This means that the lines x.i = 0; and x.f = f; write to the very same memory area twice.
What you end up with after the function is the pure binary notation of the float. Parsing out the exponent or any other part from there is very much implementation-defined, since it depends on the floating-point format and endianness. How to represent FLOAT number in memory in C might be helpful.
That use of a union is strongly discouraged, as it is strongly architecture-dependent and compiler-implementation-dependent; both things make it almost impossible to pin down a correct way to obtain the information you request.
There are portable ways of doing that, and all of them have to deal with computing a logarithm to base ten. If you take the integer part of log10(x) you'll get the number you want:
int power10 = (int)log10(x);

double log10(double x)   /* log10 is already provided by math.h; shown here only to illustrate what it computes */
{
    return log(x) / log(10.0);
}
power10 is the exponent of 10 by which you multiply the mantissa to get the number back; if you divide the original number by 10 raised to that power, you'll get the mantissa.
Be careful: floating-point numbers are normally stored internally in base two, which means the exponent that is actually stored is a power of two, not a power of ten.
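A quick sketch of that idea (it yields the decimal exponent and mantissa, not the binary exponent stored in the IEEE representation):
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 12345.0;
    int power10 = (int)log10(x);            /* 4 */
    double mant = x / pow(10.0, power10);   /* 1.2345 */
    printf("%g = %g * 10^%d\n", x, mant, power10);
    return 0;
}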

I don't know how to convert 16 byte hexadecimal to floating point

I have been trying to do this conversion for a while and searching the internet for an answer, but I could not find one. I understand that I can convert hexadecimal to decimal either through some serious programming or manually through math.
I am looking to convert a 16-byte hexadecimal value to floating point. If there is any way to do that, then please share. I have searched and found IEEE 754, but either it is not working for me or I am not comprehending it. Can I do it manually through some equation (I think I heard about one), or with a neat C program that can do it?
Please help! Any help would be highly appreciated.
You need to study the IEEE floating point spec.
This would be quite straightforward in Java. You have handy methods like Float.floatToRawIntBits(float x) and Float.intBitsToFloat(int x)
You might be able to do it with a union.
In C it's a bit more hacky. You can abuse a union. Unions in C reuse the same memory for two different members. A union like
union DoubleLong {
    long l;
    double d;
} u;
would allow you to treat the same bit of memory as either a long u.l or a double u.d. They are both 8 bytes, so they take the same space. So doing u.d = M_PI; printf("%lx\n", u.l); prints the binary representation of pi, 0x400921fb54442d18.
For 16 bytes we need the union to contain an array of two 8-byte longs.
#include <stdio.h>

union Data {
    long i[2];
    long double f;
} u;

int main(int argc, char const *argv[]) {
    // Using random IPv6 address 2602:306:cecd:7130:5421:a679:6d71:a660
    // Store it in two separate 8-byte longs
    u.i[0] = 0x2602306cecd7130;
    u.i[1] = 0x5421a6796d71a660;
    // Print out in hexadecimal
    printf("%.15La %lx %lx\n", u.f, u.i[0], u.i[1]);
    // Print out in decimal
    printf("%.15Le %ld %ld\n", u.f, u.i[0], u.i[1]);
    return 0;
}
One problem is that 16-byte floating-point numbers might not be defined on your system. float is typically 32 bits - 4 bytes, double is 64 bits - 8 bytes. There is a long double type, but on my Mac it's only 80 bits - 10 bytes. It might be simpler to convert to two double-precision numbers. So on my system only the last 4 hexadecimal digits of the second number are significant.
Not all hexadecimal patterns correspond to valid floating-point numbers; a lot of values correspond to NaNs. If the higher bits are 7FFF or FFFF (or 7FF, FFF for double) the result will be either infinity or NaN.

Binary int to double without typecasting

I am doing some microcontroller programming in C. I am reading from various sensors 4 bytes that represent either float, int or unsigned int. Currently, I am storing them in unsigned int format in the microcontroller even though the data may be float since to a microcontroller they are just bytes. This data is then transferred to PC. I want the PC to interpret the binary data as a float or an int or an unsigned int whenever I wish. Is there a way to do that?
Ex.
unsigned int value = 0x40040000; // this bit pattern is 2.0625 when read as a 32-bit float
double result = convert_binary_to_double(value); // result = 2.0625
Thanks.
PS: I tried typecasting and that does not work.
Keeping in mind that what you're asking for isn't entirely portable, something like this will probably do the job:
float value = *(float *)&bits;
The other obvious possibility is to use a union:
typedef union {
    unsigned int uint_val;
    int int_val;
    float float_val;
} values;

values v;
v.uint_val = 0x40040000;
float f = v.float_val;
Either will probably work fine, but neither guarantees portability.
The shortest way is to cast the address of the float (resp int) to the address of an int (resp float) and to dereference that: for instance, double result = * (float*) &value;. Your optimizing compiler may compile this code into something that does not work as you intended though (see strict aliasing rules).
A way that works more often is to use a union with an int field and a float field.
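For instance, a sketch of the union approach (it assumes unsigned int and float are both 32 bits; the function name simply matches the one used in the question):
#include <stdio.h>

double convert_binary_to_double(unsigned int value) {
    union { unsigned int u; float f; } pun;
    pun.u = value;   /* reinterpret the 4 bytes as a float... */
    return pun.f;    /* ...and widen to double on return */
}

int main(void) {
    printf("%f\n", convert_binary_to_double(0x40040000u));   /* 2.062500 */
    return 0;
}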
Why don't you do something like:
double *x = (double *)&value;
or a union?
It's a terrible job :)
This talks about their representation in memory (according to IEEE 754), so with various bitwise operations you have to extract the sign, the exponent and the mantissa from your microcontroller's output, then compute number = (-1)^sign * mantissa * 2^(exponent - bias), where the bias is 127 for a 32-bit float (1023 for a 64-bit double).
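As a sketch of that manual route for a 32-bit pattern (bias 127; it assumes IEEE 754 single precision, ignores zeros, denormals, infinities and NaNs, and the function name is just illustrative):
#include <stdio.h>
#include <math.h>

double decode_float_bits(unsigned int bits) {
    int sign          = (bits >> 31) & 0x1;
    int exponent      = (bits >> 23) & 0xff;
    unsigned fraction = bits & 0x7fffff;
    double mantissa   = 1.0 + fraction / 8388608.0;   /* 1.f, with 2^23 = 8388608 */
    return (sign ? -1.0 : 1.0) * mantissa * pow(2.0, exponent - 127);
}

int main(void) {
    printf("%f\n", decode_float_bits(0x40040000u));   /* 2.062500 */
    return 0;
}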
What do you mean by saying "type casting does not work"?
What exactly did you try?
For example, did you try something like this:
double convert_binary_to_double(unsigned int value)
{
    return *((double*)&value);
}
Have you tried using the itoa() function? It's a neat little function often used for converting int to ASCII.
To the PC they're also just bytes, and as such could be copied into any 4-byte int, 4-byte unsigned int or 4-byte float field, and the computer would be quite happy. You will need to envelope them, or somehow tag them as int, unsigned, or float. There is NO WAY the compiler can tell from looking at any 32-bit collection of bits what its type is. - If you need a better explanation, comment me back and I'll give you the real long version - Joe
- Maybe I misunderstood your question. I thought you wanted to ship over 4 bytes of data, and have the computer magically know whether the data was originally a 32-bit int, 32-bit unsigned or 32-bit float. There is no way for the computer to know the answer without additional information.

How to get the upper-/lower machine-word of a double according to IEEE 754 (ansi-c)?

I want to use the sqrt implementation of fdlibm.
This implementation defines (according to the endianness) some macros for accessing the lower/upper 32 bits of a double in the following way (here: only the little-endian version):
#define __HI(x) *(1+(int*)&x)
#define __LO(x) *(int*)&x
#define __HIp(x) *(1+(int*)x)
#define __LOp(x) *(int*)x
The readme of fdlibm says the following (a little bit shortened):
Each double precision floating-point number must be in IEEE 754
double format, and that each number can be retrieved as two 32-bit
integers through the using of pointer bashing as in the example
below:
Example: let y = 2.0
    double fp number y:  2.0
    IEEE double format:  0x4000000000000000
Referencing y as two integers:
    *(int*)&y, *(1+(int*)&y) = {0x40000000, 0x0}  (on sparc)
                               {0x0, 0x40000000}  (on 386)
Note: Four macros are defined in fdlibm.h to handle this kind of
retrieving:
__HI(x) the high part of a double x
(sign,exponent,the first 21 significant bits)
__LO(x) the least 32 significant bits of x
__HIp(x) same as __HI except that the argument is a pointer
to a double
__LOp(x) same as __LO except that the argument is a pointer
to a double
If the behavior of pointer bashing is undefined, one may hack on the
macro in fdlibm.h.
I want to use this implementation and these macros with the cbmc model checker, which should be conformable with ansi-c.
I don't know exactly what's wrong, but the following example shows that these macros aren't working (little-endian was chosen, 32-bit machine word was chosen):
temp=24376533834232348.000000l (0100001101010101101001101001010100000100000000101101110010000111)
high=0 (00000000000000000000000000000000)
low=67296391 (00000100000000101101110010000111)
Both seem to be wrong. High seems to be empty for every value of temp.
Any new ideas for accessing both 32-bit words with ANSI C?
UPDATE: Thanks for all your answers and comments. All of your proposals worked for me. For the moment I decided to use R..'s version and marked it as the favorite answer because it seems to be the most robust in my tool regarding endianness.
Why not use a union?
union {
    double value;
    struct {
        int upper;
        int lower;
    } words;
} converter;

converter.value = 1.2345;
printf("%d", converter.words.upper);
(Note that the behaviour of this code is implementation-dependent; it relies on the internal representation and on specific data sizes.)
On top of that, if you make that struct contain bitfields, you can access the individual floating-point parts (sign, exponent and mantissa) separately:
union {
    double value;
    struct {
        int upper;
        int lower;
    } words;
    struct {
        long long mantissa : 52;  // not two's complement!
        int exponent : 11;        // not two's complement!
        int sign : 1;
    };
} converter;
Casting pointers like you're doing violates the aliasing rules of the C language (pointers of different types may be assumed by the compiler not to point to the same data, except in certain very restricted cases). A better approach might be:
#define REP(x) ((union { double v; uint64_t r; }){ x }).r
#define HI(x) (uint32_t)(REP(x) >> 32)
#define LO(x) (uint32_t)(REP(x))
Note that this also fixes the endian dependency (assuming the floating-point and integer endianness are the same) and avoids the illegal __ prefix on the macro names.
An even better way might be not breaking it into high/low portions at all, and using the uint64_t representation REP(x) directly.
From a standards perspective, this use of unions is a little bit suspect, but better than the pointer casts. Using a cast to unsigned char * and accessing the data byte-by-byte would be better in some ways, but worse in that you have to worry about endian considerations, and probably a lot slower.
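A sketch of that byte-by-byte route (portable as far as aliasing goes; the endianness handling below simply assumes the integer and floating-point byte orders match, and the helper name double_rep is just illustrative):
#include <stdio.h>
#include <stdint.h>

/* Reassemble the 64-bit representation of a double from its bytes. */
static uint64_t double_rep(double x) {
    const unsigned char *p = (const unsigned char *)&x;
    const uint16_t one = 1;
    int little = *(const unsigned char *)&one;   /* detect integer endianness */
    uint64_t rep = 0;
    for (int k = 0; k < 8; k++)
        rep |= (uint64_t)p[little ? k : 7 - k] << (8 * k);
    return rep;
}

int main(void) {
    uint64_t rep = double_rep(2.0);
    printf("high = %08x, low = %08x\n",
           (unsigned)(rep >> 32), (unsigned)(rep & 0xffffffffu));
    /* prints high = 40000000, low = 00000000 on an IEEE 754 machine */
    return 0;
}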
I would suggest taking a look at the disassembly to see exactly why the existing "pointer-bashing" method does not work. In its absence, you might use something more traditional like a binary shift (if you're on a 64-bit system).
