printing a char array as a float in C

I'm trying to print a char array of 4 elements as a float number. The compiler (gcc) won't allow me to write z.s={'3','4','j','k'}; in the main() function. Why?
#include <stdio.h>

union n {
    char s[4];
    float x;
};
typedef union n N;

int main(void)
{
    N z;
    z.s[0] = '3';
    z.s[1] = '4';
    z.s[2] = 'j';
    z.s[3] = 'k';
    printf("f=%f\n", z.x);
    return 0;
}
The output of the program above is: f=283135145630880207619489792.000000, a number that is much larger than a float variable can store; the output should be, in scientific notation, 4.1977085E-8.
So what's wrong?

z.s={'3','4','j','k'}; would assign one array to another. C doesn't permit that, though you could declare a second array and memcpy it into the first, as sketched below.
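For instance, a minimal sketch of the memcpy route (my own illustration, reusing the union from the question):

#include <stdio.h>
#include <string.h>

union n {
    char s[4];
    float x;
};

int main(void)
{
    union n z;
    char tmp[4] = {'3', '4', 'j', 'k'};   /* initialization is allowed here...  */

    memcpy(z.s, tmp, sizeof z.s);         /* ...assignment is not, so copy */
    printf("f=%f\n", z.x);
    return 0;
}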
The largest finite value that a single-precision IEEE float can store is 3.4028234 × 10^38, so 283135145630880207619489792.000000, which is approximately 2.8313514 × 10^26, is most definitely in range.
Assuming your chars are otherwise correct, the knee-jerk guess would be that you've got your endianness wrong.
EDIT:
34jk, taken from left to right as on a big-endian machine, is:
0x33 0x34 0x6a 0x6b
= 0011 0011, 0011 0100, 0110 1010, 0110 1011
So:
sign = 0
exponent = 011 0011 0 = 102 (dec), or -25 allowing for offset encoding
mantissa = [1] 011 0100 0110 1010 0110 1011 = 11823723 / (2^23)
So the value would be about 4.2 × 10^-8, which is what you want.
In little endian:
0x6b 0x6a 0x34 0x33
= 0110 1011, 0110 1010, 0011 0100, 0011 0011
sign = 0
exponent = 110 1011 0 = 214 (dec) => 87
mantissa = [1]110 1010 0011 0100 0011 0011 = 15348787 / (2^23)
So the value would be about 2.8 * 10^26, which is what your program is outputting. It's a safe conclusion you're on a little endian machine.
Summary then: byte order is different between machines. You want to use your bytes the other way around — try kj43.

What you actually see is {'k','j','4','3'}: on your little-endian machine the last byte stored is the most significant one.
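To see the byte-order effect directly, here is a small sketch (my own, assuming a 4-byte float as in the question) that stores the same four bytes in both orders:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float x;

    memcpy(&x, "34jk", 4);       /* the question's order: 4.1977085E-8 on big-endian */
    printf("34jk -> %g\n", x);
    memcpy(&x, "kj43", 4);       /* reversed: 4.1977085E-8 on little-endian */
    printf("kj43 -> %g\n", x);
    return 0;
}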

Related

difficulty understanding signed not

I'm having trouble understanding why c equals -61 in the following program:

#include <stdio.h>

int main(void) {
    unsigned int a = 60; /* 60 = 0011 1100 */
    unsigned int b = 13; /* 13 = 0000 1101 */
    int c = 0;

    c = ~a; /* -61 = 1100 0011 */
    printf("Line 4 - Value of c is %d\n", c);
    return 0;
}
I do understand how the NOT operator works on 0011 1100 (the result being 1100 0011), but I'm not sure why the magnitude of the decimal number is increased by 1. Is this some sort of type conversion from unsigned int (a) into signed int (c)?
Converting a positive number to its negative in two's complement (the standard signed format) consists of inverting all the bits and then adding one.
Note that for simplicity I am using a single signed byte.
So if 60 = 0011 1100
then -60 = 1100 0011 + 1
= 1100 0100
And for a signed byte, the most significant bit has weight -128,
so
-60 = -128 + 64 + 4. The plain inversion alone, 1100 0011 = -128 + 64 + 2 + 1 = -61, is one less; that is why c = ~a prints -61.
You need the added 1 to account for the fact that the most significant bit is -128, while the largest positive number is 0111 1111 = 127. Every negative number has the -128 bit set, and that needs to be offset.
This is easy to see when you look at converting 0 to -0: invert 00000000 and you get 11111111, and adding one wraps you back to 00000000. Do the same with 1: invert 00000001 to get 11111110, add one, and you get 11111111, which is -1.
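The handy identity here is ~x == -x - 1. A minimal check (my own sketch; the conversion of ~a back to a signed int is implementation-defined, but yields -61 on ordinary two's-complement machines):

#include <stdio.h>

int main(void)
{
    unsigned int a = 60;
    int c = ~a;                           /* bit pattern of -61 */

    printf("%d %d\n", c, -(int)a - 1);    /* both print -61 */
    return 0;
}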

generating numbers using a n-bit number (similar to generating subsets of n-bit value)

Given a number 'n' and its binary representation, I want to generate all combinations that use only the bits set in 'n'.
for example: if n=11 and its binary representation is 1011, the combinations are:
0000
0001
0010
0011
1000
1001
1010
1011
example 2: if n=49 and its binary representation is 110001, the combinations are:
000000
000001
010000
010001
100000
100001
110000
110001
The easy way would be to write a C subroutine that generates these combinations one by one; however, I need an efficient way/algorithm for generating them (some bit-manipulation technique similar to the bit twiddling hacks).
Thanks.
Here's an illustration of a technique using simple bit twiddling. It uses the guaranteed semantics of binary arithmetic on unsigned values.
In the expression i = n & (i - n) below, all the sub-expressions i, n, (i - n) and n & (i - n) have the same type, unsigned int, and are unaffected by the integer promotion rules. Mathematically, the sub-expression (i - n) is evaluated modulo 2^m, where 2^m - 1 is the maximum value that can be represented by an unsigned int.
#include <stdio.h>

int main(void)
{
    unsigned int n = 49;
    unsigned int i;

    for (i = 0; ; i = n & (i - n)) {
        printf("%u", i);
        if (i == n)
            break;
        putchar(' ');
    }
    putchar('\n');
    return 0;
}
Example step
Assume n is 49, or 0000000000110001₂, and the current value of i is 16, or 0000000000010000₂. Then, for 16-bit wraparound (two's-complement) arithmetic, we have:
0000000000010000₂
0000000000110001₂ -
----------------
1111111111011111₂
0000000000110001₂ &
----------------
0000000000010001₂ (= 17)
This is vaguely similar to the well-known technique to find the lowest '1' bit in an unsigned value x as x & -x, which works because ANDing a number with its two's complement leaves only the lowest '1' bit set in the result.
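As a side-by-side illustration of that lowest-bit trick (a separate sketch, not part of the loop above):

#include <stdio.h>

int main(void)
{
    unsigned int x = 49;         /* 110001 */

    printf("%u\n", x & -x);      /* prints 1: the lowest set bit of x */
    return 0;
}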
1) Get the binary string for your number.
2) Create a tree node labeled Z.
3) Let node := Z.
4) Let N := 1.
5) Read symbol N of your string (from the left, 1-indexed).
6) If the symbol is 1, add two children to node; label the first child with node's label + 1, the other with node's label + 0. If the symbol is 0, add one child to node, labeled with node's label + 0.
7) Increment N.
Repeat steps 5-7 for all children of node, recursively, until step 7 increments N past the length of your original binary string; that condition terminates the recursion.
Once the tree has been constructed, the leaves are labeled with your allowed values, and any tree traversal that visits the leaves will recover the allowed binary strings. A compact C sketch of this idea follows below.
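Here is a minimal sketch of that construction in C (my own illustration, not code from the answer); the tree is left implicit in the recursion, each complete path plays the role of a leaf, and the 0 child is visited first so the values come out in ascending order:

#include <stdio.h>

/* Walk the bits of mask from position 'bit' down to 0. A set bit branches
   two ways (append 1 or append 0); a clear bit forces a 0. Every complete
   path down past bit 0 is a leaf and prints one allowed value. */
static void leaves(unsigned mask, int bit, unsigned prefix)
{
    if (bit < 0) {
        printf("%u\n", prefix);               /* a leaf: one allowed value */
        return;
    }
    leaves(mask, bit - 1, prefix);            /* child labeled ...0 */
    if (mask & (1u << bit))
        leaves(mask, bit - 1, prefix | (1u << bit));   /* child labeled ...1 */
}

int main(void)
{
    leaves(49, 5, 0);    /* 49 = 110001; bit 5 is its highest set bit */
    return 0;
}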
Iterating integers through only certain allowed bits? How about this function:
int NextMasked (int val, int mask) {
    return ((val | ~mask) + 1) & mask;
}
Step by step, the function:
takes the previous value val and sets to 1 all the bits that are NOT set in the mask;
increments the resulting value by 1;
clears all the bits we set in the first step, allowing only bits that are set in the mask to remain;
tada.wav: we have the new value, and we return it to the caller.
Let's test this with your example (mask = 11 and initially val = 0):
Here all values are in binary:
val = 0000, mask = 1011, ~mask = 0100 (bits inverted)
(old) (new)
val |~mask +1 &mask
0000 0100 0101 0001
0001 0101 0110 0010
0010 0110 0111 0011
0011 0111 1000 1000
1000 1100 1101 1001
1001 1101 1110 1010
1010 1110 1111 1011
1011 1111 10000 0000 <-- zeroed, you know it ended
Because this was so much fun, let's test your other example (mask = 49):
Here all values are in binary:
val = 000000, mask = 110001, ~mask = 001110 (bits inverted)
(old) (new)
val |~mask +1 &mask
000000 001110 001111 000001
000001 001111 010000 010000
010000 011110 011111 010001
010001 011111 100000 100000
100000 101110 101111 100001
100001 101111 110000 110000
110000 111110 111111 110001
110001 111111 1000000 000000 <-- again, wrapped around (zeroed)
It's late and I have not actually tested this on a computer, but it should work, or at least give you the idea... (The short driver below checks it.)
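A minimal test driver for the function above (my own addition; the expected output matches the table):

#include <stdio.h>

int NextMasked (int val, int mask) {
    return ((val | ~mask) + 1) & mask;
}

int main(void)
{
    int mask = 49;   /* 110001 */
    int val = 0;

    do {
        printf("%d ", val);
        val = NextMasked(val, mask);
    } while (val != 0);          /* wrapping back to zero means we're done */
    putchar('\n');               /* prints: 0 1 16 17 32 33 48 49 */
    return 0;
}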

addition of single precision negative floats in c [duplicate]

How do I subtract IEEE 754 numbers?
For example: 0,546875 - 32.875...
-> 0,546875 is 0 01111110 10001100000000000000000 in IEEE-754
-> -32.875 is 1 10000111 01000101111000000000000 in IEEE-754
So how do I do the subtraction? I know I have to make both exponents equal, but what do I do after that? Take the two's complement of the -32.875 mantissa and add it to the 0.546875 mantissa?
Really not any different from how you do it with pencil and paper. Okay, a little different:
123400 - 5432 = 1.234*10^5 - 5.432*10^3
the bigger number dominates; shift the smaller number's mantissa off into the bit bucket until the exponents match
1.234*10^5 - 0.05432*10^5
then perform the subtraction with the mantissas
1.234 - 0.05432 = 1.17968
1.17968 * 10^5
Then normalize (which in this case it already is).
That was with base 10 numbers.
In IEEE float, single precision
123400 = 0x1E208 = 0b11110001000001000
11110001000001000.000...
To normalize that, we have to shift the (binary) point 16 places to the left, so
1.1110001000001000 * 2^16
The exponent is biased, so we add 127 to 16 and get 143 = 0x8F. It is a positive number, so the sign bit is 0. Now we start to build the IEEE floating-point number: the leading
1 before the point is implied and not stored in single precision, so we get rid of it and keep the fraction.
sign bit, exponent, mantissa
0 10001111 1110001000001000...
0100011111110001000001000...
0100 0111 1111 0001 0000 0100 0...
0x47F10400
And if you write a program to see what a computer thinks 123400 is, you get the same thing:
0x47F10400 123400.000000
So we know the sign, exponent and mantissa for the first operand.
Now the second operand:
5432 = 0x1538 = 0b0001010100111000
Normalize: shift the point 12 places left
1010100111000.000
1.010100111000000 * 2^12
The exponent is biased: add 127 to 12 and get 139 = 0x8B = 0b10001011
Put it all together
0 10001011 010100111000000
010001011010100111000000
0100 0101 1010 1001 1100 0000...
0x45A9C000
And a computer program/compiler gives the same
0x45A9C000 5432.000000
Now to answer your question, using the component parts of the two floating-point numbers. I have restored the implied 1 here because we need it:
0 10001111 111100010000010000000000 - 0 10001011 101010011100000000000000
We have to line up our points, just like in grade school, before we can subtract; in this context that means shifting the number with the smaller exponent right, tossing mantissa bits off the end, until the exponents match:
0 10001111 111100010000010000000000 - 0 10001011 101010011100000000000000
0 10001111 111100010000010000000000 - 0 10001100 010101001110000000000000
0 10001111 111100010000010000000000 - 0 10001101 001010100111000000000000
0 10001111 111100010000010000000000 - 0 10001110 000101010011100000000000
0 10001111 111100010000010000000000 - 0 10001111 000010101001110000000000
Now we can subtract the mantissas. If the sign bits match then we actually subtract; if they don't match then we add. They match, so this will be a subtraction.
Computers perform a subtraction using addition logic, inverting the second operand on the way into the adder and asserting the carry-in bit, like this:
1
111100010000010000000000
+ 111101010110001111111111
==========================
And now, just like with paper and pencil, let's perform the add (the top row shows the carries):
1111000100000111111111111
111100010000010000000000
+ 111101010110001111111111
==========================
111001100110100000000000
or do it with hex on your calculator
111100010000010000000000 = 1111 0001 0000 0100 0000 0000 = 0xF10400
111101010110001111111111 = 1111 0101 0110 0011 1111 1111 = 0xF563FF
0xF10400 + 0xF563FF + 1 = 0x1E66800
1111001100110100000000000 = 1 1110 0110 0110 1000 0000 0000 = 0x1E66800
A little bit about how the hardware works: since this was really a subtract using the adder, we also invert the carry-out bit (on some computers it is left as is). So that carry out of 1 is a good thing, and we basically discard it. Had the carry out been 0, we would have needed more work, because it would signal a borrow. There is no borrow here, so our answer really is 0xE66800.
Very quickly, let's see that another way: instead of inverting and adding one, let's just use a calculator.
111100010000010000000000 - 000010101001110000000000 =
0xF10400 - 0x0A9C00 =
0xE66800
By trying to visualize it I perhaps made it worse. The result of the mantissa subtraction is 111001100110100000000000 (0xE66800); there was no movement in the most significant bit, so we end up with a 24-bit number whose msbit is 1, and no normalization is needed. To normalize, you would shift the mantissa left or right, adjusting the exponent for each bit shifted, until the most significant 1 lands in the leftmost of the 24 bits.
Now, stripping the leading 1. off the answer, we put the parts together:
0 10001111 11001100110100000000000
01000111111001100110100000000000
0100 0111 1110 0110 0110 1000 0000 0000
0x47E66800
If you have been following along by writing a program to do this, I did as well. This program bends the C standard by using a union for type punning, which the standard does not strictly guarantee. I got away with it with my compiler on my computer; don't expect it to work all the time.
#include <stdio.h>

union {
    float f;
    unsigned int u;
} myun;

int main(void)
{
    float a, b, c;

    a = 123400;
    b = 5432;
    c = a - b;
    myun.f = a; printf("0x%08X %f\n", myun.u, myun.f);
    myun.f = b; printf("0x%08X %f\n", myun.u, myun.f);
    myun.f = c; printf("0x%08X %f\n", myun.u, myun.f);
    return 0;
}
And our result matches the output of the above program; we got 0x47E66800 doing it by hand:
0x47F10400 123400.000000
0x45A9C000 5432.000000
0x47E66800 117968.000000
If you are writing a program to synthesize floating-point math, your program can simply perform the subtract on the mantissas; you don't have to do the invert-and-add-one thing, which over-complicates it, as we saw above. If you get a negative result, though, you need to play with the sign bit: invert your result, then normalize.
So:
1) Extract the parts: sign, exponent, mantissa.
2) Align your points by sacrificing mantissa bits from the number with the smaller exponent: shift that mantissa right until the exponents match.
3) This being a subtract operation, if the sign bits are the same you subtract the mantissas; if the sign bits differ, you add the mantissas.
4) If the result is zero, encode the IEEE value for zero as the result; otherwise:
5) Normalize the number: shift the mantissa right or left (the result of a 24-bit add/subtract can be 25 bits wide, or a subtract can cancel high bits and need a dramatic shift, either one bit right or many bits left) until you have a 24-bit number with the most significant 1 left-justified; 24 bits is for single-precision float. Put another way, normalizing means shifting until the number reads 1.something: if you had 0.001 you would shift the mantissa left 3 places, and if you had 11.10 you would shift right 1 place. Shifting the mantissa left decreases the exponent by one per bit; shifting right increases it, the same bookkeeping as when we converted from integer to float above.
6) For single precision, remove the leading 1. from the mantissa. If the exponent has overflowed, you get into building the infinity encoding instead. If the sign bits differed and you performed an add, you have to work out the sign bit of the result. If, as above, everything is fine, you just place the sign bit, exponent and mantissa into the result.
Multiply and divide is different, you asked about subract, so that is all I covered.
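If you want step 1) in code, here is a minimal sketch (my own illustration; it assumes the usual 32-bit unsigned int, as the answer's own program does, and uses memcpy instead of the union trick to sidestep the portability worry mentioned above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 123400;
    unsigned int u;
    unsigned int sign, exponent, mantissa;

    memcpy(&u, &f, sizeof u);         /* copy the bit pattern out */
    sign     = u >> 31;
    exponent = (u >> 23) & 0xFF;      /* still biased by 127      */
    mantissa = u & 0x7FFFFF;          /* the 23 stored bits       */
    if (exponent != 0)
        mantissa |= 1u << 23;         /* restore the implied 1.   */

    /* for 123400 this prints: sign=0 exp=143 (2^16) mantissa=0xF10400 */
    printf("sign=%u exp=%u (2^%d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    return 0;
}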
I'm presuming 0,546875 means 0.546875.
Firstly, to correct/clarify:
0 01111110 10001100000000000000000 = 0011 1111 0100 0110 0000 0000 0000 0000 =
0x3F460000 in IEEE-754 is 0.77343750, not 0.546875.
0.546875 in IEEE-754 is 0x3F0C0000 = 0011 1111 0000 1100 0000 0000 0000 0000 =
0 01111110 00011000000000000000000 = 1 x 1.00011 x 2^(01111110 - 127) =
1.00011 x 2^(126 - 127) = 1.00011 x 2^-1 = (1 + 1/16 + 1/32) x 1/2.
1 10000111 01000101111000000000000 = 1100 0011 1010 0010 1111 0000 0000 0000 =
0xc3a2f000 in IEEE-754 is -325.87500, not -32.875.
-32.875 in IEEE-754 is 0xC2038000 = 1100 0010 0000 0011 1000 0000 0000 0000 =
1 10000100 00000111000000000000000 = -1 x 1.00000111 x 2^(10000100 - 127) =
-1.00000111 x 2^(132 - 127) = -1.00000111 x 2^5 = (1 + 1/64 + 1/128 + 1/256) x -32.
32.875 in IEEE-754 is 0x42038000 = 0100 0010 0000 0011 1000 0000 0000 0000 =
0 10000100 00000111000000000000000 = 1 x 1.00000111 x 2^(10000100 - 127) =
1.00000111 x 2^(132 - 127) = 1.00000111 x 2^5 = (1 + 1/64 + 1/128 + 1/256) x 32.
The subtraction is carried out as follows:
1.00011000 x 1/2
- 1.00000111 x 32
------------------
==>
0.00000100011 x 32
- 1.00000111000 x 32
---------------
==>
-1 x (
1.00000111000 x 32
- 0.00000100011 x 32
---------------
)
==>
-1 x (
1.00000110112 x 32 // borrow: the digit 2 records it (the final 1000₂ rewritten as 0112)
- 0.00000100011 x 32
---------------
)
==>
-1 x (
1.00000110112 x 32
- 0.00000100011 x 32
---------------
1.00000010101 x 32
)
==>
-1.00000010101 x 32 =
-1.00000010101000000000000 x 32 =
-1.00000010101000000000000 x 2^5 =
-1.00000010101000000000000 x 2^(132 - 127) =
-1.00000010101000000000000 x 2^(10000100 - 127)
==>
1 10000100 00000010101000000000000 =
1100 0010 0000 0001 0101 0000 0000 0000 =
0xc2015000
Note that in this example we did not need to handle underflow, which is more complicated.
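As a machine cross-check of the hand computation (my own sketch, a variant of the earlier program with memcpy replacing the union):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float c = 0.546875f - 32.875f;
    unsigned int u;

    memcpy(&u, &c, sizeof u);        /* grab the bit pattern */
    printf("0x%08X %f\n", u, c);     /* prints: 0xC2015000 -32.328125 */
    return 0;
}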

bitwise operations in c explanation

I have the following code in c:
unsigned int a = 60; /* 60 = 0011 1100 */
int c = 0;
c = ~a; /*-61 = 1100 0011 */
printf("c = ~a = %d\n", c );
c = a << 2; /* 240 = 1111 0000 */
printf("c = a << 2 = %d\n", c );
The first output is -61 while the second one is 240. Why does the first printf produce the two's-complement reading of 1100 0011, while the second one just converts 1111 0000 to its decimal equivalent?
You have assumed that an int is only 8 bits wide. This is probably not the case on your system, which is likely to use 16 or 32 bits for int.
In the first example, all the bits are inverted. This is actually a straight inversion, not two's complement:
1111 1111 1111 1111 1111 1111 1100 0011 (32-bit)
1111 1111 1100 0011 (16-bit)
In the second example, when you shift it left by 2, the highest-order bit is still zero. You have misled yourself by depicting the numbers as 8 bits in your comments.
0000 0000 0000 0000 0000 0000 1111 0000 (32-bit)
0000 0000 1111 0000 (16-bit)
Try to avoid doing bitwise operations with signed integers; it can easily lead you into implementation-defined or undefined behavior.
The situation here is that you're taking unsigned values and assigning them to a signed variable. For ~a the value doesn't fit in an int, so the conversion is implementation-defined; you see -61 because the bit pattern of ~60 is also the two's-complement representation of -61. On the other hand 60 << 2 comes out the same either way, because 240 has the same representation as a signed and as an unsigned integer.
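A small illustration of that point (my own sketch, assuming the usual 32-bit int; the signed conversion is implementation-defined, as noted):

#include <stdio.h>

int main(void)
{
    unsigned int a = 60;

    printf("%u\n", ~a);        /* 4294967235: ~a as the unsigned value it is */
    printf("%d\n", (int)~a);   /* -61: the same bits read as a signed int    */
    printf("%u\n", a << 2);    /* 240: in range either way, so no surprise   */
    return 0;
}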

C - data type conversion without computer

I have a sample question from a test at my school. What is the simplest way to solve it on paper?
The question:
The run-time system uses two's complement for the representation of integers. Data type int has size 32 bits; data type short has size 16 bits. What does printf show? (The answer is ffffe43c.)
short int x = -0x1bc4; /* !!! short */
printf ( "%x", x );
Let's do it in two steps: 0x1bc4 = 0x1bc3 + 1.
First of all, compute this in 32 bits:
0 - 1 = ffffffff
then
ffffffff - 1bc3
This can be done symbol by symbol:
  ffffffff
- 00001bc3
----------
  ffffe43c
and you get the result you have.
Since your x is negative, take the two's complement of its magnitude, which yields the stored bit pattern:
2's(x) = ~(x) + 1
2's(0x1BC4) = ~(0x1BC4) + 1 => 0xE43C
0x1BC4 = 0001 1011 1100 0100
~0X1BC4 =1110 0100 0011 1011
+1 = [1]110 0100 0011 1100 (brackets around MSB)
which is how your number is represented internally.
Now %x expects a 32-bit unsigned int, so the short is promoted first: your computer sign-extends the value, copying the MSB into the upper 16 bits, which yields:
1111 1111 1111 1111 1110 0100 0011 1100 == 0xFFFFE43C
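A quick program to confirm both views (my own addition; %hx is standard C and prints just the low 16 bits):

#include <stdio.h>

int main(void)
{
    short int x = -0x1bc4;

    printf("%x\n", x);     /* ffffe43c: x is promoted (sign-extended) to int */
    printf("%hx\n", x);    /* e43c: the bare 16-bit pattern                  */
    return 0;
}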
