I'm having trouble understanding why c equals -61 in the following program:
#include <stdio.h>

int main() {
    unsigned int a = 60;   // 60 = 0011 1100
    unsigned int b = 13;   // 13 = 0000 1101
    int c = 0;
    c = ~a;                // -61 = 1100 0011
    printf("Line 4 - Value of c is %d\n", c);
    return 0;
}
I do understand how the NOT operator works on 0011 1100 (the result being 1100 0011). But I'm not sure why the magnitude of the decimal number is increased by 1. Is this some sort of type conversion from unsigned int (a) to signed int (c)?
Negating a number in two's complement (the standard signed format) means inverting all the bits and then adding one.
Note that for simplicity I am using a single signed byte.
So if 60 = 0011 1100
Then c = 1100 0011 + 1
= 1100 0100
And for a signed byte, the most significant bit is negative,
so
c = -128 + 64 + 4 = -60
So inversion plus one gives -60; the inversion alone (which is what ~60 actually computes) is therefore one less, i.e. -61. You need to add 1 because the most significant bit is worth -128, while the largest positive number is 0111 1111 = 127. Every negative number has a 1 in the -128 position, which needs to be offset.
This is easy to see by converting 0 to -0: invert 00000000 to get 11111111, and adding one wraps back around to 00000000. Do the same with 1: invert 00000001 to get 11111110, add one, and you get 11111111, which is -1, the negative number closest to zero.
Related
I recently came across this question and the answer given by @chux - Reinstate Monica.
Quoting lines from their answer, "This is implementation-defined behavior. The assigned value could have been 0 or 1 or 2... Typically, the value is wrapped around ("modded") by adding/subtracting 256 until in range. 100 + 100 -256 --> -56."
Code:
#include <stdio.h>
int main(void)
{
char a = 127;
a++;
printf("%d", a);
return 0;
}
Output: -128
In most C compilers, the char type takes 1 byte. Strictly speaking, I'm assuming a 16-bit system where char takes 1 byte.
When a = 127, its binary representation inside the computer is 0111 1111, increasing it with 1 should yield the value
0111 1111 + 0000 0001 = 1000 0000
which is equal to -0 (considering a signed representation where the left-most bit gives the sign: 0 = + and 1 = -). So why is the output -128?
Is it because of the "INTEGER PROMOTION RULE"? I mean, for the expression a + 1, a gets converted to int (2 bytes) before the + operation, and then its binary representation in memory becomes 1111 1111 1000 0000, which equals -128 and would explain the output of -128. But then this assumption of mine conflicts with the quoted lines from chux - Reinstate Monica about wrapping the values.
1000 0000 which is equal to -0...
Ones' complement has a -0, but most computers use two's complement which does not.
In two's complement notation the left-most bit carries the weight -(2^(N-1)). In your case, 1000 0000, the left-most bit represents -(1 * 2^(8-1)) = -128, and that's why the output is -128.
Your char is an 8 bit signed integer in which case 1000 0000 is -128. We can test what 1000 0000 is conveniently using the GNU extension which allows binary constants.
char a = 0b10000000;
printf("%d\n", a); // -128
char, in this implementation, is a signed 8-bit integer. Adding 1 to 127 causes integer overflow to -128.
What about integer promotion? Integer promotion happens during the calculation, but the result is still a char. 128 can't fit in our signed 8-bit char, so it overflows to -128.
Integer promotion is demonstrated by this example.
char a = 30, b = 40;
char c = (a * b);
printf("%d\n", c); // -80
char d = (a * b) / 10;
printf("%d\n", d); // 120
char c = (a * b); is -80, but char d = (a * b) / 10; is 120. Why? Shouldn't it be -8? The answer here is integer promotion. The math is done as native integers, but the result must still be stuffed into an 8-bit char. (30 * 40) is 1200 which is 0100 1011 0000. Then it must be stuffed back into an 8 bit signed integer; that's 1011 0000 or -80.
For the other calculation, (30 * 40) / 10 == 1200 / 10 == 120 which fits just fine.
I don't fully understand how the "-" operator affects the following code:
#define COMP(x) ((x) & -(x))
unsigned short a = 0xA55A;
unsigned short b = 0x0400;
Could someone explain what COMP(a) and COMP(b) are and how they are calculated?
(x) & -(x) is equal to the lowest bit set in x when using 2's complement for representing binary numbers.
This means COMP(a) == 0x0002; and COMP(b) == 0x0400;
The "-" sign negates the value of the short parameter in two's complement fashion (in short: turn every 0 into 1 and every 1 into 0, then add 1).
so 0xA55A in binary is 1010 0101 0101 1010
then -(0xA55A) in binary is 0101 1010 1010 0110
ANDing the two gives you 0000 0000 0000 0010
-(x) negates x. Negation in two's complement is the same as ~x+1 (bitflip+1)
as the code below shows:
#include <stdio.h>
#include <stdint.h>
int prbits(uintmax_t X, int N /* how many bits to print (from the low end) */)
{
    int n = 0;
    uintmax_t shift = (uintmax_t)1 << (N - 1);
    for (; shift; n++, shift >>= 1)
        putchar((X & shift) ? '1' : '0');
    return n;
}

int main()
{
    prbits(0xA55A, 16), puts("");
    //1010010101011010
    prbits(~0xA55A, 16), puts("");
    //0101101010100101
    prbits(~0xA55A + 1, 16), puts("");
    //0101101010100110
    prbits(-0xA55A, 16), puts("");
    //0101101010100110 (same)
}
When you bitand a value with its bitfliped value, you get 0. When you bitand a value with its bitfliped value + 1 (=its negated value) you get the first nonzero bit from the right.
Why? If the rightmost bit of ~x is 1, adding 1 turns it into 0 with a carry of 1. This repeats while the rightmost bits are 1, zeroing them out. Once the addition hits a 0 in ~x (which is a 1 in x, since every bit is flipped), that 0 becomes 1 and the carry stops. To the right of that position you have zeros; to the left you have the flipped bits. ANDing this with the original value therefore leaves exactly the first nonzero bit from the right.
Basically, what COMP does is AND the two operands, one in its original form and one negated.
CPUs typically handle signed numbers using two's complement, which splits the range of a numeric type in half: positive values from 0 up to 2^(n-1) - 1, and negative values down to -2^(n-1).
The MSB (left-most bit) represents the sign of the numeric data,
e.g.
0111 -> +7
0110 -> +6
0000 -> +0
1111 -> -1
1110 -> -2
1010 -> -6
So what COMP does, by ANDing the positive and negative versions of the numeric data, is isolate the lowest set bit, i.e. the first 1 from the right.
I wrote some sample code that can help you understand here:
http://coliru.stacked-crooked.com/a/935c3452b31ba76c
How do I subtract IEEE 754 numbers?
For example: 0,546875 - 32.875...
-> 0,546875 is 0 01111110 10001100000000000000000 in IEEE-754
-> -32.875 is 1 10000111 01000101111000000000000 in IEEE-754
So how do I do the subtraction? I know I have to make both exponents equal, but what do I do after that? Take the 2's complement of the -32.875 mantissa and add it to the 0.546875 mantissa?
Really not any different from how you do it with pencil and paper. Okay, a little different:
123400 - 5432 = 1.234*10^5 - 5.432*10^3
The bigger number dominates; shift the smaller number's mantissa right (off into the bit bucket) until the exponents match:
1.234*10^5 - 0.05432*10^5
then perform the subtraction with the mantissas
1.234 - 0.05432 = 1.17968
1.17968 * 10^5
Then normalize (which in this case it is)
That was with base 10 numbers.
In IEEE float, single precision
123400 = 0x1E208 = 0b11110001000001000
11110001000001000.000...
To normalize that, we shift the binary point 16 places to the left, so
1.1110001000001000 * 2^16
The exponent is biased, so we add 127 to 16 and get 143 = 0x8F. It is a positive number, so the sign bit is 0. Now we build the IEEE floating-point number: the leading 1 before the binary point is implied and not stored in single precision, so we drop it and keep the fraction.
sign bit, exponent, mantissa
0 10001111 1110001000001000...
0100011111110001000001000...
0100 0111 1111 0001 0000 0100 0...
0x47F10400
And if you write a program to see what a computer thinks 123400 is, you get the same thing:
0x47F10400 123400.000000
So we know the exponent and mantissa for the first operand.
Now the second operand
5432 = 0x1538 = 0b0001010100111000
Normalize: shift the binary point 12 places to the left
1010100111000.000
1.010100111000000 * 2^12
The exponent is biased add 127 and get 139 = 0x8B = 0b10001011
Put it all together
0 10001011 010100111000000
010001011010100111000000
0100 0101 1010 1001 1100 0000...
0x45A9C000
And a computer program/compiler gives the same
0x45A9C000 5432.000000
Now to answer your question. Using the component parts of the floating-point numbers (I have restored the implied 1 here because we need it):
0 10001111 111100010000010000000000 - 0 10001011 101010011100000000000000
We have to line up our binary points, just like in grade school, before we can subtract. In this context you shift the number with the smaller exponent right, tossing mantissa bits off the end, until the exponents match:
0 10001111 111100010000010000000000 - 0 10001011 101010011100000000000000
0 10001111 111100010000010000000000 - 0 10001100 010101001110000000000000
0 10001111 111100010000010000000000 - 0 10001101 001010100111000000000000
0 10001111 111100010000010000000000 - 0 10001110 000101010011100000000000
0 10001111 111100010000010000000000 - 0 10001111 000010101001110000000000
Now we can subtract the mantissas. If the sign bits match, we actually subtract; if they don't match, we add. They match, so this will be a subtraction.
Computers perform a subtraction using addition logic, inverting the second operand on the way into the adder and asserting the carry-in bit, like this:
1
111100010000010000000000
+ 111101010110001111111111
==========================
And now, just like with paper and pencil, let's perform the add. The top row shows the carries:
 1111000100000111111111111
111100010000010000000000
+ 111101010110001111111111
==========================
111001100110100000000000
or do it with hex on your calculator
111100010000010000000000 = 1111 0001 0000 0100 0000 0000 = 0xF10400
111101010110001111111111 = 1111 0101 0110 0011 1111 1111 = 0xF563FF
0xF10400 + 0xF563FF + 1 = 0x1E66800
1111001100110100000000000 = 1 1110 0110 0110 1000 0000 0000 = 0x1E66800
A little bit about how the hardware works: since this was really a subtract using the adder, we also invert the carry-out bit (or on some computers they leave it as is). So a carry out of 1 here is a good thing; we basically discard it. Had the carry out been 0, we would have needed more work. We don't have a carry out, so our answer is really 0xE66800.
Very quickly, let's see that another way: instead of inverting and adding one, let's just use a calculator.
111100010000010000000000 - 000010101001110000000000 =
0xF10400 - 0x0A9C00 =
0xE66800
By trying to visualize it I perhaps made it worse. The result of subtracting the mantissas is 111001100110100000000000 (0xE66800). There was no movement in the most significant bit; we end up with a 24-bit number whose msbit is 1, so no normalization is needed. To normalize, you shift the mantissa left or right until the most significant 1 lands in the left-most of the 24 bit positions, adjusting the exponent for each bit shifted.
Now, stripping the leading 1. off the answer, we put the parts together:
0 10001111 11001100110100000000000
01000111111001100110100000000000
0100 0111 1110 0110 0110 1000 0000 0000
0x47E66800
If you have been following along by writing a program to do this, I did as well. This program violates the C standard by using a union for type punning. I got away with it with my compiler on my machine; don't expect it to work all the time.
#include <stdio.h>

union
{
    float f;
    unsigned int u;
} myun;

int main(void)
{
    float a, b, c;
    a = 123400;
    b = 5432;
    c = a - b;
    myun.f = a; printf("0x%08X %f\n", myun.u, myun.f);
    myun.f = b; printf("0x%08X %f\n", myun.u, myun.f);
    myun.f = c; printf("0x%08X %f\n", myun.u, myun.f);
    return 0;
}
And our result matches the output of the above program; we got 0x47E66800 doing it by hand:
0x47F10400 123400.000000
0x45A9C000 5432.000000
0x47E66800 117968.000000
If you are writing a program to synthesize floating-point math, your program can perform the subtract directly; you don't have to do the invert-and-add-one trick, which over-complicates things as we saw above. If you get a negative result, though, you need to fix up the sign bit, invert your result, and then normalize.
So:
1) Extract the parts: sign, exponent, mantissa.
2) Align your binary points by sacrificing mantissa bits from the number with the smaller exponent: shift that mantissa to the right until the exponents match.
3) This being a subtract operation, if the sign bits are the same you subtract the mantissas; if the sign bits differ you add the mantissas.
4) If the result is zero, encode the IEEE value for zero as the result. Otherwise:
5) Normalize the number: shift the answer right or left (the result of a 24-bit add/subtract can be 25 bits wide, and a subtract can require a dramatic shift, either one bit right or many bits left) until you have a 24-bit number with its most significant 1 left-justified; 24 bits is for single-precision float. The more precise way to describe normalizing is to shift left or right until the number resembles 1.something: if you had 0.001 you would shift left 3; if you had 11.10 you would shift right 1. A left shift of the mantissa decreases your exponent; a right shift increases it. No different from when we converted the integers to float above.
6) For single precision, remove the leading "1." from the mantissa. If the exponent has overflowed, you get into encoding an infinity. If the sign bits were different and you performed an add, you have to work out the sign bit of the result. If, as above, everything is fine, you just place the sign bit, exponent, and mantissa into the result.
Multiply and divide is different, you asked about subract, so that is all I covered.
I'm presuming 0,546875 means 0.546875.
Firstly, to correct/clarify:
0 01111110 10001100000000000000000 = 0011 1111 0100 0110 0000 0000 0000 0000 =
0x3F460000 in IEEE-754 is 0.77343750, not 0.546875.
0.546875 in IEEE-754 is 0x3F0C0000 = 0011 1111 0000 1100 0000 0000 0000 0000 =
0 01111110 00011000000000000000000 = 1 x 1.00011 x 2^(01111110 - 127) =
1.00011 x 2^(126 - 127) = 1.00011 x 2^-1 = (1 + 1/16 + 1/32) x 1/2.
1 10000111 01000101111000000000000 = 1100 0011 1010 0010 1111 0000 0000 0000 =
0xc3a2f000 in IEEE-754 is -325.87500, not -32.875.
-32.875 in IEEE-754 is 0xC2038000 = 1100 0010 0000 0011 1000 0000 0000 0000 =
1 10000100 00000111000000000000000 = -1 x 1.00000111 x 2^(10000100 - 127) =
-1.00000111 x 2^(132 - 127) = -1.00000111 x 2^5 = (1 + 1/64 + 1/128 + 1/256) x -32.
32.875 in IEEE-754 is 0x42038000 = 0100 0010 0000 0011 1000 0000 0000 0000 =
0 10000100 00000111000000000000000 = 1 x 1.00000111 x 2^(10000100 - 127) =
1.00000111 x 2^(132 - 127) = 1.00000111 x 2^5 = (1 + 1/64 + 1/128 + 1/256) x 32.
The subtraction is carried out as follows:
1.00011000 x 1/2
- 1.00000111 x 32
------------------
==>
0.00000100011 x 32
- 1.00000111000 x 32
---------------
==>
-1 x (
1.00000111000 x 32
- 0.00000100011 x 32
---------------
)
==>
-1 x (
1.00000110112 x 32 // borrow: 1000 rewritten as 0112, where the digit "2" counts two 1s in that place
- 0.00000100011 x 32
---------------
)
==>
-1 x (
1.00000110112 x 32
- 0.00000100011 x 32
---------------
1.00000010101 x 32
)
==>
-1.00000010101 x 32 =
-1.00000010101000000000000 x 32 =
-1.00000010101000000000000 x 2^5 =
-1.00000010101000000000000 x 2^(132 - 127) =
-1.00000010101000000000000 x 2^(10000100 - 127)
==>
1 10000100 00000010101000000000000 =
1100 0010 0000 0001 0101 0000 0000 0000 =
0xc2015000
Note that in this example we did not need to handle underflow, which is more complicated.
I have a sample question from a test at my school. What is the simplest way to solve it on paper?
The question:
Run-time system uses two's complement for representation of integers. Data type int has size 32 bits, data type short has size 16 bits. What does printf show? (The answer is ffffe43c)
short int x = -0x1bc4; /* !!! short */
printf ( "%x", x );
Let's do it in two steps: 1bc4 = 1bc3 + 1.
First we do this at the full width:
0 - 1 = ffffffff
then
ffffffff - 1bc3
This can be done digit by digit:
  ffffffff
- 00001bc3
----------
  ffffe43c
which is the result you have.
Since your x is negative, it is stored as the two's complement of its magnitude, which yields:
2's(-x) = ~(x) + 1
2's(-0x1BC4) = ~(0x1BC4) + 1 => 0xE43C
0x1BC4 = 0001 1011 1100 0100
~0X1BC4 =1110 0100 0011 1011
+1 = [1]110 0100 0011 1100 (brackets around MSB)
which is how your number is represented internally.
Now %x expects a 32-bit int, so your short is promoted before the call; the promotion sign-extends, copying the MSB into the upper 16 bits of the value, which yields:
1111 1111 1111 1111 1110 0100 0011 1100 == 0xFFFFE43C
I'm trying to print a char array of 4 elements as a float number. The compiler (gcc) won't let me write z.s = {'3','4','j','k'}; in the main() function. Why?
#include <stdio.h>

union n {
    char s[4];
    float x;
};
typedef union n N;

int main(void)
{
    N z;
    z.s[0] = '3';
    z.s[1] = '4';
    z.s[2] = 'j';
    z.s[3] = 'k';
    printf("f=%f\n", z.x);
    return 0;
}
The output of the program above is: f=283135145630880207619489792.000000 , a number that is much larger than a float variable can store; the output should be, in scientific notation, 4.1977085E-8.
So what's wrong?
z.s = {'3','4','j','k'}; would assign one array to another. C doesn't permit that, though you could declare a second array and memcpy it into the first.
The largest finite value that a single-precision IEEE float can store is 3.4028234 × 10^38, so 283135145630880207619489792.000000, which is approximately 2.8313514 × 10^26 is most definitely in range.
Assuming your chars are otherwise correct, the knee-jerk guess would be that you've got your endianness wrong.
EDIT:
34jk if taken from left to right, as on a big-endian machine is:
0x33 0x34 0x6a 0x6b
= 0011 0011, 0011 0100, 0110 1010, 0110 1011
So:
sign = 0
exponent = 0110 0110 = 102 (dec), or -25 after subtracting the bias of 127
mantissa = [1] 011 0100 0110 1010 0110 1011 = 11823723 / (2^23)
So the value would be about 4.2 × 10^-8, which is what you want.
In little endian:
0x6b 0x6a 0x34 0x33
= 0110 1011, 0110 1010, 0011 0100, 0011 0011
sign = 0
exponent = 1101 0110 = 214 (dec) => 87 after subtracting the bias
mantissa = [1]110 1010 0011 0100 0011 0011 = 15348787 / (2^23)
So the value would be about 2.8 * 10^26, which is what your program is outputting. It's a safe conclusion you're on a little endian machine.
Summary, then: byte order differs between machines. You want to use your bytes the other way around; try "kj43".
What you actually see is {'k', 'j', '4', '3'}.