I am new to C. My question: why do some numbers end up negative when the value was positive?
Here is an explanation for the output you are seeing.
1000 in binary is 1111101000 (10 bits) and is stored in an int (signed, 32 bits).
When you cast that to an unsigned char (which has 8 bits), the top bits get "cut off".
So you get 11101000, which is 232 in decimal.
As a (signed) char, the same bits are interpreted as a negative number because the top (sign) bit is set; in this case the value is -24.
The remaining seven bits, 1101000, are 104 in decimal.
In two's complement the most significant bit has weight -128 rather than +128, so the computer evaluates 104 - 128 = -24.
(See https://en.wikipedia.org/wiki/Two%27s_complement)
A long has at least as many bits as an int, so the value does not change.
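Here is a minimal C sketch of the conversions described above; note that converting an out-of-range value to a signed char is implementation-defined before C23, though in practice it wraps modulo 256 exactly as described:

    #include <stdio.h>

    int main(void) {
        int n = 1000;                         /* binary 1111101000 */
        unsigned char uc = (unsigned char)n;  /* keeps the low 8 bits: 11101000 */
        signed char sc = (signed char)n;      /* same bits, read as two's complement */
        long l = n;                           /* widening conversion: value unchanged */
        printf("%u %d %ld\n", uc, sc, l);     /* prints: 232 -24 1000 */
        return 0;
    }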
I understand that to get the two's complement of an integer we first flip the bits and then add one, but I'm having a hard time figuring out Tmax and Tmin.
On an 8-bit machine using two's complement for signed integers, how would I find the maximum and minimum integer values it can hold?
Would Tmax be 01111111 and Tmin be 11111111?
You are close.
The minimum value of a signed integer with n bits is found by making the most significant bit 1 and all the others 0.
The value of this is -2^(n-1).
The maximum value of a signed integer with n bits is found by making the most significant bit 0 and all the others 1.
The value of this is 2^(n-1)-1.
For 8 bits the range is -128...127, i.e., 10000000...01111111.
Read about why this works at Wikipedia.
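As a quick check, here is a small C sketch that computes both bounds for a given width (assuming n is smaller than the bit width of long, so the shifts are well defined):

    #include <stdio.h>

    int main(void) {
        int n = 8;                          /* bit width */
        long tmin = -(1L << (n - 1));       /* 10000000 -> -128 */
        long tmax = (1L << (n - 1)) - 1;    /* 01111111 ->  127 */
        printf("Tmin = %ld, Tmax = %ld\n", tmin, tmax);
        return 0;
    }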
In two's complement representation, the binary pattern for the smallest signed integer is a one bit for the sign followed by all zero bits; it is the pattern that would otherwise have been wasted on a negative zero.
If you split the bit patterns of a type into two groups, one for negative values and one for positive, you end up with two zeros (a negative zero and a positive zero). This seems wasteful, so many designs give that pattern a value instead. What value should it have? Well, it:
has a 1 for the sign bit, implying negative;
has a 1 in the position weighted 2^(width-1) (128, in your example)...
Combining these points, reinterpreting that pattern as -128 makes sense.
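You can verify the reinterpretation directly (a sketch; as above, the conversion to signed char is implementation-defined before C23 but behaves this way on essentially every platform):

    #include <stdio.h>

    int main(void) {
        signed char c = (signed char)0x80;  /* the bit pattern 10000000 */
        printf("%d\n", c);                  /* prints -128 */
        return 0;
    }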
This should be a pretty simple question, but I can't seem to find the answer in my textbook and can't find the right keywords to search for online.
What does it mean when you have a negative sign in front of an unsigned int?
Specifically, if x is an unsigned int equal to 1, what is the bit value of -x?
Per the C standard, arithmetic on unsigned integers is performed modulo 2^(bit width). So, for a 32-bit integer, the negation is taken mod 2^32 = 4294967296.
For a 32-bit number, then, negating a value n gives 0 - n = 4294967296 - n. In your specific case, assuming unsigned int is 32 bits wide, you'd get 4294967296 - 1 = 4294967295 = 0xffffffff (the number with all bits set).
The relevant text in the C standard is in §6.2.5/9:
a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type
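Here is a minimal demonstration (assuming a 32-bit unsigned int; the wraparound itself is guaranteed by the standard regardless of width):

    #include <stdio.h>

    int main(void) {
        unsigned int x = 1;
        unsigned int y = -x;    /* reduced modulo UINT_MAX + 1 */
        printf("%u\n", y);      /* 4294967295 when unsigned int is 32 bits */
        printf("%#x\n", y);     /* 0xffffffff: all bits set */
        return 0;
    }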
It will wrap around past zero, i.e. if your int is 16 bits, -x will be 65535. The bit value will be 1111111111111111 (16 ones).
If int is 32 bits, -x will be 4294967295.
When you apply the "-", the two's complement of the integer is stored in the variable. See here for details.
I am trying to work my way through binary numbers and normalized values. I am confused because we were taught that numbers are represented as 8-bit values. We would do examples with 8 bits where 1 bit was the sign, the next 4 bits were the exponent, and the last 3 were for the number. It was going OK; then we jumped to 32-bit numbers, where 1 bit is the sign, the next 8 are the exponent, and the final 23 are the remaining number.
My question is: why the different representations? Sometimes numbers are 8 bits, sometimes 32 bits; why not make them 3 bits and 13 bits, or 40 bits and 64 bits? There appears to be no rhyme or reason. Are we dealing with 8 bits when we talk about numbers, or 32? Here is an example:
https://www.youtube.com/watch?v=vi5RXPBO-8E
Any help explaining would be appreciated. Right now I don't know whether I should study the material based on 8 bits, or on 32 bits with the first bit the sign, the next 8 the exponent, and the last 23 the actual number. Very confused.
I assume you were taught how floating point numbers are represented using 8 bits because it's much easier to do the math with smaller numbers; however, you can only represent so many numbers with 8 bits (256 different numbers, to be exact).
As you said, you learned how floating point numbers work with 8 bits:
SEEEENNN, where S is the sign bit, E is an exponent bit and N is a number/significand bit.
The sign of the number is simply -1 raised to the sign bit.
The exponent is the exponent bits interpreted as a signed integer, or as an unsigned integer minus the representation's bias.
The significand is 1 + SUM from i = 1 to p of N[i]*2^(-i), where p is the precision of the number (the number of significand bits).
The value can then be computed as:
(-1)^S * 2^(exponent) * significand
As a more concrete example (exponent bias of 2^(4-1)-1 = 7):
0 1000 101
s = 0
exponent = 8 - 7 = 1
significand = 1 + 0.5 * 1 + 0.25 * 0 + 0.125 * 1 = 1.625
value = (-1)^0 * 1.625 * 2^1 = 3.25
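To check that arithmetic mechanically, here is a small C sketch that decodes this toy 8-bit format (an assumption on my part: it handles normalized numbers only, ignoring subnormals, infinities and NaN):

    #include <stdio.h>
    #include <math.h>

    /* Decode the toy 8-bit float described above: 1 sign bit,
       4 exponent bits (bias 7), 3 significand bits. */
    double decode_toy_float(unsigned char b) {
        int s = (b >> 7) & 0x1;
        int e = (b >> 3) & 0xF;
        int n = b & 0x7;
        double significand = 1.0 + n / 8.0;          /* 1 + N * 2^-3 */
        return (s ? -1.0 : 1.0) * ldexp(significand, e - 7);
    }

    int main(void) {
        printf("%g\n", decode_toy_float(0x45));      /* 0 1000 101 -> 3.25 */
        return 0;
    }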
The same representation scheme can be applied to any number of bits; in that regard it is fairly arbitrary to choose 8 or 32.
32 and 64 bits are often chosen to represent floating point numbers in binary format because they are powers of 2, are easily stored in memory (a whole number of bytes), and computer ALUs are designed to work with 32/64-bit numbers.
In C, a float is typically 32 bits and a double 64 bits; this is not fixed by the standard, but most implementations use the IEEE-754 single and double precision formats.
You can read more on IEEE-754 floating point representation. Wikipedia has a good explanation of how it works here.
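If you want to see the 32-bit pattern for yourself, here is a minimal sketch, assuming float is an IEEE-754 single on your platform (true almost everywhere):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float f = 3.25f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);     /* a well-defined way to view the bits */
        /* 0x40500000 = 0 10000000 10100000000000000000000:
           sign 0, exponent 128 - 127 = 1, significand 1.101 (binary) = 1.625 */
        printf("0x%08x\n", (unsigned)bits);
        return 0;
    }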
I'm having a hard time grasping data types in C. I'm going through a C book and one of the challenges asks what the maximum and minimum numbers a short can store.
Using sizeof(short), I can see that a short consumes 2 bytes. That means it's 16 bits, which means two numbers since it takes 8 bits to store the binary representation of a number. For example, 9 would be 00001001, which fills up one byte. So would it not be 0 to 99 unsigned, and -9 to 9 signed?
I know I'm wrong, but I'm not sure why. It says here the maximum is (-)32,767 for signed, and 65,535 for unsigned.
short int, 2 bytes, 16 bits, range -32,768 to +32,767
Think in decimal for a second. If you have only 2 digits for a number, that means you can store from 00 to 99 in them. If you have 4 digits, that range becomes 0000 to 9999.
A binary number is similar to decimal, except the digits can be only 0 and 1, instead of 0, 1, 2, 3, ..., 9.
If you have a number like this:
01011101
This is:
0*128 + 1*64 + 0*32 + 1*16 + 1*8 + 1*4 + 0*2 + 1*1 = 93
So as you can see, you can store bigger values than 9 in one byte. In an unsigned 8-bit number, you can actually store values from 00000000 to 11111111, which is 255 in decimal.
In a 2-byte number, this range becomes from 00000000 00000000 to 11111111 11111111 which happens to be 65535.
Your statement "it takes 8 bits to store the binary representation of a number" is like saying "it takes 8 digits to store the decimal representation of a number", which is not correct. For example, the number 12345678901234567890 has more than 8 digits. In the same way, you cannot fit all numbers in 8 bits, but only 256 of them. That's why you get 2-byte (short), 4-byte (int) and 8-byte (long long) numbers. If you need an even higher range of numbers, you would need to use a big-number library.
As far as negative numbers are concerned, on a two's-complement computer they are just a convention that uses the upper half of the range for negative values. This means the numbers that have a 1 in the leftmost bit are considered negative.
Nevertheless, these numbers are congruent modulo 256 (modulo 2^n for n bits) to the positive value their bits suggest. For example, 11111111 is 255 if unsigned and -1 if signed, and 255 and -1 are congruent modulo 256.
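A two-line illustration of this congruence (a sketch; converting 255 to signed char is implementation-defined before C23, but wraps modulo 256 on essentially every platform):

    #include <stdio.h>

    int main(void) {
        unsigned char u = 255;            /* bit pattern 11111111 */
        signed char s = (signed char)u;   /* same bits read as two's complement */
        printf("%u %d\n", u, s);          /* prints: 255 -1 */
        return 0;
    }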
The reference you read is correct. At least, for the usual C implementations where short is 16 bits - that's not actually fixed in the standard.
16 bits can hold 2^16 possible bit patterns, that's 65536 possibilities. Signed shorts are -32768 to 32767, unsigned shorts are 0 to 65535.
These limits are defined in <limits.h> as SHRT_MIN and SHRT_MAX (and USHRT_MAX for unsigned short).
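So rather than computing the bounds by hand, you can just print them (a minimal sketch using the standard <limits.h> macros):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        printf("signed short:   %d to %d\n", SHRT_MIN, SHRT_MAX);
        printf("unsigned short: 0 to %u\n", (unsigned)USHRT_MAX);
        return 0;
    }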
Others have posted pretty good solutions for you, but I don't think they have followed your thinking and explained where you were wrong. I will try.
I can see that a short consumes 2 bytes. That means it's 16 bits,
Up to this point you are correct (though short is not guaranteed to be 2 bytes long, just as int is not guaranteed to be 4; the only size guaranteed by the standard, if I remember correctly, is char, which is always 1 byte wide).
which means two numbers since it takes 8 bits to store the binary representation of a number.
From here you started to drift a bit. It doesn't take 8 bits to store every number. Depending on the number, it may take 16, 32, 64 or even more bits to store it. Dividing your 16 bits into two is wrong. If not for CPU implementation specifics, we could have had, for example, 2-bit numbers. In that case, those two bits could store these values:
00 - 0 in decimal
01 - 1 in decimal
10 - 2 in decimal
11 - 3 in decimal
To store 4, we need 3 bits, so the value would "not fit", causing an overflow. The same applies to 16-bit numbers. For example, say we have the unsigned value 255 stored in 16 bits; the binary representation is 0000000011111111. When you add 1 to that number, it becomes 0000000100000000 (256 in decimal). If you had only 8 bits, it would wrap around to 0, because the most significant bit would be discarded.
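Here is that wraparound in two lines of C (a sketch assuming an 8-bit unsigned char, which it is on virtually all machines):

    #include <stdio.h>

    int main(void) {
        unsigned char c = 255;  /* 11111111, all 8 bits set */
        c = c + 1;              /* the ninth bit is discarded on conversion back to 8 bits */
        printf("%u\n", c);      /* prints 0 */
        return 0;
    }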
Now, the maximum unsigned number you can fit in 16 bits of memory is 1111111111111111, which is 65535 in decimal. In other words, for unsigned numbers, set all bits to 1 and that yields the maximum possible value.
For signed numbers, however, the most significant bit represents the sign: 0 for positive and 1 for negative. The minimum (most negative) value is 1000000000000000, which is -32768 in base 10. The rules for signed binary representation are well described here.
Hope it helps!
The formula for the number of distinct values any unsigned binary type can represent:
2^(sizeof(type)*8)
The maximum value it can hold is one less than that.
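For example, in C (a sketch assuming 8-bit bytes and a type narrower than 64 bits, so the shift fits in an unsigned long long):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* number of distinct values an unsigned short can represent */
        unsigned long long count = 1ULL << (sizeof(unsigned short) * CHAR_BIT);
        printf("%llu values, maximum %llu\n", count, count - 1);  /* 65536, 65535 */
        return 0;
    }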