Tmax and Tmin of two's complement [duplicate] - c

I understand that to get the two's complement of an integer we first flip the bits and add one, but I'm having a hard time figuring out Tmax and Tmin.
On an 8-bit machine using two's complement for signed integers, how would I find the maximum and minimum integer values it can hold?
Would Tmax be 01111111 and Tmin be 11111111?

You are close.
The minimum value of a signed integer with n bits is found by making the most significant bit 1 and all the others 0.
The value of this is -2^(n-1).
The maximum value of a signed integer with n bits is found by making the most significant bit 0 and all the others 1.
The value of this is 2^(n-1)-1.
For 8 bits the range is -128...127, i.e., 10000000...01111111.
Read about why this works at Wikipedia.
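To make those formulas concrete, here is a small sketch in C (the helper names tmin_for_bits and tmax_for_bits are my own, not from any library) that computes both bounds for a given width n:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helpers: Tmin and Tmax for an n-bit two's-complement
   type (valid for 1 <= n <= 63; int64_t keeps the shifts well defined). */
static int64_t tmin_for_bits(int n) { return -((int64_t)1 << (n - 1)); }
static int64_t tmax_for_bits(int n) { return ((int64_t)1 << (n - 1)) - 1; }

int main(void) {
    for (int n = 8; n <= 32; n += 8)
        printf("%2d bits: Tmin = %lld, Tmax = %lld\n",
               n, (long long)tmin_for_bits(n), (long long)tmax_for_bits(n));
    return 0;   /* for 8 bits this prints Tmin = -128, Tmax = 127 */
}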

Rather than wasting a bit pattern on a negative zero, two's complement gives the smallest signed integer the representation of a one bit for the sign followed by all zero bits.
If you divide the values of an unsigned type into two groups, one for negative and another for positive, you end up with two zeros (a negative zero and a positive zero). This seems wasteful, so many have decided to give that pattern a value. What value should it have? Well, it:
has a 1 for the sign bit, implying negative;
has a 1 for the most significant bit, implying 2^(width-1) (128, in your example)...
Combining these points to reinterpret that value as -128 seems to make sense.
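If you want to watch that reinterpretation happen, one approach that stays well defined in C is to copy the raw byte into an int8_t (the intN_t types are guaranteed to be two's complement); a minimal sketch:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    unsigned char raw = 0x80;   /* the bit pattern 10000000 */
    int8_t value;
    memcpy(&value, &raw, 1);    /* reinterpret the byte, don't convert the value */
    printf("10000000 as int8_t: %d\n", value);   /* prints -128 */
    return 0;
}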

Related

Why Negating An Integer In Two's Complement and Signed Magnitude Can Result In An Overflow?

I don't quite understand; if someone could provide examples to help me understand better, it would be greatly appreciated.
On a system using 2's complement and a 32-bit int, the range of values it can hold is -2147483648 to 2147483647.
If you were to negate the smallest possible int, i.e. -2147483648, the result would be 2147483648 which is out of range.
A sign-and-magnitude system cannot overflow in this way because 1 bit is reserved solely as a sign bit while the remaining bits (assuming no padding) are the value.
You can't get an overflow in sign-magnitude format. Negating the number simply inverts the sign and keeps the magnitude the same.
In two's complement, you get an overflow if you try to negate the most negative value, because there's always 1 more negative value than positive values. For instance, in 8 bits the range of values is -128 to 127. Since there's no 128, you get an overflow if you try to negate -128.
Since 2's complement numbers have no negative zero -- there is only one zero -- the count of negative numbers is one larger than the count of positive numbers.
If you negate the minimum negative number, you get a number that cannot be represented as a positive number of the same length.
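Because of this asymmetry, negating INT_MIN in C is undefined behavior, so code that negates a signed value has to check for that case first. A minimal sketch of such a guard (safe_negate is a hypothetical helper):

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

/* Negate x, reporting failure instead of overflowing on INT_MIN. */
static bool safe_negate(int x, int *out) {
    if (x == INT_MIN)           /* -INT_MIN would be INT_MAX + 1 */
        return false;
    *out = -x;
    return true;
}

int main(void) {
    int r;
    printf("negate 5: %s\n", safe_negate(5, &r) ? "ok" : "overflow");
    printf("negate INT_MIN: %s\n", safe_negate(INT_MIN, &r) ? "ok" : "overflow");
    return 0;
}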

If a C signed integer type is stored in 22 bits, what is the smallest value it can store?

I am learning about data allocation and am a little confused.
If you are looking for the smallest or greatest value that can be stored in a certain number of bits, does it matter what the data type is?
Wouldn't the smallest or biggest number that could be stored in 22 bits be 22 1's, positive or negative? Is the first part of this question a red herring? Wouldn't the smallest value be -4194303?
A 22-bit data element can store any one of 2^22 distinct values. What those values actually mean is a matter of interpretation. That interpretation may be imposed by a compiler or some piece of hardware, or may be under the control of the programmer, and suit some specific application.
A simple interpretation, of course, would be to treat the 22 bits as an unsigned integer, with values from 0 to (2^22)-1. A two's-complement, signed integer is a slightly more sophisticated interpretation of the same bits. Or you (or the compiler, or CPU) could divide the 22 bits up into a mantissa and exponent, and store a range of decimal numbers. The range and precision would depend on how many bits were allocated to the mantissa, and how many to the exponent.
Or you could split the bits up and use some for the numerator and some for the denominator of a fraction. Or, in fact, anything else.
Some of these interpretations of the bits are built into hardware, some are implemented by compilers or libraries, and some are entirely under the programmer's control. Not all programming languages allow the programmer to manipulate individual bits in a natural or efficient way, but some do. Sometimes, using a highly unconventional interpretation of binary data can give significant efficiency gains, but usually at the expense of readability and maintainability.
So, yes, it matters what the data type is.
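As a small illustration of how interpretation changes the number, here is a sketch that reads the same 22 all-one bits first as unsigned and then as 22-bit two's complement (the sign test and the subtraction of 2^22 are one common decoding idiom, not the only one):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t bits = 0x3FFFFF;                    /* twenty-two 1 bits */

    printf("as unsigned: %u\n", (unsigned)bits); /* 4194303 */

    int32_t as_signed = (int32_t)bits;
    if (bits & 0x200000)                 /* bit 21 set: negative in two's complement */
        as_signed -= (int32_t)1 << 22;   /* subtract 2^22 */
    printf("as two's complement: %d\n", (int)as_signed);   /* -1 */
    return 0;
}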
There is no law (of humans, logic, or nature) that says bits must represent numbers only in the pattern that one of the bits represents 2^0, another represents 2^1, another represents 2^2, and so on (and the number represented is the sum of those values for the bits that are 1). We have choices about how to use bits to represent numbers, including:
The bits do use that pattern, and so 22 bits can represent any number from 0 to the sum of 2^0 + 2^1 + 2^2 + … + 2^21 = 2^22 − 1 = 4,194,303. The smallest representable value is 0.
The bits mostly use that pattern, but it is modified so that one bit represents −2^21 instead of +2^21. This is called two's complement, and the smallest value representable is −2^21 = −2,097,152.
The bits represent numbers as described above except the represented value is divided by 1000. This is called fixed-point. In the first case, the value represented by all-one bits would be 4194.303, but the smallest representable value would be 0. With a combination of two's complement and fixed-point scaled by 1/1000, the smallest representable value would be −2097.152.
The bits represent a floating-point number, where one bit represents a sign (+ or −), certain bits represent an exponent and other information, and the remaining bits represent a significand. In common floating-point formats, when all the bits in that exponent-and-other field are 1s and the significand field bits are 0s, the number represents +∞ or −∞, according to the sign bit. In such a format, the smallest representable value is −∞.
As an example, we could designate patterns of bits to represent numbers arbitrarily. We could say that 0000000000000000000000 represents 34, 0000000000000000000001 represents −15, 0000000000000000000010 represents 5, 0000000000000000000011 represents 3+4i, and so on. The smallest representable value would be whichever of those arbitrary values is smallest.
So what the smallest representable value is depends entirely on the type, since the “type” of the data includes the scheme by which the bits represent values.
If the type is a “signed integer type,” there is still some flexibility in the representation. Most modern C implementations (and other programming languages) use the two’s complement scheme described above. But the C standard still allows two other schemes:
One’s complement: If the first bit is 1, the value represented is negative, and its magnitude is given by complementing the remaining bits and interpreting them as binary. Using six bits for an example, 101001 would be negative with the magnitude of 101102 = 22, so −22.
Sign-and-magnitude: If the first bit is 1, the value represented is negative, and its magnitude is given by interpreting the remaining bits as binary. Using the same bits, 101001 would be negative with the magnitude of 01001₂ = 9, so −9.
In both one’s complement and sign-and-magnitude, the smallest representable value with 22 bits is −(221−1) = −2,097,151.
To stretch the question further, C defines standard integer types but allows implementations to extend the language. An implementation could define some “signed integer type” with an arbitrary scheme for representing numbers, as long as that scheme included a sign, to make the name correct.
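To see how far the three standard-sanctioned schemes diverge, here is a sketch with one hypothetical decoder per scheme (the function names are my own), applied to the 22-bit pattern with only the top bit set:

#include <stdio.h>
#include <stdint.h>

#define BITS 22
#define SIGN_BIT ((uint32_t)1 << (BITS - 1))
#define MAG_MASK (SIGN_BIT - 1)

static int32_t decode_twos(uint32_t b) {        /* two's complement */
    return (b & SIGN_BIT) ? (int32_t)(b & MAG_MASK) - (int32_t)SIGN_BIT
                          : (int32_t)b;
}
static int32_t decode_ones(uint32_t b) {        /* one's complement */
    return (b & SIGN_BIT) ? -(int32_t)(~b & MAG_MASK) : (int32_t)b;
}
static int32_t decode_sign_mag(uint32_t b) {    /* sign-and-magnitude */
    return (b & SIGN_BIT) ? -(int32_t)(b & MAG_MASK) : (int32_t)b;
}

int main(void) {
    uint32_t b = SIGN_BIT;                            /* 1000000000000000000000 */
    printf("two's complement:   %d\n", (int)decode_twos(b));      /* -2097152 */
    printf("one's complement:   %d\n", (int)decode_ones(b));      /* -2097151 */
    printf("sign-and-magnitude: %d\n", (int)decode_sign_mag(b));  /* -0, i.e. 0 */
    return 0;
}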
Without going into technical jargon about doing maths with two's complement, I'll try to explain in easy words.
First you need to raise 2 to the power of the number of bits.
Let's take an 8-bit type as an example:
An unsigned 8-bit integer can store 2^8 = 256 values.
Since counting starts from 0, the values range from 0 to 255.
Assuming you want to store signed values, you need to take half of that (simply divide by 2):
256 / 2 = 128.
Remember we start from zero.
You might be thinking you can store -127 to 127, counting from zero on both sides.
But there is only one zero (there is nothing like +0 or -0),
so the positive half starts with zero: 0 to 127.
That leaves the negative half starting from -1 and running to -128.
Hence the range will be -128 to 127.
For a 22 bit signed integer you can do the math,
2 ^ 22 = 4,194,304
4194304 / 2 = 2,097,152
subtract 1 on the positive side to make room for zero,
and the range will be -2097152 to 2097151.
To answer your question,
-2097152 would be the smallest number you can store.
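If you want a compiler to do this 22-bit arithmetic for you, a signed bit-field is one way to model it. Note this is a sketch: exactly how bit-fields behave is partly implementation-defined, but on the usual two's-complement implementations you get the range derived above.

#include <stdio.h>

struct narrow {
    signed int v : 22;   /* a 22-bit signed field */
};

int main(void) {
    struct narrow n;
    n.v = -2097152;      /* smallest value, -(2^21) */
    printf("min: %d\n", n.v);
    n.v = 2097151;       /* largest value, 2^21 - 1 */
    printf("max: %d\n", n.v);
    return 0;
}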
Thanks everyone for the replies. I figured it out with the help of all of your info, but I will explain the answer to show exactly what gaps of knowledge led to my misunderstanding.
The data type does matter in this question, because for signed data types the first bit is used to represent whether a binary number is positive or negative: 0111 = 7, while 1111 is negative (the leftmost bit is read as the sign).
signed int and unsigned int use the same number of bits, 32 bits. Since an unsigned int is unsigned, the first bit isn't used to represent positive or negative, so it can represent a larger number with that extra bit. 1111 converted to an unsigned int is 15, whereas with the signed int the leftmost bit makes it negative: 1 is negative and 0 is positive.
Now to answer "If a C signed integer type is stored in 22 bits, what is the smallest value it can store?":
If you convert binary to decimal, 1111111111111111111111 (22 ones) = 4194303, which is 2^22 - 1.
That is the maximum value a 22-bit unsigned type could hold. Since our data type is signed, it has to give one bit to the sign, leaving 21 bits for the magnitude. This gives us -2097152 as the smallest value.
Thanks again, everyone.

Why the range of int is -32768 to 32767?

Why is the range of any data type greater on the negative side than on the positive side?
For example, in case of integer:
In Turbo C its range is -32768 to 32767 and for Visual Studio it is -2147483648 to 2147483647.
The same happens to other data types.
Because of how numbers are stored. Signed numbers are stored using something called "two's complement notation".
Remember all variables have a certain number of bits. If the most significant one, the one on the left, is a 0, then the number is non-negative (i.e., positive or zero), and the rest of the bits simply represent the value.
However, if the leftmost bit is a 1, then the number is negative. The real value of the number can be obtained by subtracting 2^n from the whole number represented (as an unsigned quantity, including the leftmost 1), where n is the amount of bits the variable has.
Since only n - 1 bits are left for the actual value (the "mantissa") of the number, the possible combinations are 2^(n - 1). For positive/zero numbers, this is easy: they go from 0, to 2^(n - 1) - 1. That -1 is to account for zero itself -- for instance, if you only had four possible combinations, those combinations would represent 0, 1, 2, and 3 (notice how there's four numbers): it goes from 0 to 4 - 1.
For negative numbers, remember the leftmost bit is 1, so the whole number represented goes between 2^(n - 1) and (2^n) - 1 (parentheses are very important there!). However, as I said, you have to take 2^n away to get the real value of the number. 2^(n - 1) - 2^n is -(2^(n - 1)), and ((2^n) - 1) - 2^n is -1. Therefore, the negative numbers' range is -(2^(n - 1)) to -1.
Put all that together and you get -2^(n - 1) to 2^(n - 1) - 1. As you can see, the upper bound gets a -1 that the lower bound doesn't.
And that's why there's one more negative number than positive.
The minimum range required by C is actually -32767 through 32767, because it has to cater for two's complement, ones' complement and sign/magnitude encoding for negative numbers, all of which the C standard allows. See Annex E, Implementation limits of C11 (and C99) for details on the minimum ranges for data types.
Your question pertains only to the two's complement variant, and the reason for that is simple. With 16 bits, you can represent 2^16 (or 65,536) different values, and zero has to be one of those. Hence there are an odd number of values left, of which the majority (by one) are negative values:
1 thru 32767 = 32767 values
0 = 1 value
-1 thru -32768 = 32768 values
-----
65536 values
Both ones' complement and sign-magnitude encoding allow for a negative-zero value (as well as positive-zero), meaning that one less bit pattern is available for the non-zero numbers, hence the reduced minimum range you find in the standard.
1 thru 32767 = 32767 values
0 = 1 value
-0 = 1 value
-1 thru -32767 = 32767 values
-----
65536 values
Two's complement is actually a nifty encoding scheme because positive and negative numbers can be added together with the same simple hardware. Other encoding schemes tend to require more elaborate hardware to do the same task.
For a fuller explanation of how two's complement works, see the Wikipedia page.
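In practice you rarely derive these bounds by hand; <limits.h> publishes the ranges your implementation actually uses. A quick check:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* The standard only mandates minimum magnitudes (e.g. INT_MIN <= -32767);
       a typical two's-complement platform shows the asymmetric ranges. */
    printf("signed char: %d to %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("short:       %d to %d\n", SHRT_MIN, SHRT_MAX);
    printf("int:         %d to %d\n", INT_MIN, INT_MAX);
    return 0;
}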
Because the range includes zero. The number of different values an n-bit integer can represent is 2^n. That means a 16-bit integer can represent 65536 different values. If it's an unsigned 16-bit integer, it can represent 0-65535 (inclusive). The convention for signed integers is to represent -32768 to 32767, -2147483648 to 2147483647, etc.
With 2's complement, negative numbers are defined as the bitwise NOT plus 1. Since this leaves no negative zero, there is one extra value on the negative side of the range.
Ordinarily, because a two's complement system is used for storing negative values, the representable range of an integer is biased toward the negative.
The range is -(2^(n-1)) to (2^(n-1)) - 1.

Overflow in bitwise subtraction using two's complement

When performing bitwise subtraction using two's complement, how does one know when the overflow should be ignored? Several websites I read stated that the overflow is simply ignored, but that does not always work -- the overflow is necessary for problems like -35 - 37, as an extra digit is needed to express the answer of -72.
EDIT: Here's an example, using the above equation.
35 to binary -> 100011, find two's complement to make it negative: 011101
37 to binary -> 100101, find two's complement to make it negative: 011011
Perform addition of above terms (binary equivalent of -35 - 37):
011101
011011
------
111000
Take two's complement to convert back to positive: 001000
The above is what many websites (including academic ones) say the answer should be, as you ignore overflow. This is clearly incorrect, however.
An overflow happens when the result cannot be represented in the target data type. The value -72 can be represented in a char, which is a signed 8-bit quantity... there is no overflow in your example. Perhaps you are thinking about a borrow while doing bitwise subtraction... when you subtract a '1' from a '0' you need to borrow from the next higher order bit position. You cannot ignore borrows when doing subtraction.
-35 decimal is 11011101 in two's complement 8-bit
+37 decimal is 00100101 in two's complement 8-bit
going right to left from least significant to most significant bit you can subtract each bit in +37 from each bit in -35 until you get to bit 5 (counting starts at bit 0 on the right). At bit position 5 you need to subtract '1' from '0' so you need to borrow from bit position 6 (the next higher order bit) in -35, which happens to be a '1' prior to the borrow. The result looks like this
-35 decimal is 11011101 in two's complement 8-bit
+37 decimal is 00100101 in two's complement 8-bit
--------
-72 decimal is 10111000 in two's complement 8-bit
The result is negative, and your result in 8-bit two's complement has the high order bit set (bit 7)... which is negative, so there is no overflow.
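The same computation in C, confined to 8 bits with int8_t (a sketch; -72 is in range for int8_t, so the conversion back is exact):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t a = -35;                 /* 11011101 in two's complement */
    int8_t b =  37;                 /* 00100101 */
    int8_t diff = (int8_t)(a - b);  /* -72 fits in 8 bits: no overflow */
    printf("%d - %d = %d\n", a, b, diff);   /* prints -35 - 37 = -72 */
    return 0;
}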
Update: I think I see where the confusion is, and I claim that the answer at "Adding and subtracting two's complement" is wrong when it says you can discard the carry (that the carry indicates overflow). In that answer they do subtraction by converting the second operand to negative using two's complement and then adding. That's fine -- but a carry doesn't represent overflow in that case.
If you add two positive numbers in N bits (numbered 0 to N-1), and you consider this unsigned arithmetic with range 0 to (2^N)-1, and you get a carry out of bit position N-1, then you have overflow: the sum of two positive numbers (interpreted as unsigned to maximize the range of representable positive numbers) should not generate a carry out of the highest order bit (bit N-1). So when adding two positive numbers you identify overflow by saying
there must be no carry out of bit N-1 when you interpret them as unsigned and
the result in bit N-1 must be zero when interpreted as signed (two's complement)
Note, however, that processors don't distinguish between signed and unsigned addition/subtraction... they set the overflow flag to indicate that if you are interpreting your data as signed then the result could not be represented (is wrong).
Here is a very detailed explanation of the carry and overflow flags. The takeaway from that article is this:
In unsigned arithmetic, watch the carry flag to detect errors.
In unsigned arithmetic, the overflow flag tells you nothing interesting.
In signed arithmetic, watch the overflow flag to detect errors.
In signed arithmetic, the carry flag tells you nothing interesting.
This is consistent with the definition of arithmetic overflow in Wikipedia which says
Most computers distinguish between two kinds of overflow conditions. A carry occurs when the result of an addition or subtraction, considering the operands and result as unsigned numbers, does not fit in the result. Therefore, it is useful to check the carry flag after adding or subtracting numbers that are interpreted as unsigned values. An overflow proper occurs when the result does not have the sign that one would predict from the signs of the operands (e.g. a negative result when adding two positive numbers). Therefore, it is useful to check the overflow flag after adding or subtracting numbers that are represented in two's complement form (i.e. they are considered signed numbers).
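Portable C code cannot read the CPU's flags directly, but you can test for signed overflow before it happens. A minimal sketch of a pre-check against <limits.h> (add_overflows is a hypothetical helper):

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

/* Would a + b overflow a signed int? Checked before adding, because the
   overflow itself would be undefined behavior. */
static bool add_overflows(int a, int b) {
    if (b > 0) return a > INT_MAX - b;   /* too far positive */
    if (b < 0) return a < INT_MIN - b;   /* too far negative */
    return false;
}

int main(void) {
    printf("INT_MAX + 1: %s\n", add_overflows(INT_MAX, 1) ? "overflow" : "ok");
    printf("-35 + -37:   %s\n", add_overflows(-35, -37)   ? "overflow" : "ok");
    return 0;
}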

What does signed and unsigned values mean?

What does signed mean in C? I have this table to show:
This says signed char: 128 to +127. But 128 is also a positive integer, so how can the range be something like +128 to +127? Or do 128 and +127 have different meanings? I am referring to the book Apress Beginning C.
A signed integer can represent negative numbers; unsigned cannot.
Signed integers have undefined behavior if they overflow, while unsigned integers wrap around modulo 2^N (where N is the number of bits).
Note that that table is incorrect. First off, it's missing the - signs (such as -128 to +127). Second, the standard does not guarantee that those types must fall within those ranges.
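A quick way to see that difference: unsigned arithmetic is defined to wrap around, while the signed equivalent is not defined at all. A sketch:

#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int u = UINT_MAX;
    u = u + 1;                 /* well defined: wraps modulo UINT_MAX + 1, giving 0 */
    printf("UINT_MAX + 1 = %u\n", u);

    /* int i = INT_MAX; i = i + 1;   <- this would be undefined behavior */
    return 0;
}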
By default, numerical values in C are signed, which means they can be both negative and positive. Unsigned values, on the other hand, don't allow negative numbers.
Because it's all just about memory: in the end, all numerical values are stored in binary. A 32-bit unsigned integer can contain values from all binary 0s to all binary 1s. With a 32-bit signed integer, one of its bits (the most significant) is a flag that marks the value as positive or negative. So it is a matter of interpretation whether a value is signed.
Positive signed values are stored the same way as unsigned values, but negative numbers are stored using two's complement method.
If you want to write a negative value in binary, first write the positive number, then invert all the bits, and finally add 1. When a negative value in two's complement is added to a positive number of the same magnitude, the result will be 0.
In the example below, let's deal with 8-bit numbers, because they're simple to inspect:
positive 95: 01011111
negative 95: invert -> 10100000, add 1 -> 10100001 (read as unsigned: 161)
0: 01011111 + 10100001 = 100000000
                         ^
                         |___ since we're dealing with 8-bit numbers, this ninth
                              bit is dropped, and the remaining 8 bits are 0
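The same cancellation written in C, using uint8_t so the carry out of bit 7 is discarded automatically (a sketch):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t pos = 0x5F;                  /* 01011111 = 95 */
    uint8_t neg = (uint8_t)(~pos + 1);   /* invert and add 1: 10100001 */
    uint8_t sum = (uint8_t)(pos + neg);  /* the ninth bit falls off, leaving 0 */
    printf("negated pattern: 0x%02X, sum: %u\n", neg, sum);
    return 0;
}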
The table is missing the minuses. The range of signed char is -128 to +127; likewise for the other types on the table.
It was a typo in the book; signed char goes from -128 to 127.
Signed integers are usually stored using the two's complement representation, in which the first bit is used to indicate the sign.
In C, chars are (typically) 8-bit integers. This means that they can go from -(2^7) to 2^7 - 1: the last 7 bits are used for the magnitude and the first bit for the sign, where 0 means positive and 1 means negative (in two's complement representation).
The biggest positive value is (01111111)b = 2^7 - 1 = 127.
The smallest negative value is (10000000)b = -128
(because in two's complement the leading bit of 10000000 contributes -2^7 = -128).
Unsigned chars don't have signs, so they can use all 8 bits, going from (00000000)b = 0 to (11111111)b = 255.
Signed numbers are those that have either + or - attached to them.
E.g., +2 and -6 are signed numbers.
Signed numbers can store both positive and negative numbers, so their range is shifted downward:
i.e., -32768 to 32767 (for 16 bits).
Unsigned numbers are simply numbers with no sign; they are always non-negative, and their 16-bit range is 0 to 65535.
Hope it helps
Signed usually means the number can have a + or - symbol in front of it. This means that unsigned int, unsigned short, etc. cannot be negative.
Nobody mentioned this, but the range of int in the table is wrong:
it is
-2^(31) to 2^(31)-1
i.e.,
-2,147,483,648 to 2,147,483,647
A signed integer can have both negative and positive values, while an unsigned integer can only have non-negative values.
For signed integers using two's complement, which is most commonly used, the range (depending on the bit width of the integer) is:
signed char s -> range -128 to 127
whereas an unsigned char has the range:
unsigned char s -> range 0 to 255
First, your table is wrong... negative numbers are missing. Referring to the type char: you can represent 256 possibilities in all, since char is one byte, i.e., 2^8. So you have two alternatives for setting the range: either -128 to +127 or 0 to 255. The first is a signed char, the second an unsigned char. If you are using integers, be aware of what kind of platform you are using: int may be 16, 32, or 64 bits, while char is always just an 8-bit value.
It means that there will likely be a sign (a symbol) in front of your value (+12345 or -12345).
