I am writing a function with parameters:
int nbit2s(long int x, long int n){
}
I am looking to take in a 64-bit number x and find whether a two's complement representation of n-bit length is possible. I am, however, restricted to using only bitwise operators, and excluded from using operators such as >= and <= and from conditional statements.
For example, nbit2s(5,3) returns 0 because the representation is impossible.
I'm not looking for any code but just for ideas, so far my idea has been:
1. Take in the number x and consider its binary representation.
2. Left-shift the binary representation by 64 - n to move the low n bits to the top, and store that in a variable shift.
3. Right-shift by 64 - n to bring the leading bit back down, storing the result in shift.
4. XOR the original number with shift; if 1 then TRUE, 0 then FALSE.
I feel like this is along the right lines, but if someone could explain a better way to do it, or point out any mistakes I may have made, that would be great.
If I am understanding your question correctly, you would like to determine whether the integer x can be represented as a binary number with n or fewer bits.
An easy way to test for this is to use the bitwise AND operator to mask out all bits above a maximum value, and then see if the masked integer is the same as the original.
Example:
bool nbit2s(uint64_t x, int n) {
return (x & ((1ULL << n) - 1)) == x;
}
A solution for positive numbers would be to just shift right by n. If it's still non-zero, then it needs more than n bits to represent it.
But since the argument is signed and you're talking about two's complement representation, you'll probably have to cast it to an unsigned type to avoid sign extension.
Assignment operators are not something I expected to struggle with. Everything in this section so far is familiar, but the way they explain it makes it seem foreign. I think it's the bitwise operators I'm confused about.
Anyway here is the part I don't get.
The function bitcount counts the number of 1-bits in its integer argument.
/* bitcount: count 1 bits in x */
int bitcount(unsigned x)
{
    int b;
    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}
Declaring the argument x to be unsigned ensures that when it is right-shifted, vacated bits will be filled with zeros, not sign bits, regardless of the machine the program is run on.
Not that I really understand the code, but this last sentence is confusing me the most. What does this mean? Does it mean "ones" by "sign bits" and does this have to do with twos complement?
I really had trouble getting through the bitwise operations. Can anyone recommend some comprehensive material to supplement the bitwise stuff in this book?
Suppose x is two's complement. For illustration, let's use an eight-bit width. (C always shifts in a wider width, at least that of int.)
If x is 64 (binary 01000000) and we shift it right one bit, we get 32 (00100000). When we repeat that, we get 16 (00010000), 8 (00001000), 4 (00000100), 2 (00000010), and 1 (00000001).
When x is −64, two's complement represents it with 11000000. If we shift that right and bring in a 0 bit, the bits are 01100000, which represents 64+32 = 96. So −64 becomes 96. Now, that might be fine if one were just working with bits. But, if we want to use bit-shifting to divide numbers, we want −32, not 96. The bits for −32 are 11100000. So we can get that by bringing in a 1 bit instead of a 0 bit. And the bits for −16 are 11110000. Then −8, −4, −2, and −1 are 11111000, 11111100, 11111110, 11111111. In each case, we bring in a 1 bit when shifting.
So, if we want to use bit-shifting for division, there is a simple rule for a one-bit shift: Shift all bits right one spot and leave the sign bit as is (0 or 1). If we are shifting more than one bit, keep copying the sign bit.
That is called an arithmetic right shift. So, we have two kinds of right shift: An arithmetic right shift that copies the sign bit, and a logical right shift that brings in 0 bits.
For unsigned types, the C standard says a right shift is logical.
For signed types, the C standard does not say whether it is arithmetic or logical. It leaves it to the implementation to define which that implementation uses (or possibly to define something else). That might be due to the history; early machines might not have provided arithmetic right-shift instructions (and may not even have used two’s complement), although it seems hard to imagine these days.
So, when you are writing code that shifts right and you want to be sure of the result you will get, you can use an unsigned type to be sure to get a logical shift.
In two's complement representation the highest-order bit is reserved for the sign. The highest-order bit is 0 for non-negative numbers and 1 for negative numbers.
If you right shift the bits of a negative number the 1 originally in the highest order bit will be shifted to the next lower order bit, eventually ending up in the lowest order bit if you keep shifting the bits.
In the code the for loop shifts the bits of x by one bit position and assigns the shifted value to x.
for (b = 0; x != 0; x >>=1 )
First b, which is used to count the number of 1 bits in the number is set to 0.
The condition to continue the loop is x != 0. When all the 1 bits are shifted out of x the only thing left will be 0 bits. x will then equal 0.
The operation performed after each iteration is to right-shift the value in x by 1.
The if condition in the body of the loop performs bit-wise AND operation on x. The bit mask has a value of 1, which is an instance of the int value with all bits 0 except the lowest order bit. That AND operation will evaluate to 1 (or true in C) if the lowest order bit is 1, and to 0 (or false in C) if the lowest order bit is 0. The count of one bits is incremented if the if condition is true.
given the following function:
int boof(int n) {
return n + ~n + 1;
}
What does this function return? I'm having trouble understanding exactly what is being passed in to it. If I called boof(10), would it convert 10 to base 2, and then do the bitwise operations on the binary number?
This was a question I had on a quiz recently, and I think the answer is supposed to be 0, but I'm not sure how to prove it.
note: I know how each bitwise operator works, I'm more confused on how the input is processed.
Thanks!
When n is an int, n + ~n will always result in an int that has all bits set.
Strictly speaking, the behavior of adding 1 to such an int will depend on the representation of signed numbers on the platform. The C standard supports three representations for signed int:
for Two's Complement machines (the vast majority of systems in use today), the result will be 0 since an int with all bits set is -1.
on a ones' complement machine (which are pretty rare today, I believe), an int with all bits set represents negative zero; since -0 == 0, the result will be 1 (unless the implementation treats negative zero as a trap representation, in which case the behavior is undefined).
on a sign-magnitude machine (are there really any of these still in use?), an int with all bits set is the negative number of maximum magnitude (so the actual value will depend on the size of an int). In this case adding 1 to it will result in a negative number (the exact value, again, depends on the number of bits used to represent an int).
Note that the above ignores that it might be possible for some implementations to trap with various bit configurations that might be possible with n + ~n.
Bitwise operations will not change the underlying representation of the number to base 2 - all math on the CPU is done using binary operations regardless.
What this function does is add n to the two's complement negation of itself: ~n + 1 is -n, so the function computes n + (-n), which is 0 for any input.
Let me explain with 8 bit numbers as this is easier to visualize.
10 is represented in binary as 00001010.
Negative numbers are stored in two's complement (NOTing the number and adding 1)
So the (~n + 1) portion for 10 looks like so:
11110101 + 1 = 11110110
So if we take n + ~n+1:
00001010 + 11110110 = 0
Notice that adding these numbers produces a carry out of the top bit, which is simply discarded, leaving 0. (That sets the carry flag, not the overflow flag: adding a positive and a negative number together can never overflow.)
See "The CARRY and OVERFLOW flag in Binary Arithmetic" for more on the distinction.
int v;
int sign; // the sign of v ;
sign = -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
Q1: Since v is already defined as an int, why bother to cast it to int again? Is it related to portability?
Edit:
Q2:
sign = v >> (sizeof(int) * CHAR_BIT - 1);
this snippet isn't portable, since a right shift of a signed int is implementation-defined: how the vacated high bits are filled is up to the compiler. So does
-(int)((unsigned int)((int)v)
do the portable trick? Please explain why this works.
Isn't a right shift of an unsigned int always guaranteed to fill the vacated high bits with 0?
It's not strictly portable, since it is theoretically possible that int and/or unsigned int have padding bits.
In a hypothetical implementation where unsigned int has padding bits, shifting right by sizeof(int)*CHAR_BIT - 1 would produce undefined behaviour since then
sizeof(int)*CHAR_BIT - 1 >= WIDTH
But for all implementations where unsigned int has no padding bits - and as far as I know that means all existing implementations - the code
int v;
int sign; // the sign of v ;
sign = -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
must set sign to -1 if v < 0 and to 0 if v >= 0. (Note - thanks to Sander De Dycker for pointing it out - that if int has a negative zero, that would also produce sign = 0, since -0 == 0. If the implementation supports negative zeros and the sign for a negative zero should be -1, neither this shifting, nor the comparison v < 0 would produce that, a direct inspection of the object representation would be required.)
The cast to int before the cast to unsigned int before the shift is entirely superfluous and does nothing.
It is - disregarding the hypothetical padding bits problem - portable because the conversion to unsigned integer types and the representation of unsigned integer types is prescribed by the standard.
Conversion to an unsigned integer type is reduction modulo 2^WIDTH, where WIDTH is the number of value bits in the type, so that the result lies in the range 0 to 2^WIDTH - 1 inclusive.
Since without padding bits in unsigned int the size of the range of int cannot be larger than that of unsigned int, and the standard mandates (6.2.6.2) that signed integers are represented in one of
sign and magnitude
ones' complement
two's complement
the smallest possible representable int value is -2^(WIDTH-1). So a negative int of value -k is converted to 2^WIDTH - k >= 2^(WIDTH-1) and thus has the most significant bit set.
A non-negative int value, on the other hand cannot be larger than 2^(WIDTH-1) - 1 and hence its value will be preserved by the conversion and the most significant bit will not be set.
So when the result of the conversion is shifted by WIDTH - 1 bits to the right (again, we assume no padding bits in unsigned int, hence WIDTH == sizeof(int)*CHAR_BIT), it will produce a 0 if the int value was non-negative, and a 1 if it was negative.
It should be quite portable, because when you convert an int to unsigned int (via a cast), you receive a value whose bits are the two's complement representation of the original value, with the most significant bit acting as the sign bit.
UPDATE: A more detailed explanation...
I'm assuming there are no padding bits in int and unsigned int, and that all bits in the two types are used to represent integer values. That's a reasonable assumption for modern hardware. Padding bits are a thing of the past; the current and recent C standards still allow them only for backward compatibility (i.e. so that conforming implementations remain possible on old machines).
With that assumption, if int and unsigned int have N bits in them (N = CHAR_BIT * sizeof(int)), then per the C standard we have 3 options to represent int, which is a signed type:
sign-and-magnitude representation, allowing values from -(2^(N-1) - 1) to 2^(N-1) - 1
ones' complement representation, also allowing values from -(2^(N-1) - 1) to 2^(N-1) - 1
two's complement representation, allowing values from -2^(N-1) to 2^(N-1) - 1 or, possibly, from -(2^(N-1) - 1) to 2^(N-1) - 1
The sign-and-magnitude and one's complement representations are also a thing of the past, but let's not throw them out just yet.
When we convert int to unsigned int, the rule is that a non-negative value v (>= 0) doesn't change, while a negative value v (< 0) changes to the positive value 2^N + v, hence (unsigned int)-1 == UINT_MAX.
Therefore, (unsigned int)v for a non-negative v will always be in the range from 0 to 2^(N-1) - 1, and the most significant bit of (unsigned int)v will be 0.
Now, for a negative v in the range from -2^(N-1) to -1 (this range is a superset of the negative ranges of the three possible representations of int), (unsigned int)v will be in the range from 2^N + (-2^(N-1)) to 2^N + (-1), which simplifies to the range from 2^(N-1) to 2^N - 1. Clearly, the most significant bit of this value will always be 1.
If you look carefully at all this math, you will see that the value of (unsigned)v looks exactly the same in binary as v in 2's complement representation:
...
v = -2: (unsigned)v = 2^N - 2 = binary 111...110
v = -1: (unsigned)v = 2^N - 1 = binary 111...111
v = 0: (unsigned)v = 0 = binary 000...000
v = 1: (unsigned)v = 1 = binary 000...001
...
So, there, the most significant bit of the value (unsigned)v is going to be 0 for v>=0 and 1 for v<0.
Now, let's get back to the sign-and-magnitude and one's complement representations. These two representations may allow two zeroes, a +0 and a -0. But arithmetic computations do not visibly distinguish between +0 and -0, it's still a 0, whether you add it, subtract it, multiply it or compare it. You, as an observer, normally wouldn't see +0 or -0 or any difference from having one or the other.
Trying to observe and distinguish +0 and -0 is generally pointless and you should not normally expect or rely on the presence of two zeroes if you want to make your code portable.
(unsigned int)v won't tell you the difference between v=+0 and v=-0, in both cases (unsigned int)v will be equivalent to 0u.
So, with this method you won't be able to tell whether internally v is a -0 or a +0, you won't extract v's sign bit this way for v=-0.
But again, you gain nothing of practical value from differentiating between the two zeroes and you don't want this differentiation in portable code.
So, with this I dare to declare the method for sign extraction presented in the question quite/very/pretty-much/etc portable in practice.
This method is overkill, though. And (int)v in the original code is unnecessary, as v is already an int.
This should be more than enough and easy to comprehend:
int sign = -(v < 0);
Nope, it's just excessive casting. There is no need to cast it to an int; it doesn't hurt, however.
Edit: It's worth noting that it may be done like that so the type of v can be changed to something else later, or v may once have been another data type and the cast was never removed after it was changed to an int.
It isn't. The Standard does not fully define the representation of integers, so it's impossible to guarantee portably exactly what that expression will produce. The only fully portable way to get the sign of an integer is to do a comparison.
I am comparing two numbers x and y: if x > y I return 1, else I return 0. I am trying to do this using only bitwise operators such as >> << ~ ! & | ^ +.
So far I got this:
int greaterThan(int x,int y) {
return ((~(x-y))>>31);
}
So if x-y is negative, the most significant bit would be a 1. So I negate it to get a 0 and shift it so that it returns a 0. If x-y is positive, the most significant bit would be 0. Negate it to get 1 and shift it to return 1. It doesn't seem to work. Am I doing this wrong or missing something?
Your method does not work for several reasons:
Assuming x and y are signed values, the subtraction could overflow. Not only does this result in undefined behavior according to the C standard, but on typical implementations where overflow wraps, it will give the wrong results. For instance, if x is INT_MIN and y is 1, x-y will yield INT_MAX, which does not have its high bit set.
If x and y are unsigned values, then x=UINT_MAX and y=0 is an example where you get the wrong answer. You'd have to impose a condition on the values of x and y (for instance, that neither has its high bit set) in order for this test to work.
What it comes down to is that in order to perform comparisons by testing the "high bit", you need one more bit than the number of bits in the values being compared. In assembly language on reasonably-CISC-ish architectures, the carry flag provides this extra bit, and special conditional jump instructions (as well as carry/borrow instructions) are able to read the value of the carry flag. C provides no way to access such a carry flag, however. One alternate approach might be to use a larger integer type (like long long) so that you can get an extra bit in your result. Good compilers will translate this to a read of the carry flag, rather than actually performing larger arithmetic operations.
C Standard (1999), chapter 6.5.7 Bitwise shift operators, paragraph 5:
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type
or if E1 has a signed type and a nonnegative value, the value of the result is the integral
part of the quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the
resulting value is implementation-defined.
I.e., the result for a bit-shift-right on negative numbers is implementation-defined. One possibility is that the "sign bit" gets copied into the bits to the right, which is what GCC seems to do (as it results in all bits set, -1).
@Dan, can you please clarify your question in the first post, i.e. what exactly you can use and what you can't? (In your comments you mentioned that you can't use if/else etc.)
Anyway, if you want to get rid of the unexpected -1, you should consider casting your value to unsigned int before shifting. That will not make your solution entirely correct, though. Try thinking about the numbers' sign bits (e.g. if the sign bit of x is 0 and the sign bit of y is 1, then you can be sure that x > y).
This is a question about the bit representation of signed integers. For example, when you want to represent -1, it is equivalent to the two's complement of +1, so (on a 32-bit int) -1 is represented as 0xFFFFFFFF. Now when I right-shift my number by 31 and print the result, it comes back as -1.
signed int a = -1;
printf("The number is %d ", a >> 31); //this prints as -1
So can anyone please explain to me how the bits are represented for negative numbers?
Thanks.
When the top bit is zero, the number is positive. When it's 1, the number is negative.
Negative numbers shifted right keep shifting a "1" in as the topmost bit to keep the number negative. That's why you're getting that answer.
For more about two's complement, see this Stackoverflow question.
@Stobor points out that some C implementations could shift 0 into the high bit instead of 1. [Verified in Wikipedia.] In Java it's dependably an arithmetic shift.
But the output given by the questioner shows that his compiler is doing an arithmetic shift.
The C standard leaves it implementation-defined whether the right shift of a negative (necessarily signed) integer shifts zeroes (logical shift right) or copies of the sign bit (arithmetic shift right) into the most significant bit. It is up to the implementation to choose and document which.
Consequently, portable code ensures that it does not perform right shifts on negative numbers. Either it converts the value to the corresponding unsigned value before shifting (which is guaranteed to use a logical shift right, putting zeroes into the vacated bits), or it ensures that the value is positive, or it tolerates the variation in the output.
This is an arithmetic shift operation, which preserves the sign bit and shifts the remaining value bits of the signed number.
cheers
Basically there are two types of right shift: an unsigned (logical) right shift and a signed (arithmetic) right shift. An unsigned right shift shifts the bits to the right, causing the least significant bit to be lost and the most significant bit to be replaced with a 0. With a signed right shift, the bits are shifted to the right, causing the least significant bit to be lost and the most significant bit to be preserved. A signed right shift divides the number by a power of two (corresponding to the number of places shifted), whereas an unsigned shift is a purely logical shifting operation.
The ">>" operator performs an unsigned right shift when the data type on which it operates is unsigned, and it performs a signed right shift when the data type on which it operates is signed. So, what you need to do is cast the object to an unsigned integer type before performing the bit manipulation to get the desired result.
Have a look at two's complement description. It should help.
EDIT: When the below was written, the code in the question was written as:
unsigned int a = -1;
printf("The number is %d ", a >> 31); //this prints as -1
If unsigned int is at least 32 bits wide, then your compiler isn't really allowed to produce -1 as the output of that (with the small caveat that you should be casting the unsigned value to int before you pass it to printf).
Because a is an unsigned int, assigning -1 to it must give it the value of UINT_MAX (as the smallest non-negative value congruent to -1 modulo UINT_MAX+1). As long as unsigned int has at least 32 bits on your platform, the result of shifting that unsigned quantity right by 31 will be UINT_MAX divided by 2^31, which has to fit within int. (If unsigned int is 31 bits or narrower, the shift count is out of range and the behavior is undefined, so the implementation can produce whatever it likes.)