Why the input argument of population count must be unsigned - C

int pcount_r(unsigned x) {
    if (x == 0)
        return 0;
    else
        return (x & 1) + pcount_r(x >> 1);
}
just wondering why the input argument is unsigned.
best regards!

It is implementation-defined what E1 >> E2 produces when E1 has a signed type and a negative value (C99 6.5.7:5). When E1 has an unsigned type, on the other hand, E1 >> E2 is unambiguously defined by the standard. Accepting and operating on an unsigned integer is a way to make the function as portable as possible.
Incidentally, it is usual to use unsigned types for bit-twiddling.
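For illustration, a quick usage sketch (repeating the question's function so the snippet stands alone; the expected outputs assume a 32-bit unsigned):

#include <stdio.h>

int pcount_r(unsigned x) {
    if (x == 0)
        return 0;
    else
        return (x & 1) + pcount_r(x >> 1);
}

int main(void) {
    printf("%d\n", pcount_r(0u));          /* 0 */
    printf("%d\n", pcount_r(0xFFu));       /* 8 */
    printf("%d\n", pcount_r(0xFFFFFFFFu)); /* 32 - no sign bit to worry about */
    return 0;
}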

If the number is signed, then right-shifting will (on most implementations) copy the sign bit (the most significant bit), effectively giving negative numbers an infinite number of one bits.
int pcount_r(int x) {
    if (x == 0)
        return 0;
    else if (x < 0)
        /* count the zero bits via ~x instead; assumes two's complement
           and 8-bit chars */
        return sizeof(int)*8 - pcount_r(~x);
    else
        return (x & 1) + pcount_r(x >> 1);
}

The problem is that C (unlike Java, which has both >> and >>>) does not guarantee arithmetic (signed) shifts. CPUs have two different kinds of right-shift instruction, arithmetic and logical. For example, on x86, the SAR instruction does an arithmetic shift right, and SHR does a logical shift right. Since C has only one right-shift operator (>>), it cannot expose both, so the behavior for negative operands is left to the implementation. If the compiler implemented a signed version of the code above using the "wrong" kind of shift and you supplied a negative number to that procedure, you would get a wrong answer.
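To see the difference concretely, a minimal sketch (assumes a 32-bit, two's-complement implementation whose compiler emits an arithmetic shift for signed operands, as mainstream compilers do):

#include <stdio.h>

int main(void) {
    int s = -8;                 /* bit pattern 0xFFFFFFF8 */
    unsigned u = 0xFFFFFFF8u;   /* same bits, unsigned type */
    printf("%d\n", s >> 1);     /* typically -4: compiled as SAR */
    printf("%u\n", u >> 1);     /* 2147483644: always SHR */
    return 0;
}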


arithmetic right shift shifts in 0s when MSB is 1

As an exercise I have to write the following function:
multiply x by 2, saturating to Tmin / Tmax if overflow, using only bit-wise and bit-shift operations.
Now this is my code:
// XOR MSB and 2nd MSB. If different, we have an overflow and SHOULD get 0xFFFFFFFF; otherwise we get 0.
int overflowmask = ((x & 0x80000000) ^ ((x & 0x40000000)<<1)) >>31;
// ^ this arithmetic bit shift seems to be wrong
// this gets you Tmin if x < 0 or Tmax if x >= 0
int overflowreplace = ((x>>31)^0x7FFFFFFF);
// if overflow, return x*2, otherwise overflowreplace
return ((x<<1) & ~overflowmask)|(overflowreplace & overflowmask);
Now when overflowmask should be 0xFFFFFFFF, it is 1 instead, which means that the arithmetic bit shift >>31 shifted in 0s instead of 1s (the MSB got XORed to 1, then shifted to the bottom).
x is signed and the MSB is 1, so according to C99 an arithmetic right shift should fill in 1s. What am I missing?
EDIT: I have since noticed that this code isn't correct. To detect an overflow it suffices for the 2nd MSB to be 1.
However, I still wonder why the bit shift filled in 0s.
EDIT:
Example: x = 0xA0000000
x & 0x80000000 = 0x80000000
x & 0x40000000 = 0
XOR => 0x80000000
>>31 => 0x00000001
EDIT:
Solution:
int msb = x & 0x80000000;
int msb2 = (x & 0x40000000) <<1;
int overflowmask = (msb2 | (msb^msb2)) >>31;
int overflowreplace = (x >>31) ^ 0x7FFFFFFF;
return ((x<<1) & ~overflowmask) | (overflowreplace & overflowmask);
Even on two's-complement machines, the behaviour of right-shift (>>) on negative operands is implementation-defined.
A safer approach is to work with unsigned types and explicitly OR-in the MSB.
While you're at it, you probably also want to use fixed-width types (e.g. uint32_t) rather than failing on platforms that don't meet your expectations.
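Putting both suggestions together, a minimal sketch of the same saturating doubling (uint32_t throughout; the function name is mine, and the 0u - y idiom used to build the all-ones mask goes slightly beyond the exercise's bitwise-only rules):

#include <stdint.h>

uint32_t double_saturate(uint32_t x) {
    /* all-ones when the top two bits differ, i.e. doubling overflows */
    uint32_t overflowmask = 0u - (((x >> 31) ^ (x >> 30)) & 1u);
    /* 0x80000000 (Tmin) when the sign bit is set, 0x7FFFFFFF (Tmax) otherwise */
    uint32_t overflowreplace = (0u - (x >> 31)) ^ 0x7FFFFFFFu;
    return ((x << 1) & ~overflowmask) | (overflowreplace & overflowmask);
}

Because every operand is unsigned, each >> here is guaranteed to shift in zeros, so the masks are built explicitly rather than relying on sign extension.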
0x80000000 is treated as an unsigned number, which causes everything to be converted to unsigned. You can do this:
// XOR MSB and 2nd MSB. If different, we have an overflow and SHOULD get 0xFFFFFFFF; otherwise we get 0.
int overflowmask = ((x & (0x40000000 << 1)) ^ ((x & 0x40000000)<<1)) >>31;
// this gets you Tmin if x < 0 or Tmax if x >= 0
int overflowreplace = ((x>>31)^0x7FFFFFFF);
// if overflow, return x*2, otherwise overflowreplace
return ((x<<1) & ~overflowmask)|(overflowreplace & overflowmask);
Or write the constants as negative decimals, or store all the constants in const int variables so they are guaranteed to be signed.
Never use bit-wise operators on signed types. In the case of a right shift on signed integers, it is up to the compiler whether you get an arithmetic or a logical shift.
That's only one of your problems though. When you use a hex integer constant 0x80000000, it is actually of type unsigned int. This accidentally turns your whole expression (x & 0x80000000) ^ ... into an unsigned type because of "the usual arithmetic conversions". The 0x40000000 expression, by contrast, is a signed int and works as (the specific compiler) expected.
Solution:
All variables involved must be of type uint32_t.
All hex constants involved must be u suffixed.
To get an arithmetic shift portably, you would have to do
(x >> n) | (0xFFFFFFFFu << (32-n)) or some similar hack.
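A hypothetical wrapper around that hack, for concreteness (assumes 0 < n < 32):

#include <stdint.h>

/* portable arithmetic right shift on a 32-bit value, 0 < n < 32 */
uint32_t asr32(uint32_t x, unsigned n) {
    uint32_t logical = x >> n;       /* always shifts in zeros */
    if (x & 0x80000000u)             /* "sign" bit set: OR the ones back in */
        return logical | (0xFFFFFFFFu << (32 - n));
    return logical;
}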

Cast Integer to Float using Bit Manipulation breaks on some integers in C

Working on a class assignment, I'm trying to cast an integer to a float using only bit manipulations (limited to integer/unsigned operations, including || and &&, plus if and while). My code is working for most values, but some values are not generating the results I'm looking for.
For example, if x is 0x807fffff, I get 0xceff0001, but the correct result should be 0xceff0000. I think I'm missing something with my mantissa and rounding, but can't quite pin it down. I've looked at some other threads on SO as well: converting-int-to-float and how-to-manually
unsigned dl22(int x) {
    int tmin = 0x1 << 31;
    int tmax = ~tmin;
    unsigned signBit = 0;
    unsigned exponent;
    unsigned mantissa;
    int bias = 127;
    if (x == 0) {
        return 0;
    }
    if (x == tmin) {
        return 0xcf << 24;
    }
    if (x < 0) {
        signBit = x & tmin;
        x = (~x + 1);
    }
    exponent = bias + 31;
    while ((x & tmin) == 0) {
        exponent--;
        x <<= 1;
    }
    exponent <<= 23;
    int mantissaMask = ~(tmin >> 8);
    mantissa = (x >> 8) & mantissaMask;
    return (signBit | exponent | mantissa);
}
EDIT/UPDATE
Found a viable solution - see below
Your code produces the expected output for me on the example you presented. As discussed in comments, however, from C's perspective it does exhibit undefined behavior -- not just in the computation of tmin, but also, for the same reason, in the loop wherein you compute the exponent. To whatever extent this code produces results that vary from environment to environment, that will follow either from the undefined behavior or from your assumption about the size of [unsigned] int being incorrect for the C implementation in use.
Nevertheless, if we assume (unsafely)
that shifts of ints operate as if the left operand were reinterpreted as an unsigned int with the same bit pattern, operated upon, and the resulting bit pattern reinterpreted as an int, and
that int and unsigned int are at least 32 bits wide,
then your code seems correct, modulo rounding.
In the event that the absolute value of the input int has more than 24 significant binary digits (i.e. it is at least 2^24), however, some precision will be lost in the conversion. In that case the correct result will depend on the FP rounding mode you intend to implement. An incorrectly rounded result will be off by 1 unit in the last place; how many results that affects depends on the rounding mode.
Simply truncating / shifting off the extra bits as you do yields round toward zero mode. That's one of the standard rounding modes, but not the default. The default rounding mode is to round to the nearest representable number, with ties being resolved in favor of the result having least-significant bit 0 (round to even); there are also three other standard modes. To implement any mode other than round-toward-zero, you'll need to capture the 8 least-significant bits of the significand after scaling and before shifting them off. These, together with other details depending on the chosen rounding mode, will determine how to apply the correct rounding.
About half of the 32-bit two's complement numbers will be rounded differently when converted in round-to-zero mode than when converted in any one of the other modes; which numbers exhibit a discrepancy depends on which rounding mode you consider.
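For a concrete instance of such a discrepancy (my own example, assuming 32-bit int and float): 16777219, i.e. 2^24 + 3, lies exactly halfway between the two nearest representable floats, 16777218 and 16777220.

#include <stdio.h>

int main(void) {
    int n = 16777219;
    float f = (float)n;   /* default: round to nearest, ties to even */
    printf("%.1f\n", f);  /* 16777220.0; truncation (round toward zero)
                             would have produced 16777218.0 */
    return 0;
}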
I didn't originally mention that I am trying to imitate a U2F union statement:
float u2f(unsigned u) {
    union {
        unsigned u;
        float f;
    } a;
    a.u = u;
    return a.f;
}
Thanks to guidance provided in the post ieee-754-bit-manipulation-rounding-error I was able to manage the rounding issues by putting the following after my while statement. This clarified the rounding that was occurring.
lsb = (x >> 8) & 1;           /* least significant bit of the mantissa-to-be */
roundBit = (x >> 7) & 1;      /* first discarded bit (the "round" bit) */
stickyBitFlag = !!(x & 0x7F); /* nonzero if any lower discarded bit is set */
exponent <<= 23;
int mantissaMask = ~(tmin >> 8);
mantissa = (x >> 8);
mantissa &= mantissaMask;
/* round to nearest, ties to even: bump the result when the round bit is
   set and either the sticky bit or the mantissa's lsb is also set */
roundBit = (roundBit & stickyBitFlag) | (roundBit & lsb);
return (signBit | exponent | mantissa) + roundBit;
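A quick check against the failing input from the question (a hypothetical harness; assumes 32-bit two's complement and the dl22 above with these rounding lines spliced in):

#include <stdio.h>

unsigned dl22(int x);   /* the function above, with the rounding fix applied */

int main(void) {
    printf("%x\n", dl22(0x807fffff));  /* expect: ceff0000 */
    return 0;
}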

Bit-shift not applying to a variable declaration-assignment one-liner

I'm seeing strange behavior when I try to apply a right bit-shift within a variable declaration/assignment:
unsigned int i = ~0 >> 1;
The result I'm getting is 0xffffffff, as if the >> 1 simply wasn't there. It seems to be something about the ~0, because if I instead do:
unsigned int i = 0xffffffff >> 1;
I get 0x7fffffff as expected. I thought I might be tripping over an operator precedence issue, so tried:
unsigned int i = (~0) >> 1;
but it made no difference. I could just perform the shift in a separate statement, like
unsigned int i = ~0;
i >>= 1;
but I'd like to know what's going on.
Update: Thanks merlin2011 for pointing me towards an answer. Turns out it was performing an arithmetic shift because it was interpreting ~0 as a signed (negative) value. The simplest fix seems to be:
unsigned int i = ~0u >> 1;
Now I'm wondering why 0xffffffff wasn't also interpreted as a signed value.
This is how the C compiler treats signed values: an unsuffixed literal like 0 has type int (on a 32-bit machine, a 32-bit signed int), so ~0 is a signed int holding -1, and >> on it is an arithmetic shift. A hexadecimal constant such as 0xffffffff, on the other hand, takes the first of the types int, unsigned int, long, ... that can represent its value; 0xffffffff does not fit in a 32-bit int, so it gets type unsigned int and shifts logically.
You may want to change it to:
unsigned int i = ~(unsigned int)0 >> 1;
The reason is that, for a signed value, the compiler treats the operator >> as an arithmetic (signed) shift.
Or, more shortly (pointed out by M.M),
unsigned int i = ~0u >> 1;
Test:
printf("%x", i);
Result:
7fffffff
In unsigned int i = ~0;, ~0 is seen as a signed integer (the compiler may warn about that).
Try this instead:
unsigned int i = (unsigned int)~0 >> 1;
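Side by side, the variants behave like this (a minimal sketch; assumes 32-bit int and an implementation whose signed right shift is arithmetic, as is typical):

#include <stdio.h>

int main(void) {
    unsigned int a = ~0 >> 1;               /* shifts -1: implementation-defined,
                                               typically arithmetic -> 0xffffffff */
    unsigned int b = ~0u >> 1;              /* logical shift -> 0x7fffffff */
    unsigned int c = (unsigned int)~0 >> 1; /* logical shift -> 0x7fffffff */
    printf("%x %x %x\n", a, b, c);
    return 0;
}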

How to sign extend a 9-bit value when converting from an 8-bit value?

I'm implementing a relative branching function in my simple VM.
Basically, I'm given an 8-bit relative value. I then shift this left by 1 bit to make it a 9-bit value. So, for instance, if you were to say "branch +127" this would really mean 127 instructions, and thus would add 254 to the IP.
My current code looks like this:
uint8_t argument = 0xFF; //-1 or whatever
int16_t difference = argument << 1;
*ip += difference; //ip is a uint16_t
I don't believe difference will ever be detected as less than 0 with this, however. I'm rusty on how signed-to-unsigned conversion works. Beyond that, I'm not sure difference would correctly be subtracted from IP in the case argument is, say, -1 or -2.
Basically, I'm wanting something that would satisfy these "tests"
//case 1
argument = -5
difference -> -10
ip = 20 -> 10 //ip starts at 20, but becomes 10 after applying difference
//case 2
argument = 127 (must fit in a byte)
difference -> 254
ip = 20 -> 274
Hopefully that makes it a bit more clear.
Anyway, how would I do this cheaply? I saw one "solution" to a similar problem, but it involved division. I'm working with slow embedded processors (assumed to be without efficient ways to multiply and divide), so that's a pretty big thing I'd like to avoid.
To clarify: you worry that left-shifting a negative 8-bit number will make it appear like a positive nine-bit number? Just pad the top bits with the sign bit of the initial number before the left shift:
uint8_t diff = 0xFF;
int16_t diff16 = (diff + (diff & 0x80) * 0x01FE) << 1;
Now your diff16 holds the signed value 2*diff.
As was pointed out by Richard J Ross III, you can avoid the multiplication (if that's expensive on your platform) with a conditional branch:
int16_t diff16 = (diff + ((diff & 0x80) ? 0xFF00 : 0)) << 1;
If you are worried about things staying in range and such ("undefined behavior"), you can do
int16_t diff16 = diff;
diff16 = (diff16 | ((diff16 & 0x80) ? 0x7F00 : 0)) << 1;
At no point does this produce numbers that are going out of range.
The cleanest solution, though, seems to be "cast and shift":
diff16 = (signed char)diff; // recognizes and preserves the sign of diff
diff16 = (short int)((unsigned short)diff16 << 1); // left shift, preserving sign
This produces the expected result, because the compiler automatically takes care of the sign bit (so no need for the mask) in the first line; and in the second line, it does a left shift on an unsigned int (for which overflow is well defined per the standard); the final cast back to short int ensures that the number is correctly interpreted as negative. I believe that in this form the construct is never "undefined".
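Here is a small harness (my own) running the cast-and-shift version against the two test cases from the question; note that converting an out-of-range uint8_t to a signed type is implementation-defined, though on two's-complement systems it is universally a bit-wise copy:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t ip;
    uint8_t argument;
    int16_t difference;

    ip = 20;                                            /* case 1 */
    argument = (uint8_t)-5;                             /* 0xFB */
    difference = (int8_t)argument;                      /* sign-extends to -5 */
    difference = (int16_t)((uint16_t)difference << 1);  /* -10 */
    ip += difference;
    printf("%u\n", ip);                                 /* 10 */

    ip = 20;                                            /* case 2 */
    argument = 127;
    difference = (int8_t)argument;                      /* 127 */
    difference = (int16_t)((uint16_t)difference << 1);  /* 254 */
    ip += difference;
    printf("%u\n", ip);                                 /* 274 */
    return 0;
}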
All of my quotes come from the C standard, section 6.3.1.3. Unsigned to signed is well defined when the value is within range of the signed type:
1 When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged.
Signed to unsigned is well defined:
2 Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
Unsigned to signed, when the value lies out of range isn't too well defined:
3 Otherwise, the new type is signed and the value cannot be
represented in it; either the result is implementation-defined or an
implementation-defined signal is raised.
Unfortunately, your question lies in the realm of point 3. C doesn't guarantee any implicit mechanism to convert out-of-range values, so you'll need to explicitly provide one. The first step is to decide which representation you intend to use: ones' complement, two's complement, or sign and magnitude.
The representation you use will affect the translation algorithm you use. In the example below, I'll use two's complement: if the sign bit is 1 and the value bits are all 0, this corresponds to your lowest value. Your lowest value is another choice you must make: in the case of two's complement, it'd make sense to use either of INT16_MIN (-32768) or INT8_MIN (-128). In the case of the other two, it'd make sense to use INT16_MIN - 1 or INT8_MIN - 1 due to the presence of negative zeros, which should probably be translated to be indistinguishable from regular zeros. In this example, I'll use INT8_MIN, since it makes sense that (uint8_t) -1 should translate to -1 as an int16_t.
Separate the sign bit from the value bits. The value should be the absolute value, except in the case of a two's complement minimum value when sign will be 1 and the value will be 0. Of course, the sign bit can be wherever you like it to be, though it's conventional for it to rest at the far left hand side. Hence, shifting right 7 places obtains the conventional "sign" bit:
uint8_t sign = input >> 7;
uint8_t value = input & (UINT8_MAX >> 1);
int16_t result;
If the sign bit is 1, we'll call this a negative number and add to INT8_MIN to construct the sign so we don't end up in the same conundrum we started with, or worse: undefined behaviour (which is the fate of one of the other answers).
if (sign == 1) {
    result = INT8_MIN + value;
}
else {
    result = value;
}
This can be shortened to:
int16_t result = (input >> 7) ? INT8_MIN + (input & (UINT8_MAX >> 1)) : input;
... or, better yet:
int16_t result = input <= INT8_MAX ? input
: INT8_MIN + (int8_t)(input % (uint8_t) INT8_MIN);
The sign test now involves checking if it's in the positive range. If it is, the value remains unchanged. Otherwise, we use addition and modulo to produce the correct negative value. This is fairly consistent with the C standard's language above. It works well for two's complement, because int16_t and int8_t are guaranteed to use a two's complement representation internally. However, types like int aren't required to use a two's complement representation internally. When converting unsigned int to int for example, there needs to be another check, so that we're treating values less than or equal to INT_MAX as positive, and values greater than or equal to (unsigned int) INT_MIN as negative. Any other values need to be handled as errors; In this case I treat them as zeros.
/* Generate some random input */
srand(time(NULL));
unsigned int input = rand();
for (unsigned int x = UINT_MAX / ((unsigned int) RAND_MAX + 1); x > 1; x--) {
    input *= (unsigned int) RAND_MAX + 1;
    input += rand();
}
int result = /* Handle positives: */ input <= INT_MAX ? input
           : /* Handle negatives: */ input >= (unsigned int) INT_MIN ? INT_MIN + (int)(input % (unsigned int) INT_MIN)
           : /* Handle errors:   */ 0;
If the offset is in the 2's complement representation, then
convert this
uint8_t argument = 0xFF; //-1
int16_t difference = argument << 1;
*ip += difference;
into this:
uint8_t argument = 0xFF; //-1
int8_t signed_argument;
signed_argument = argument; // this relies on implementation-defined
// conversion of unsigned to signed, usually it's
// just a bit-wise copy on 2's complement systems
// OR
// memcpy(&signed_argument, &argument, sizeof argument);
*ip += signed_argument + signed_argument;
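A self-contained version of the memcpy route, for reference (harness names are mine; memcpy sidesteps the implementation-defined conversion entirely):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    uint16_t ip = 20;
    uint8_t argument = 0xFF;   /* -1 in two's complement */
    int8_t signed_argument;
    memcpy(&signed_argument, &argument, sizeof argument);
    ip += signed_argument + signed_argument;   /* doubled offset: -2 */
    printf("%u\n", ip);                        /* 18 */
    return 0;
}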

How can I check if a signed integer is positive?

Using bitwise operators and I suppose addition and subtraction, how can I check if a signed integer is positive (specifically, not negative and not zero)? I'm sure the answer to this is very simple, but it's just not coming to me.
If you really want an "is strictly positive" predicate for int n without using conditionals (assuming 2's complement):
-n will have the sign (top) bit set if n was strictly positive, and clear in all other cases except n == INT_MIN;
~n will have the sign bit set if n was strictly positive, or 0, and clear in all other cases including n == INT_MIN;
...so -n & ~n will have the sign bit set if n was strictly positive, and clear in all other cases.
Apply an unsigned shift to turn this into a 0 / 1 answer:
int strictly_positive = (unsigned)(-n & ~n) >> ((sizeof(int) * CHAR_BIT) - 1);
EDIT: as caf points out in the comments, -n causes an overflow when n == INT_MIN (still assuming 2's complement). The C standard allows the program to fail in this case (for example, you can enable traps for signed overflow using GCC with the -ftrapv option). Casting n to unsigned fixes the problem (unsigned arithmetic does not cause overflows). So an improvement would be:
unsigned u = (unsigned)n;
int strictly_positive = (-u & ~u) >> ((sizeof(int) * CHAR_BIT) - 1);
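A quick check of the predicate at the edge cases (harness is mine):

#include <limits.h>
#include <stdio.h>

static int strictly_positive(int n) {
    unsigned u = (unsigned)n;
    return (-u & ~u) >> (sizeof(int) * CHAR_BIT - 1);
}

int main(void) {
    printf("%d %d %d %d %d\n",
           strictly_positive(5),         /* 1 */
           strictly_positive(0),         /* 0 */
           strictly_positive(-5),        /* 0 */
           strictly_positive(INT_MAX),   /* 1 */
           strictly_positive(INT_MIN));  /* 0 */
    return 0;
}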
Check the most significant bit: 0 means non-negative, 1 means negative. (By itself this doesn't exclude zero, which the question also requires.)
If you can't use the obvious comparison operators, then you have to work harder:
int i = anyValue;
if (i && !(i & (1U << (sizeof(int) * CHAR_BIT - 1))))
/* I'm almost positive it is positive */
The first term checks that the value is not zero; the second checks that the value does not have the leading bit set. That should work for 2's-complement, 1's-complement or sign-magnitude integers.
Consider how the signedness is represented. Often it's done with two's complement or with a simple sign bit; I think both of these could be checked with a simple bitwise AND.
Check that it is not 0 and that the most significant bit is 0, something like:
int positive(int x) {
    return x && !(x & 0x80000000);
}
