Get most significant nibble, regardless of int bit length and endianness - c

My head is starting to hurt... I've been looking at this way too long.
I'm trying to mask the most significant nibble of an int, regardless of the int bit length and the endianness of the machine. Let's say x = 8425 = 0010 0000 1110 1001 = 0x20E9. I know that to get the least significant nibble, 9, I just need to do something like x & 0xF to get back 9. But how about the most significant nibble, 2?
I apologize if my logic from here on out falls apart, my brain is completely fried, but here I go:
My book tells me that the bit length w of the data type int can be computed with w = sizeof(int)<<3. If I knew that the machine were big-endian, I could do 0xF << w-4 to have 1111 for the most significant nibble and 0000 for the rest, i.e. 1111 0000 0000 0000. If I knew that the machine were little-endian, I could do 0xF >> w-8 to have 0000 0000 0000 1111. Fortunately, this works even though we are told to assume that right shifts are done arithmetically just because 0xF always gives me the first bit of 0000. But this is not a proper solution. We are not allowed to test for endianness and then proceed from there, so what do I do?

Bit shifting operators operate at a level of abstraction above endianness. "Left" shifts always shift towards the most significant bit, and "right" shifts always shift towards the least significant bit.

You should be able to right shift by the (number of bits) - 4 regardless of endianness.
Since you already know how to compute the number of bits, it should suffice to just subtract 4 and shift by that number, and then (for safety), mask with 0xF.
See this question for discussion about endianness.

Q: But how about the most significant nibble, 2?
A: (x >> (sizeof(int)*8 - 4)) & 0xF

Related

Right bit-shift giving wrong result, can someone explain

I'm right-shifting -109 by 5 bits, and I expect -3, because
-109 = -1101101 (binary)
shift right by 5 bits
-1101101 >>5 = -11 (binary) = -3
But, I am getting -4 instead.
Could someone explain what's wrong?
Code I used:
int16_t a = -109;
int16_t b = a >> 5;
printf("%d %d\n", a,b);
I used GCC on linux, and clang on osx, same result.
The thing is you are not considering negative number representation correctly. With right shifting, the type of shift (arithmetic or logical) depends on the type of the value being shifted. If you cast your value to an unsigned value of the same width, you get a logical shift:
int16_t b = ((uint16_t)a) >> 5;
You are using -109 (16 bits) in your example. 109 in bits is:
00000000 01101101
If you take 109's 2's complement you get:
11111111 10010011
Then, you are right shifting by 5 the number 11111111 10010011:
int16_t a = -109;
int16_t b = a >> 5;             // arithmetic shift
int16_t c = ((uint16_t)a) >> 5; // logical shift
printf("%d %d %d\n", a, b, c);
Will yield:
-109 -4 2044
The result of right shifting a negative value is implementation defined behavior, from the C99 draft standard section 6.5.7 Bitwise shift operators paragraph 5 which says (emphasis mine going forward):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type
or if E1 has a signed type and a nonnegative value, the value of the result is the integral
part of the quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the
resulting value is implementation-defined.
If we look at gcc C Implementation-defined behavior documents under the Integers section it says:
The results of some bitwise operations on signed integers (C90 6.3, C99 and C11 6.5).
Bitwise operators act on the representation of the value including both the sign and value bits, where the sign bit is considered immediately above the highest-value value bit. Signed ‘>>’ acts on negative numbers by sign extension.
That makes it pretty clear what's happening: in two's complement the leftmost bit is the sign bit, and right-shifting a negative number sign-extends.
So, 1000 ... 0000 (32 bits) is the most negative number you can represent with 32 bits.
Because of this, when you shift a negative number right, sign extension happens: the leftmost (sign) bit is copied into the vacated positions. For a number like -109 this is what happens:
Before shifting you have (16bit):
1111 1111 1001 0011
Then you shift 5 bits right (after the pipe are the discarded bits):
1XXX X111 1111 1100 | 1 0011
The X's are the new spaces that appear in your integer bit representation, that due to the sign extension, are filled with 1's, which give you:
1111 1111 1111 1100 | 1 0011
So by shifting: -109 >> 5, you get -4 (1111 .... 1100) and not -3.
Confirming results with the 2's complement:
+3 = 0... 0000 0011
-3 = ~(0... 0000 0011) + 1 = 1... 1111 1100 + 1 = 1... 1111 1101
+4 = 0... 0000 0100
-4 = ~(0... 0000 0100) + 1 = 1... 1111 1011 + 1 = 1... 1111 1100
Note: Remember that the 2's complement is just the 1's complement (negate all the bits of the positive number) plus one; that's the "+ 1" in the lines above.
Pablo's answer is essentially correct, but there are two small bits (no pun intended!) that may help you see what's going on.
C (like pretty much every other language) uses what's called two's complement, which is simply a different way of representing negative numbers (it's used to avoid the problems that come up with other ways of handling negative numbers in binary with a fixed number of digits). In two's complement a positive number looks just like any other binary number, except that its leftmost bit must be 0 (it's basically the sign place-holder). The conversion to the negative representation is reasonably simple computationally:
Take your number
00000000 01101101 (It has 0s padding it to the left because it's 16 bits. If it was long, it'd be padded with more zeros, etc.)
Flip the bits
11111111 10010010
Add one.
11111111 10010011.
This is the two's complement number that Pablo was referring to. It's how C holds -109, bitwise.
When you logically shift it to the right by five bits you would APPEAR to get
00000111 11111100.
This number is most definitely not -4. (It doesn't have a 1 in the first bit, so it's not negative, and it's way too large to be 4 in magnitude.) Why is C giving you negative 4 then?
The reason is basically that the ISO C standard doesn't specify how a given compiler needs to treat bit-shifting of negative numbers (the result is implementation-defined). GCC does what's called sign extension: the idea is to pad the left bits with 1s if the initial number was negative before shifting, or with 0s if it was positive.
So instead of the 5 zeros that happened in the above bit-shift, you instead get:
11111111 11111100. That number is in fact negative 4! (Which is what you were consistently getting as a result.)
To see that that is in fact -4, you can just convert it back to a positive number using the two's complement method again:
00000000 00000011 (bits flipped)
00000000 00000100 (add one).
That's four alright, so your original number (11111111 11111100) was -4.

Left shift of unsigned 32-bit integer in assembly

Quick question on left shifts in assembly using the "sall" instruction.
From what I understand, "sall rightop, leftop" would translate to "leftop = leftop << rightop", so taking an integer and shifting the bits 4 spaces to the left would result in a multiplication by 2^4.
But what happens when the integer is unsigned, 32-bits, and is something like:
1111 1111 1111 1111 1111 0000 0010 0010
Would a left shift in this case become 1111 1111 1111 1111 0000 0010 0010 0000 ?
Obviously this is not a multiplication by 2^4.
Thanks!!
It is a multiplication by 2^4, modulo 2^32:
n = (n * 2^4) % (2 ^ 32)
You can detect the bits that are about to be "shifted out" by performing a shift right followed by masking, in this case
dropped = (n >> (32-4)) & ((1 << 4) - 1);
Left shifts (SAL, SHL) simply lose the bits on the left. The bits on the right get filled with 0. If any of the lost bits is 1, you have an overflow and the wrong result in terms of multiplication. You use S*L for both non-negative and negative values.
The regular right shift (SHR) works in exactly the same manner, but the direction is reverse, the bits on the left get filled with 0 and you lose the bits on the right. The result is effectively rounded/truncated towards 0. You use SHR of non-negative values because it does not preserve the sign of the value (0 gets written into it).
The arithmetic shift right (SAR) is slightly different. The most significant bit (=leftmost bit, sign bit) doesn't get filled with 0. It gets filled with its previous value, thus preserving the sign of the value. Another notable difference is that if the value is negative, the lost bits on the right result in rounding towards minus infinity instead of 0.

Finding how many bits it takes to represent a 2's complement using only bitwise functions

We can assume an int is 32 bits in 2's complement.
The only Legal operators are: ! ~ & ^ | + << >>
At this point i am using brute force
int a=0x01;
x=(x+1)>>1; //(have tried with just x instead of x+1 as well)
a = a+(!(!x));
...
with the last 2 statements repeated 32 times. This adds 1 to a every time x is shifted one place and is != 0, for all 32 bits.
Using the test compiler it says my method fails on test case 0x7FFFFFFF (a 0 followed by 31 1's) and says this number requires 32 bits to represent. I don't see why this isn't 31 (which my method computes). Can anyone explain why? And what do I need to change to account for this?
0x7FFFFFFF does require 32 bits. It could be expressed as an unsigned integer in only 31 bits:
111 1111 1111 1111 1111 1111 1111 1111
but if we interpret that as a signed integer using two's complement, then the leading 1 would indicate that it's negative. So we have to prepend a leading 0:
0 111 1111 1111 1111 1111 1111 1111 1111
which then makes it 32 bits.
As for what you need to change — your current program actually has undefined behavior. If 0x7FFFFFFF (2^31 - 1) is the maximum allowed integer value, then 0x7FFFFFFF + 1 cannot be computed. It is likely to result in -2^31, but there's absolutely no guarantee: the standard allows compilers to do absolutely anything in this case, and real-world compilers do in fact perform optimizations that can happen to give shocking results when you violate this requirement. Similarly, there's no specific guarantee what ... >> 1 will mean if ... is negative, though in this case compilers are required, at least, to choose a specific behavior and document it. (Most compilers choose to produce another negative number by copying the leftmost 1 bit, but there's no guarantee of that.)
So really the only sure fix is either:
to rewrite your code as a whole, using an algorithm that doesn't have these problems; or
to specifically check for the case that x is 0x7FFFFFFF (returning a hardcoded 32) and the case that x is negative (replacing it with ~x, i.e. -(x+1), and proceeding as usual).
Please try this code to check whether a signed integer x can be fitted into n bits. The function returns 1 when it does and 0 otherwise.
// http://www.cs.northwestern.edu/~wms128/bits.c
int check_bits_fit_in_2s_complement(signed int x, unsigned int n) {
    int mask = x >> 31;  // all 1s if x is negative, all 0s otherwise
    return !(((~x & mask) + (x & ~mask)) >> (n + ~0));  // n + ~0 == n - 1
}

Bitwise Shift Clarification

Assume I have the variable x initialized to 425. In binary, that is 110101001.
Bitshifting it to the right by 2 as follows: int a = x >> 2;, the answer is: 106. In binary that is 1101010. This makes sense as the two right-most bits are dropped and two zero's are added to the left side.
Bitshifting it to the left by 2 as follows: int a = x << 2;, the answer is: 1700. In binary this is 11010100100. I don't understand how this works. Why are the two left most bits preserved? How can I drop them?
Thank you,
This is because int is probably 32-bits on your system. (Assuming x is type int.)
So your 425, is actually:
0000 0000 0000 0000 0000 0001 1010 1001
When left-shifted by 2, you get:
0000 0000 0000 0000 0000 0110 1010 0100
Nothing gets shifted off until you go all the way past 32. (Strictly speaking, overflow of signed-integer is undefined behavior in C/C++.)
To drop the bits that are shifted off, you need to bitwise AND against a mask that's the original length of your number:
int a = (425 << 2) & 0x1ff; // 0x1ff is for 9 bits as the original length of the number.
First off, don't shift signed integers. The bitwise operations are only universally unambiguous for unsigned integral types.
Second, why shift if you can use * 4 and / 4?
Third, you only drop bits on the left when you exceed the size of the type. If you want to "truncate on the left" mathematically, perform a modulo operation:
(x * 4) % 256
The bitwise equivalent is AND with a bit pattern: (x << 2) & 0xFF
(That is, the fundamental unsigned integral types in C are always implicitly "modulo 2^n", where n is the number of bits of the type.)
Why would you expect them to be dropped? Your int (probably) consumes 4 bytes. You're shifting them into a space that it rightfully occupies.
The entire 4-byte space is used during evaluation. You'd need to shift bits entirely out of that space to "drop" them.

Representation of negative numbers in C?

How does C represent negative integers?
Is it by two's complement representation or by using the MSB (most significant bit)?
-1 in hexadecimal is ffffffff.
So please clarify this for me.
ISO C (C99 section 6.2.6.2/2 in this case but it carries forward to later iterations of the standard(a)) states that an implementation must choose one of three different representations for integral data types, two's complement, ones' complement or sign/magnitude (although it's incredibly likely that the two's complement implementations far outweigh the others).
In all those representations, positive numbers are identical, the only difference being the negative numbers.
To get the negative representation for a positive number, you:
invert all bits then add one for two's complement.
invert all bits for ones' complement.
invert just the sign bit for sign/magnitude.
You can see this in the table below:
number | two's complement | ones' complement | sign/magnitude
=======|=====================|=====================|====================
5 | 0000 0000 0000 0101 | 0000 0000 0000 0101 | 0000 0000 0000 0101
-5 | 1111 1111 1111 1011 | 1111 1111 1111 1010 | 1000 0000 0000 0101
Keep in mind that ISO doesn't mandate that all bits are used in the representation. They introduce the concept of a sign bit, value bits and padding bits. Now I've never actually seen an implementation with padding bits but, from the C99 rationale document, they have this explanation:
Suppose a machine uses a pair of 16-bit shorts (each with its own sign bit) to make up a 32-bit int and the sign bit of the lower short is ignored when used in this 32-bit int. Then, as a 32-bit signed int, there is a padding bit (in the middle of the 32 bits) that is ignored in determining the value of the 32-bit signed int. But, if this 32-bit item is treated as a 32-bit unsigned int, then that padding bit is visible to the user’s program. The C committee was told that there is a machine that works this way, and that is one reason that padding bits were added to C99.
I believe the machine they may have been referring to was the Datacraft 6024 (and its successors from Harris Corp). In those machines, you had a 24-bit word used for the signed integer but, if you wanted the wider type, it strung two of them together as a 47-bit value with the sign bit of one of the words ignored:
+---------+-----------+--------+-----------+
| sign(1) | value(23) | pad(1) | value(23) |
+---------+-----------+--------+-----------+
\____________________/ \___________________/
      upper word              lower word
(a) Interestingly, given the scarcity of modern implementations that actually use the other two methods, there's been a push to have two's complement accepted as the one true method. This has gone quite a long way in the C++ standard (WG21 is the workgroup responsible for this) and is now apparently being considered for C as well (by WG14).
C allows sign/magnitude, one's complement and two's complement representations of signed integers. Most typical hardware uses two's complement for integers and sign/magnitude for floating point (and yet another possibility -- a "bias" representation for the floating point exponent).
-1 in hexadecimal is ffffffff. So please clarify me in this regard.
In two's complement (by far the most commonly used representation), each bit except the most significant bit (MSB), from right to left (increasing order of magnitude), has the value 2^n, where n increases from zero by one. The MSB has the value -2^n.
So for example in an 8-bit two's-complement integer, the MSB has the place value -2^7 (-128), so the binary number 1111 1111 is equal to -128 + 0111 1111 = -128 + 127 = -1
One useful feature of two's complement is that a processor's ALU only requires an adder block to perform subtraction, by forming the two's complement of the right-hand operand. For example 10 - 6 is equivalent to 10 + (-6); in 8-bit binary (for simplicity of explanation) this looks like:
0000 1010
+1111 1010
---------
[1]0000 0100 = 4 (decimal)
Where the [1] is the discarded carry bit. Another example; 10 - 11 == 10 + (-11):
0000 1010
+1111 0101
---------
1111 1111 = -1 (decimal)
Another feature of two's complement is that it has a single value representing zero, whereas sign-magnitude and one's complement each have two; +0 and -0.
For integral types it's usually two's complement (implementation specific). For floating point, there's a sign bit.
