Explanation of the output obtained in the following C program

Please explain how the main declared at the end of the program works here, how '\' is used, and why the output is 0.
#define P printf("%d\n", -1^~0);
#define M(P) int main()\
{\
P\
return 0;\
}
M(P)

After macro expansion this is equivalent to:
int main() { printf("%d\n", -1^~0); return 0; }
Then ~0 is -1 on a two's complement system, so -1 ^ ~0 is -1 ^ -1, which is 0, since XORing a number with itself gives 0.
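As a quick sanity check (a minimal sketch, not part of the original question), the identity can be verified directly:
#include <stdio.h>

int main(void)
{
    int x = 12345;                        /* any value works */
    printf("~0      = %d\n", ~0);         /* -1 on a two's complement machine */
    printf("-1 ^ ~0 = %d\n", -1 ^ ~0);    /* 0 */
    printf("x ^ x   = %d\n", x ^ x);      /* XOR of a value with itself is 0 */
    return 0;
}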

Compiling with gcc and adding the -E option (i.e., stop after preprocessing and output the preprocessed code) reveals what's happening:
# 1 "output.c"
# 1 "<command-line>"
# 1 "output.c"
int main() { printf("%d\n", -1^~0); return 0; }
Basically you are just printing an integer: -1^~0.
This is equivalent to -1 XOR 0xFFFFFFFF (assuming 32-bit integers here). Since the two's complement representation of -1 is 0xFFFFFFFF, the result is always 0 (1 XOR 1 == 0).

The preprocessor expands your code to:
int main()
{
printf("%d\n", -1^~0);
return 0;
}
~0 is the one's complement of 0; in it, all bits are 1 (in most implementations).
0000 0000 0000 0000 0000 0000 0000 0000 = 0
1111 1111 1111 1111 1111 1111 1111 1111 <== ~0, complement each bit
So suppose you have a 32-bit int:
-1 is the two's complement of 1; in it, all bits are also one:
0000 0000 0000 0000 0000 0000 0000 0001 <== 1
1111 1111 1111 1111 1111 1111 1111 1110 <== 1's complement of 1
1111 1111 1111 1111 1111 1111 1111 1111 <== 2's complement of 1
So -1 ^ ~0 outputs 0,
because ^ is the XOR operator and 1 XOR 1 = 0.
~0 == 1111 1111 1111 1111 1111 1111 1111 1111
-1 == 1111 1111 1111 1111 1111 1111 1111 1111
------------------------------------------- Bitwise XOR
~0 ^ -1 == 0000 0000 0000 0000 0000 0000 0000 0000
That is how XOR works: each result bit is 1 only where the corresponding input bits differ.
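If it helps, here is a tiny sketch (not from the original answers) that prints the single-bit XOR truth table:
#include <stdio.h>

int main(void)
{
    /* a ^ b is 1 exactly when the two bits differ */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d ^ %d = %d\n", a, b, a ^ b);
    return 0;
}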

Use:
gcc -E <file_name>.c
-E Preprocess only; do not compile, assemble or link
Output:
int main() { printf("%d\n", -1^~0); return 0; }

Related

Calculation of Bitwise NOT

How to calculate ~a manually? I am seeing these types of questions very often.
#include <stdio.h>
int main()
{
unsigned int a = 10;
a = ~a;
printf("%d\n", a);
}
The result of the ~ operator is the bitwise complement of its (promoted) operand (C11 draft §6.5.3.3).
When used with an unsigned type, it is sufficient to mimic ~ with an exclusive-or against UINT_MAX, which has the same type and value as (unsigned) -1.
unsigned int a = 10;
// a = ~a;
a ^= -1;
You could XOR it with a bitmask of all 1's.
unsigned int a = 10, mask = 0xFFFFFFFF;
a = a ^ mask;
This is assuming of course that an int is 32 bits. That's why it makes more sense to just use ~.
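Putting the three variants side by side (a small sketch, assuming a 32-bit unsigned int so that the hard-coded 0xFFFFFFFF mask matches UINT_MAX):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int a = 10;

    unsigned int v1 = ~a;               /* bitwise complement directly           */
    unsigned int v2 = a ^ UINT_MAX;     /* XOR with all ones, works at any width */
    unsigned int v3 = a ^ 0xFFFFFFFFu;  /* XOR with a hard-coded 32-bit mask     */

    printf("%u %u %u\n", v1, v2, v3);   /* all three print 4294967285 here       */
    return 0;
}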
Just convert the number to binary form, and change each '1' to '0' and each '0' to '1'.
That is:
10 (decimal)
Converted to binary (32 bits as usual in an int) gives us:
0000 0000 0000 0000 0000 0000 0000 1010
Then apply the ~ operator:
1111 1111 1111 1111 1111 1111 1111 0101
Now you have a bit pattern that could be interpreted as an unsigned 32-bit number or as a signed one. Since you are printing it with %d, it is interpreted as a signed int (even though a is declared unsigned).
To find the decimal value of a signed (two's complement) number, do this:
If the most significant bit (the leftmost) is 0, then just convert the binary number back to decimal as usual.
If the most significant bit is 1 (our case here), then change each '1' to '0' and each '0' to '1', add 1, convert to decimal, and prepend a minus sign to the result.
So it is:
1111 1111 1111 1111 1111 1111 1111 0101
^
|
Its most significant bit is 1, so first we flip the 0s and 1s:
0000 0000 0000 0000 0000 0000 0000 1010
And then, we add 1
0000 0000 0000 0000 0000 0000 0000 1010
1
---------------------------------------
0000 0000 0000 0000 0000 0000 0000 1011
Take this number, convert it back to decimal, and prepend a minus sign to the result. The converted value is 11; with the minus sign, it is -11.
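The same flip-and-add-one rule can be checked in code (a rough sketch, assuming a 32-bit two's complement int so the cast to int behaves as described):
#include <stdio.h>

int main(void)
{
    unsigned int bits = ~10u;             /* 1111 ... 1111 0101                  */

    /* The MSB is 1, so: flip the bits and add 1 to recover the magnitude. */
    unsigned int magnitude = ~bits + 1u;  /* 0000 ... 0000 1011 == 11            */

    printf("pattern read as signed: %d\n", (int)bits); /* -11 on such systems    */
    printf("magnitude recovered   : %u\n", magnitude); /* 11                     */
    return 0;
}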
This function shows the binary representation of an int and swaps the 0's and 1's:
void not(unsigned int x)
{
    int i;
    /* Walk from the most significant bit down to bit 0 and
       print the complement of each bit. */
    for (i = (sizeof(int) * 8) - 1; i >= 0; i--)
        (x & (1u << i)) ? putchar('0') : putchar('1');
    printf("\n");
}
Source: https://en.wikipedia.org/wiki/Bitwise_operations_in_C#Right_shift_.3E.3E
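For completeness, a possible driver (a sketch that repeats the function above so it compiles on its own; the expected output assumes a 32-bit int):
#include <stdio.h>

void not(unsigned int x)
{
    int i;
    for (i = (sizeof(int) * 8) - 1; i >= 0; i--)
        (x & (1u << i)) ? putchar('0') : putchar('1');
    printf("\n");
}

int main(void)
{
    not(10);   /* prints 11111111111111111111111111110101 on a 32-bit int */
    return 0;
}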

Bitwise addition of opposite signs

int main(){
int a = 10, b = -2;
printf("\n %d \n",a^b);
return 0;
}
This program outputs -12. I could not understand how. Please explain.
0111 1110 -> 2's complement of -2
0000 1010 -> 10
---------
0111 0100
This number seems to be greater than -12 and is positive. But how did I get the output as -12?
To find the two's complement of a negative integer, first find the binary representation of its magnitude. Then flip all its bits, i.e., apply the bitwise NOT operator ~. Then add 1 to it. Therefore, we have
2 --> 0000 0000 0000 0010
~2 --> 1111 1111 1111 1101 // flip all the bits
~2 + 1 --> 1111 1111 1111 1110 // add 1
Therefore, the binary representation of -2 in two's complement is
1111 1111 1111 1110
Now, assuming a 16-bit int for brevity, the representations of a and b in two's complement are:
a --> 0000 0000 0000 1010 --> 10
b --> 1111 1111 1111 1110 --> -2
a^b --> 1111 1111 1111 0100 --> -12
The operator ^ is the bitwise XOR (exclusive OR) operator. It operates on the corresponding bits of a and b and evaluates to 1 only when the two bits differ; otherwise it evaluates to 0.
Seems legit!
1111 1110 (-2)
xor
0000 1010 (10)
=
1111 0100 (-12)
^ is the bitwise XOR, not power
a = 10 = 0000 1010
b = -2 = 1111 1110
──────────────────
a^b = 1111 0100 = -12
(int) -2 = 0xfffffffe
(int) 10 = 0x0000000a
0xfffffffe ^ 0x0000000a = 0xfffffff4 = (int) -12
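The whole thing is easy to confirm with a short program (a quick sketch, assuming a 32-bit int):
#include <stdio.h>

int main(void)
{
    int a = 10, b = -2;

    printf("a   = %3d (0x%08x)\n", a, (unsigned)a);           /* 0x0000000a      */
    printf("b   = %3d (0x%08x)\n", b, (unsigned)b);           /* 0xfffffffe      */
    printf("a^b = %3d (0x%08x)\n", a ^ b, (unsigned)(a ^ b)); /* -12, 0xfffffff4 */
    return 0;
}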

Bitwise operations in C explanation

I have the following code in c:
unsigned int a = 60; /* 60 = 0011 1100 */
int c = 0;
c = ~a; /*-61 = 1100 0011 */
printf("c = ~a = %d\n", c );
c = a << 2; /* 240 = 1111 0000 */
printf("c = a << 2 = %d\n", c );
The first output is -61 while the second one is 240. Why does the first printf compute the two's complement of 1100 0011 while the second one just converts 1111 0000 to its decimal equivalent?
You have assumed that an int is only 8 bits wide. This is probably not the case on your system, which is likely to use 16 or 32 bits for int.
In the first example, all the bits are inverted. This is actually a straight inversion, not two's complement:
1111 1111 1111 1111 1111 1111 1100 0011 (32-bit)
1111 1111 1100 0011 (16-bit)
In the second example, when you shift it left by 2, the highest-order bit is still zero. You have misled yourself by depicting the numbers as 8 bits in your comments.
0000 0000 0000 0000 0000 0000 1111 0000 (32-bit)
0000 0000 1111 0000 (16-bit)
Try to avoid doing bitwise operations with signed integers -- often it'll lead you into undefined behavior.
The situation here is that you're taking unsigned values and assigning them to a signed variable. For ~60 the value doesn't fit in an int, so the conversion to signed is implementation-defined; you see -61 because the bit pattern of ~60 is also the two's complement representation of -61. On the other hand, 60 << 2 comes out correct because 240 has the same representation as both a signed and an unsigned integer.
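One way to sidestep the signed conversion entirely is to keep everything unsigned and print with %u (a sketch of that approach, not the only option):
#include <stdio.h>

int main(void)
{
    unsigned int a = 60;            /* low byte: 0011 1100                   */
    unsigned int c;

    c = ~a;                         /* every higher bit becomes 1 as well    */
    printf("c = ~a = %u\n", c);     /* 4294967235 with a 32-bit unsigned int */

    c = a << 2;                     /* low byte: 1111 0000, i.e. 240         */
    printf("c = a << 2 = %u\n", c);
    return 0;
}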

Encoding A Decimal Value Into a Fixed Number of Bits

There is a heated, ongoing disagreement between myself and someone more senior that I need to resolve. Thus, I turn to you internets. Don't fail me now!
The objective is to take a decimal value and encode it into 24 bits. It's a simple linear scale so that 0x000000 is the min value and 0xFFFFFF is the max value.
We both agree on the basic formula of how to achieve this: (max-min)/range. The issue is the denominator. The other party says that this should be 1 << 24 (one left shifted 24 bits). This yields 16777216. I argue (and have seen this done previously) that the denominator should be 0xFFFFFF, or 16777215.
Who is correct?
The denominator should definitely be 16777215, as you described. 2^24 is 16777216, but that number cannot be represented in 24 bits. The maximum is 2^24 - 1 (16777215), i.e. 0xFFFFFF, as you say.
I'd second Tejolote's answer, since shifting a 1 left by 0 to 24 places gives you values between 1 and 16777216.
(32-bit number)
0000 0000 0000 0000 0000 0000 0000 0001 // (1 << 0)
0000 0001 0000 0000 0000 0000 0000 0000 // (1 << 24)
If you were to mask those values to 24 bits, (1 << 0) would stay 1 but (1 << 24) would become 0 (probably not what you intended):
(mask to a 24-bit number)
  0000 0000 0000 0000 0000 0000 0000 0001 // (1 << 0)
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
==========================================
  0000 0000 0000 0000 0000 0000 0000 0001 // result of '1', correct
and
  0000 0001 0000 0000 0000 0000 0000 0000 // (1 << 24)
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
==========================================
  0000 0000 0000 0000 0000 0000 0000 0000 // result of '0', wrong
What you want instead is a range from 0 to 16777215:
  0000 0000 0000 0000 0000 0000 0000 0000 // (1 << 0) - 1
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
==========================================
  0000 0000 0000 0000 0000 0000 0000 0000 // result of '0', correct
and
  0000 0000 1111 1111 1111 1111 1111 1111 // (1 << 24) - 1
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
==========================================
  0000 0000 1111 1111 1111 1111 1111 1111 // result of '16777215', correct
OP "So let's say that I'm encoding speed for a car. 0.0 mph would be 0x000000 and 150.0mph would be represented by 0xFFFFFF. It's a simple linear scale from there."
Yes 16777215 = 0xFFFFFF - 0x000000
0.0 --> 0x000000
150.0 --> 0xFFFFFF
y = dy/dx(x - x0) + y0 = (0xFFFFFF - 0x000000)/(150.0 - 0.0)*(x - 0.0) + 0x000000
But perhaps the senior was thinking of the decimal value on the upper end as a speed one could approach but never attain:
0.0 --> 0x000000
150.0 --> 0xFFFFFF + 1
16777216 = 0xFFFFFF + 1 - 0x000000
I'd recommend buying your senior a brew. Learn from them - they cheat
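To make the disagreement concrete, here is a rough sketch of an encode/decode pair using the 16777215 denominator. The names (encode_speed, SPEED_MAX) and the 150.0 mph full-scale value are just taken from the example above, not from any real API:
#include <stdio.h>
#include <stdint.h>

#define SCALE_MAX 0xFFFFFFu   /* 16777215: largest value a 24-bit field can hold */
#define SPEED_MAX 150.0       /* example full-scale value from the question      */

/* Map 0.0..SPEED_MAX linearly onto 0x000000..0xFFFFFF, rounding to nearest. */
static uint32_t encode_speed(double mph)
{
    return (uint32_t)(mph / SPEED_MAX * SCALE_MAX + 0.5);
}

static double decode_speed(uint32_t code)
{
    return (double)code / SCALE_MAX * SPEED_MAX;
}

int main(void)
{
    printf("0.0      -> 0x%06X\n", encode_speed(0.0));      /* 0x000000 */
    printf("150.0    -> 0x%06X\n", encode_speed(150.0));    /* 0xFFFFFF */
    printf("0xFFFFFF -> %.1f\n", decode_speed(0xFFFFFFu));  /* 150.0    */
    return 0;
}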

Type conversion: signed int to unsigned long in C

I'm currently up to chapter 2 in The C Programming Language (K&R) and reading about bitwise operations.
This is the example that sparked my curiosity:
x = x & ~077
Assuming a 16-bit word length and 32-bit long type, what I think would happen is 077 would first be converted to:
0000 0000 0011 1111 (16 bit signed int).
This would then be complemented to:
1111 1111 1100 0000.
My question is what would happen next for the different possible types of x? If x is a signed int the answer is trivial. But, if x is a signed long I'm assuming ~077 would become:
1111 1111 1111 1111 1111 1111 1100 0000
following 2s complement to preserve the sign. Is this correct?
Also, if x is an unsigned long will ~077 become:
0000 0000 0000 0000 1111 1111 1100 0000
Or, will ~077 be converted to a signed long first:
1111 1111 1111 1111 1111 1111 1100 0000
...after which it is converted to an unsigned long (no change to bits)?
Any help clarifying whether or not this operation will always set only the last 6 bits to zero would be appreciated.
Whatever data-type you choose, ~077 will set the rightmost 6 bits to 0 and all others to 1.
Assuming 16-bit ints and 32-bit longs, there are 4 cases:
Case 1
unsigned int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
unsigned long y = x; // y = 0000 0000 0000 0000 1111 1111 1100 0000
Case 2
unsigned int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
long y = x; // y = 0000 0000 0000 0000 1111 1111 1100 0000
Case 3
int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
unsigned long y = x; // y = 1111 1111 1111 1111 1111 1111 1100 0000
Case 4
int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
long y = x; // y = 1111 1111 1111 1111 1111 1111 1100 0000
This means that sign extension happens when the source type is signed. When the source is unsigned, the sign bit is not extended and the upper bits are set to 0.
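The four cases can be packed into one runnable program. This is a sketch that mimics the "16-bit int, 32-bit long" assumption with fixed-width types, so the output matches the comments above regardless of the platform's native sizes:
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint16_t ux = ~(uint16_t)077;   /* bit pattern 0xFFC0, value 65472 */
    int16_t  sx = ~(int16_t)077;    /* bit pattern 0xFFC0, value -64   */

    uint32_t y1 = ux;   /* case 1: unsigned -> wider unsigned, zero-extended */
    int32_t  y2 = ux;   /* case 2: unsigned -> wider signed, value preserved */
    uint32_t y3 = sx;   /* case 3: negative -> unsigned, wraps to 0xFFFFFFC0 */
    int32_t  y4 = sx;   /* case 4: signed -> wider signed, sign-extended     */

    printf("case 1: 0x%08" PRIX32 "\n", y1);            /* 0x0000FFC0 */
    printf("case 2: 0x%08" PRIX32 "\n", (uint32_t)y2);  /* 0x0000FFC0 */
    printf("case 3: 0x%08" PRIX32 "\n", y3);            /* 0xFFFFFFC0 */
    printf("case 4: 0x%08" PRIX32 "\n", (uint32_t)y4);  /* 0xFFFFFFC0 */
    return 0;
}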
x = x & ~077 //~077=11111111111111111111111111000000(not in every case)
~077 is a constant expression evaluated at compile time, so its value is converted to the type of x, and the AND operation always sets the last 6 bits of x to 0 while the remaining bits keep whatever values they had before. Like
//let x=256472--> Binary--> 0000 0000 0000 0011 1110 1001 1101 1000
x = x & ~077;
// now x = 0000 0000 0000 0011 1110 1001 1100 0000 Decimal--> 256448
So the last 6 bits are set to 0 irrespective of the data type, and the remaining bits stay the same. As K&R puts it, "The portable form involves no extra cost, since ~077 is a constant expression that can be evaluated at compile time."
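A quick sketch confirming the masking behaviour with the value used above:
#include <stdio.h>

int main(void)
{
    int x = 256472;     /* 0000 0000 0000 0011 1110 1001 1101 1000 */
    x = x & ~077;       /* clear only the low 6 bits               */
    printf("%d\n", x);  /* 256448                                  */
    return 0;
}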
