I found the following in some old and badly documented C code:
#define addr (((((147 << 8) | 87) << 8) | 117) << 8) | 107
What is it? Well, I know it's an IP address, and shifting 8 bits to the left makes some sense too. But can anyone explain this to me as a whole? What is happening there?
Thank you!
The code
(((((147 << 8) | 87) << 8) | 117) << 8) | 107
generates 4 bytes containing the IP 147.87.117.107.
The first step is the innermost bracket:
147<<8
147 = 1001 0011
1001 0011 << 8 = 1001 0011 0000 0000
The second byte, 87, is inserted by a bitwise OR with (147<<8). As you can see, the 8 bits on the right are all 0 (due to <<8), so the bitwise OR just inserts the 8 bits of 87:
1001 0011 0000 0000 (147<<8)
0000 0000 0101 0111 (87)
------------------- bitwise-or
1001 0011 0101 0111 (147<<8)|87
The same is done with the rest, so at the end you have 4 bytes saved in a single 32-bit integer.
An IPv4 address consists of four bytes, which means it can be stored in a 32-bit integer. This is taking the four parts of the IP address (147.87.117.107) and using bit-shifting and the bit-wise OR operator to "encode" the address in a single 4-byte quantity.
(Note: the address might be 107.117.87.147 - I can't remember offhand what order the bytes are stored in.)
The (hex) bytes of the resulting quantity look like:
aabb ccdd
Where aa is the hex representation of 147 (0x93), bb is 87 (0x57), cc is 117 (0x75), and dd is 107 (0x6b), so the resulting value is 9357756b.
Update: None of this applies to IPv6, since an IPv6 address is 128 bits instead of 32.
I'm trying to understand what this condition means.
Does it mean that after shifting the value will be equal to 1?
I mean, does it mean --> if (c >> a is 1)?
Note: c >> a & 1 is the same as (c >> a) & 1.
Bitwise AND operates on bits, so the possibilities are:
1101 & 0001 => 0001
0001 & 0001 => 0001
1010 & 0001 => 0000
0000 & 0001 => 0000
Now, in C, anything that's not zero is treated as true, so the statement means "if, after shifting, the least significant bit is 1", or perhaps "if, after shifting, the value is odd" if you're dealing with an odd/even check.
It executes the following statement or block if bit a of value c is true.
a+1 a a-1 1 0
... --+---+---+---+-- ... -+---+---+
| z | y | x | | q | p |
... --+---+---+---+-- ... -+---+---+
... -+---+---+
>> a | z | y |
... -+---+---+
... -+---+---+
&  1 | 0 | y |
... -+---+---+
>> has higher operator precedence than &.
So c >> a & 1 means "shift the value c by a bits to the right, then check if the lowest bit of the shifted value is set". Singling out certain bit values like this is known as bit masking, and 1 in this case is the mask.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#define PTXSHIFT 12
#define PTX(va) (((uint) (va) >> PTXSHIFT) & 0x3FF)
int main()
{
printf("0x%x", PTX(0x12345678));
return 0;
}
I tested it on an online compiler and I'm getting a compiler error saying 'uint' is undeclared. I guess the online C compiler can't import stdint.h: https://www.onlinegdb.com/online_c_compiler
So I manually substituted the values: (0x12345678 >> 12) & 0x3FF.
The problem: the output is 0x345. Can you explain why?
0x12345678 >> 12 = 0x12345 (??)
0x12345 & 0x3FF = 0x345 (??)
UPDATE: Sorry for the confusion, guys.
I'm asking for an explanation of the output 0x345. I'm confused about why 0x12345678 >> 12 is 0x12345 and 0x12345 & 0x3FF is 0x345.
What output did you expect?
Let's look at the bitwise AND, nibble by nibble, in hex:
1 2 3 4 5
AND 3 f f
--------------
3 4 5
or in binary, which might help:
0011 0100 0101
AND 0011 1111 1111
------------------
0011 0100 0101
It should be obvious that 3 & 3 is 3, just as 4 & f is 4 and so on.
You can get the same result using
#define PTX(va) (((uint32_t) (va) >> PTXSHIFT) & 0x3FF)
The result is expected because of the bitwise AND operation, as mentioned in the other answer.
1 2 3 4 5
0001 0010 0011 0100 0101
0000 0000 0011 1111 1111
0 0 3 F F
AND
--------------------------
0000 0000 0011 0100 0101
The shift operation shifts the bits of an unsigned int to the right, filling the left with 0's.
0x12345678
0001 0010 0011 0100 0101 0110 0111 1000
Shifts by 12
0001 0010 0011 0100 0101 [0110 0111 1000]---->
[0000 0000 0000]0001 0010 0011 0100 0101
0 0 0 1 2 3 4 5
That explains why 0x12345678 >> 12 = 0x12345
Truth table of AND
If you don't understand the AND operation, this is the truth table you should know:
A | B | A & B
--+---+------
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
I'm learning C programming and this is my problem. I feel like I've learned the macro topic in C but I guess I'm not quite ready yet.
#define PGSIZE 4096
#define CONVERT(sz) (((sz)+PGSIZE-1) & ~(PGSIZE-1))
printf("0x%x", CONVERT(0x123456));
Here is the problem. My expected output is 0x100000000000 but it prints 0x124000.
((sz)+PGSIZE-1) = (0x123456)+4096-1
= (0x123456)+(0x1000 0000 0000) - 1 //4096 is 2^12
= 0x1000 0012 3456 - 1
= 0x1000 0012 3455
~(PGSIZE-1) => ~(0x0111 1111 1111) = 0x1000 0000 0000
((sz)+PGSIZE-1) & ~(PGSIZE-1) = (0x1000 0012 3455) & (0x1000 0000 0000)
= 0x100000000000
But when I ran the program, it prints 0x124000.
What am I doing wrong?
You showed in the question:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0x123456)+(0x1000 0000 0000) - 1 //4096 is 2^12
=0x1000 0012 3456 - 1
You converted 4096 to a binary notation, but then treat it as a hexadecimal number. That won't work. If you want to keep the hexadecimal notation, that's:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0x123456)+(0x1000) - 1
=0x124456 - 1
Or converting both to binary, that's:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0b1_0010_0011_0100_0101_0110)+(0b1_0000_0000_0000) - 1
= 0b1_0010_0100_0100_0101_0110 - 1
The error is in your calculation.
2^12 is not 1000 0000 0000, but 0001 0000 0000 0000.
The weights of binary begin at 2^0, which is one, so 2^12 sits at the 13th position; 4096 is therefore 0x1000.
If you use this in your manual calculation, you will get 0x124000 as your answer.
The calculation below also answers the doubt from your comment on the previous answer: "how does 0x124455 & 1000 become 0x124000? Does it automatically fill 1s to the front? Could you explain a little more about it in the question?"
4096 = 0x1000
4096-1 => 0xfff => 0x0000 0fff
~(4096-1) is thus 0xfffff000
Coming to the addition part in macro
(0x123456)+4096-1
=>0x123456+0x1000-1
=>0x124456-1
=>0x124455
Your result will be 0x124455 & 0xfffff000, which is 0x124000, the correct output.
I want to ask about a C operator in this code. My friend asked about it, but I've never seen this operator:
binfo_out.biSizeImage = ( ( ( (binfo_out.biWidth * binfo_out.biBitCount) + 31) & ~31) / 8) * abs(out_bi.biHeight);
What does the operator & ~31 mean? Can anybody explain this?
The & operator is a bitwise AND. The ~ operator is a bitwise NOT (i.e. inverts the bits). As 31 is binary 11111, ~31 is binary 1111111....111100000 (i.e. a number which is all ones, but has five zeroes at the end). Anding a number with this thus clears the least significant five bits, which (if you think about it) rounds down to a multiple of 32.
What does the whole thing do? Note it adds 31 first. This has the effect that the whole thing rounds something UP to the next multiple of 32.
This might be used to calculate, for instance, how many bits are going to be used to store something if you can only use 32-bit quantities to store it, as there is going to be some wastage in the last 32-bit quantity.
31 in binary representation is 11111, so ~31 is five zeros (00000) preceded by all 1's. So it makes the last 5 bits zero, i.e. it masks the last 5 bits.
Here ~ is the NOT operator, i.e. it gives the one's complement, and & is the AND operator.
& is the bitwise AND operator. It and's every corresponding bit of two operands on its both sides. In an example, it does the following:
Let char be a type of 8 bits.
unsigned char a = 5;
unsigned char b = 12;
Their bit representation would be as follows:
a --> 0 0 0 0 0 1 0 1 // 5
b --> 0 0 0 0 1 1 0 0 // 12
And the bitwise AND of those would be:
a & b --> 0 0 0 0 0 1 0 0 // 4
Now, the ~ is the bitwise NOT operator, and it negates every single bit of the operand it prefixes. In an example, it does the following:
With the same a from the previous example, the ~a would be:
~a --> 1 1 1 1 1 0 1 0 // 250
Now with all this knowledge, x & ~31 would be the bitwise AND of x and ~31, where the bit representation of ~31 looks like this:
~31 --> 1111 1111 1111 1111 1111 1111 1110 0000 // -32 on my end
So the result would be whatever x has in its bits, with its last 5 bits cleared.
& ~31
means a bitwise AND of the operand on the left of & with the bitwise NOT of 31.
http://en.wikipedia.org/wiki/Bitwise_operation
The number 31 in binary is 11111, and ~ in this case is the unary one's complement operator. So, assuming a 4-byte int:
~31 = 11111111 11111111 11111111 11100000
The & is the bitwise AND operator. So you're taking the value of:
((out_bi.biWidth * out_bi.biBitCount) + 31)
And performing a bitwise AND with the above value, which is essentially blanking the 5 low-order bits of the left-hand result.
I can't get the two's complement calculation to work.
I know C compiles ~b, which would invert all bits, to -6 if b = 5. But why?
If int b = 101 in binary, inverting all bits gives 010; then for two's complement notation I just add 1, but that becomes 011, i.e. 3, which is the wrong answer.
How should I calculate with the bit-inversion operator ~?
Actually, here's how 5 is usually represented in memory (16-bit integer):
0000 0000 0000 0101
When you invert 5, you flip all the bits to get:
1111 1111 1111 1010
That is actually -6 in decimal form. I think in your question you were flipping only the last three bits, when in fact you have to consider all the bits that comprise the integer.
The problem with b = 101 (5) is that you have chosen one too few binary digits.
binary | decimal
~101 = 010 | ~5 = 2
~101 + 1 = 011 | ~5 + 1 = 3
If you choose 4 bits, you'll get the expected result:
binary | decimal
~0101 = 1010 | ~5 = -6
~0101 + 1 = 1011 | ~5 + 1 = -5
With only 3 bits you can encode integers from -4 to +3 in 2's complement representation.
With 4 bits you can encode integers from -8 to +7 in 2's complement representation.
-6 was getting truncated to 2 and -5 was getting truncated to 3 in 3 bits. You needed at least 4 bits.
And as others have already pointed out, ~ simply inverts all bits in a value, so, ~~17 = 17.
~b is not a two's complement operation. It is a bitwise NOT operation: it just inverts every bit in a number, therefore ~b is not equal to -b.
Examples:
b = 5
binary representation of b: 0000 0000 0000 0101
binary representation of ~b: 1111 1111 1111 1010
~b = -6
b = 17
binary representation of b: 0000 0000 0001 0001
binary representation of ~b: 1111 1111 1110 1110
~b = -18
binary representation of ~(~b): 0000 0000 0001 0001
~(~b) = 17
~ simply inverts all the bits of a number:
~(~a)=17 if a=17
~0...010001 = 1...101110 ( = -18 )
~1...101110 = 0...010001 ( = 17 )
You need to add 1 only in case you want to negate a number (i.e. get its two's complement), e.g. to get -17 out of 17.
~b + 1 = -b
So:
~(~b) equals ~(-b - 1) equals -(-b - 1) -1 equals b
In fact, ~ reverses all bits, and if you apply ~ again, they reverse back.
I can't get the two's complement calculation to work.
I know C compiles ~b, which would invert all bits, to -6 if b = 5. But why?
Because you are using two's complement. Do you know what two's complement is?
Let's say that we have a byte variable (signed char). Such a variable can hold values from -128 to 127.
Binary, it works like this:
0000 0000 // 0
...
0111 1111 // 127
1000 0000 // -128
1000 0001 // -127
...
1111 1111 // -1
Signed numbers are often described with a circle.
If you understand the above, then you understand why ~1 equals -2 and so on.
Had you used one's complement, then ~1 would have been -1, because one's complement has a signed zero. For a byte described with one's complement, values would go from 0 up to 127, then from -127 up to -0, and back to 0.
You declared b as an int. That means the value of b is stored in 32 bits, and the complement (~) takes place on the whole 32-bit word, not just the last 3 bits as you were doing (only the low 16 bits are shown below):
int b = 5  // b in binary: 0000 0000 0000 0101
~b         // ~b in binary: 1111 1111 1111 1010 = -6 in decimal
The most significant bit stores the sign of the integer (1: negative, 0: positive), so 1111 1111 1111 1010 is -6 in decimal.
Similarly:
b=17 // 17 in binary 0000 0000 0001 0001
~b // = 1111 1111 1110 1110 = -18