The macro prints incorrect output - c

I'm learning C and ran into this problem. I thought I had understood macros, but apparently I'm not quite ready yet.
#define PGSIZE 4096
#define CONVERT(sz) (((sz)+PGSIZE-1) & ~(PGSIZE-1))
printf("0x%x", CONVERT(0x123456));
Here is the problem. My expected output is 0x100000000000 but it prints 0x124000.
((sz)+PGSIZE-1) = (0x123456)+4096-1
= (0x123456)+(0x1000 0000 0000) - 1 //4096 is 2^12
= 0x1000 0012 3456 - 1
= 0x1000 0012 3455
~(PGSIZE-1) => ~(0x0111 1111 1111) = 0x1000 0000 0000
((sz)+PGSIZE-1) & ~(PGSIZE-1) = (0x1000 0012 3455) & (0x1000 0000 0000)
= 0x100000000000
But when I run the program, it prints 0x124000.
What am I doing wrong?

You showed in the question:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0x123456)+(0x1000 0000 0000) - 1 //4096 is 2^12
=0x1000 0012 3456 - 1
You converted 4096 to binary notation, but then treated it as a hexadecimal number. That won't work. If you want to keep the hexadecimal notation, that's:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0x123456)+(0x1000) - 1
=0x124456 - 1
Or converting both to binary, that's:
((sz)+PGSIZE-1) => (0x123456)+4096-1
=(0b1_0010_0011_0100_0101_0110)+(0b1_0000_0000_0000) - 1
= 0b1_0010_0100_0100_0101_0110 - 1

The error is in your calculation.
2^12 is not 1000 0000 0000, but 0001 0000 0000 0000.
Binary place values start at 2^0 = 1, so the 2^12 bit sits in the 13th position; 4096 is therefore 1 0000 0000 0000 in binary, which is 0x1000 in hex.
If you use this for your manual calculation you will get 0x124000 as your answer.
The calculation below also answers the doubt you raised in a comment on the previous answer: "how 0x124455 & 1000 becomes 0x124000? Does it automatically fill 1s to the front?"
4096 = 0x1000
4096-1 => 0xfff => 0x0000 0fff
~(4096-1) is thus 0xfffff000
Coming to the addition part of the macro:
(0x123456)+4096-1
=>0x123456+0x1000-1
=>0x124456-1
=>0x124455
Your result is 0x124455 & 0xfffff000, which is 0x124000, the correct output.


unexpected value on bit manipulation with macro in C [closed]

Closed 5 years ago.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define PTXSHIFT 12
#define PTX(va) (((uint) (va) >> PTXSHIFT) & 0x3FF)

int main()
{
    printf("0x%x", PTX(0x12345678));
    return 0;
}
I tested it on an online compiler and I'm getting a compiler error saying 'uint' is undeclared. I guess the online C compiler can't import stdint.h: https://www.onlinegdb.com/online_c_compiler
So I manually put in the values: (0x12345678>>12)&0x3FF.
Problem: the output is 0x345. Can you explain why?
0x12345678 >> 12 = 0x12345 (??)
0x12345 & 0x3FF = 0x345 (??)
UPDATE Sorry for the confusion guys.
I'm asking for the explanation on the output 0x345. I'm confused why 0x12345678 >> 12 is 0x12345 and 0x12345 & 0x3FF is 0x345.
What output did you expect?
Let's look the bitwise AND, nibble by nibble, in hex:
1 2 3 4 5
AND 3 f f
--------------
3 4 5
or in binary, which might help:
0011 0100 0101
AND 0011 1111 1111
------------------
0011 0100 0101
It should be obvious that 3 & 3 is 3, just as 4 & f is 4 and so on.
You can get the same result using
#define PTX(va) (((uint32_t) (va) >> PTXSHIFT) & 0x3FF)
The result is expected because of the bitwise AND operation, as mentioned in the other answer.
1 2 3 4 5
0001 0010 0011 0100 0101
0000 0000 0011 1111 1111
0 0 3 F F
AND
--------------------------
0000 0000 0011 0100 0101
The shift operation shifts the bits of an unsigned int to the right, filling the vacated positions on the left with 0s.
0x12345678
0001 0010 0011 0100 0101 0110 0111 1000
Shifts by 12
0001 0010 0011 0100 0101 [0110 0111 1000]---->
[0000 0000 0000]0001 0010 0011 0100 0101
0 0 0 1 2 3 4 5
That explains why 0x12345678 >> 12 = 0x12345
Truth table of AND
If you don't understand the AND operation, then the truth table below is what you need to know.
A | B | A & B
--+---+------
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

How to find magic bitboards?

const int BitTable[64] = {
    63, 30, 3, 32, 25, 41, 22, 33, 15, 50, 42, 13, 11, 53, 19, 34, 61, 29, 2,
    51, 21, 43, 45, 10, 18, 47, 1, 54, 9, 57, 0, 35, 62, 31, 40, 4, 49, 5, 52,
    26, 60, 6, 23, 44, 46, 27, 56, 16, 7, 39, 48, 24, 59, 14, 12, 55, 38, 28,
    58, 20, 37, 17, 36, 8
};

int pop_1st_bit(uint64 *bb) {
    uint64 b = *bb ^ (*bb - 1);
    unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
    *bb &= (*bb - 1);
    return BitTable[(fold * 0x783a9b23) >> 26];
}

uint64 index_to_uint64(int index, int bits, uint64 m) {
    int i, j;
    uint64 result = 0ULL;
    for (i = 0; i < bits; i++) {
        j = pop_1st_bit(&m);
        if (index & (1 << i)) result |= (1ULL << j);
    }
    return result;
}
It's from the Chess Programming Wiki:
https://www.chessprogramming.org/Looking_for_Magics
It's part of some code for finding magic numbers.
The argument uint64 m is a bitboard representing the possible blocked squares for either a rook or bishop move. Example for a rook on the e4 square:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
The edge squares are zero because they always block, and reducing the number of bits needed is apparently helpful.
/* Bitboard, LSB to MSB, a1 through h8:
* 56 - - - - - - 63
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* 0 - - - - - - 7
*/
So in the example above, index_to_uint64 takes an index (0 to 2^bits), and the number of bits set in the bitboard (10), and the bitboard.
It then calls pop_1st_bit once for each set bit, followed by another shifty bit of code. pop_1st_bit XORs the bitboard with itself minus one (why?). Then it ANDs it with a full 32 bits, and somewhere around here my brain runs out of RAM. Somehow the magical hex number 0x783a9b23 is involved (is that the number sequence from Lost?). And there is this ridiculous mystery array of randomly ordered numbers from 0-63 (BitTable[64]).
Alright, I have it figured out.
First, some terminology:
blocker mask: A bitboard containing all squares that can block a piece, for a given piece type and the square the piece is on. It excludes terminating edge squares because they always block.
blocker board: A bitboard containing occupied squares. It only has squares which are also in the blocker mask.
move board: A bitboard containing all squares a piece can move to, given a piece type, a square, and a blocker board. It includes terminating edge squares if the piece can move there.
Example for a rook on the e4 square, and there are some random pieces on e2, e5, e7, b4, and c4.
The blocker mask A blocker board The move board
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Some things to note:
The blocker mask is always the same for a given square and piece type (either rook or bishop).
Blocker boards include friendly & enemy pieces, and it is a subset of the blocker mask.
The resulting move board may include moves that capture your own pieces, however these moves are easily removed afterward: moveboard &= ~friendly_pieces
The goal of the magic numbers method is to very quickly look up a pre-calculated move board for a given blocker board. Otherwise you'd have to (slowly) calculate the move board every time. This only applies to sliding pieces, namely the rook and bishop. The queen is just a combination of the rook and bishop.
Magic numbers can be found for each square & piece type combo. To do this, you have to calculate every possible blocker board variation for each square/piece combo. This is what the code in question is doing. How it's doing it is still a bit of a mystery to me, but that also seems to be the case for the apparent original author, Matt Taylor. (Thanks to #Pradhan for the link)
So what I've done is re-implemented the code for generating all possible blocker board variations. It uses a different technique, and while it's a little slower, it's much easier to read and comprehend. The fact that it's slightly slower is not a problem, because this code isn't speed critical. The program only has to do it once at program startup, and it only takes microseconds on a dual-core i5.
/* Generate a unique blocker board, given an index (0..2^bits) and the blocker mask
 * for the piece/square. Each index will give a unique blocker board. */
static uint64_t gen_blockerboard (int index, uint64_t blockermask)
{
    /* Start with a blockerboard identical to the mask. */
    uint64_t blockerboard = blockermask;

    /* Loop through the blockermask to find the indices of all set bits. */
    int8_t bitindex = 0;
    for (int8_t i=0; i<64; i++) {
        /* Check if the i'th bit is set in the mask (and thus a potential blocker). */
        if ( blockermask & (1ULL<<i) ) {
            /* Clear the i'th bit in the blockerboard if it's clear in the index at bitindex. */
            if ( !(index & (1<<bitindex)) ) {
                blockerboard &= ~(1ULL<<i); //Clear the bit.
            }
            /* Increment the bit index in the 0-4096 index, so each bit in index will correspond
             * to each set bit in blockermask. */
            bitindex++;
        }
    }
    return blockerboard;
}
To use it, do something like this:
int bits = count_bits( RookBlockermask[square] );

/* Generate all (2^bits) blocker boards. */
for (int i=0; i < (1<<bits); i++) {
    RookBlockerboard[square][i] = gen_blockerboard( i, RookBlockermask[square] );
}
How it works: There are 2^bits blocker boards, where bits is the number of 1's in the blocker mask, which are the only relevant bits. Also, each integer from 0 to 2^bits has a unique sequence of 1's and 0's of length bits. So this function just corresponds each bit in the given integer to a relevant bit in the blocker mask, and turns it off/on accordingly to generate a unique blocker board.
It's not as clever or fast, but it's readable.
Alright, I'm going to try to step through this.
index_to_uint64( 7, 10, m );
7 is just a randomly chosen number between 0 and 2^10, and 10 is the number of bits set in m. m can be represented in four ways:
bitboard:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
dec: 4521262379438080
hex: 0x1010106e101000
bin: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
Moving on. This will be called 10 times. It has a return value and it modifies m.
pop_1st_bit(&m);
In pop_1st_bit, m is referred to by bb. I'll change it to m for clarity.
uint64 b = m^(m-1);
The m-1 part takes the least significant bit that is set and flips it and all the bits below it. After the XOR, all those changed bits are now set to 1 while all the higher bits are set to 0.
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
b : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
Next:
unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
The (b & 0xffffffff) part ANDs b with lower 32 set bits. So this essentially clears any bits in the upper half of b.
(b & 0xffffffff)
b: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
&: 0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111
=: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
The ... ^ (b >> 32) part shifts the upper half of b into the lower half, then XORs it with the result of the previous operation. So it basically XORs the top half of b with the lower half of b. This has no effect in this case because the upper half of b was empty to begin with.
>> :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
^ :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
uint fold = 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
I don't understand the point of that "folding", even if there had been bits set in the upper half of b.
Anyways, moving on. This next line actually modifies m by unsetting the lowest bit. That makes some sense.
m &= (m - 1);
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
& : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 0000 0000 0000
This next part multiplies fold by some hex number (a prime?), right-shifts the product by 26, and uses that as an index into BitTable, our mysterious array of randomly ordered numbers 0-63. At this point I suspect the author might be writing a pseudo-random number generator.
return BitTable[(fold * 0x783a9b23) >> 26];
That concludes pop_1st_bit. That's all done 10 times (once for each bit originally set in m). Each of the 10 calls to pop_1st_bit returns a number 0-63.
j = pop_1st_bit(&m);
if(index & (1 << i)) result |= (1ULL << j);
In the above two lines, i is the current bit we are on, 0-9. So if the index number (the 7 originally passed as an argument to index_to_uint64) has the i'th bit set, then set the j'th bit in the result, where j was the 0-63 return value from pop_1st_bit.
And that's it! I'm still confused :(
When watching a video series on chess engines on YouTube I had exactly the same questions as paulwal222. There seems to be some high-level mathematics involved. The best links explaining the background of this difficult subject are https://chessprogramming.wikispaces.com/Matt+Taylor and https://chessprogramming.wikispaces.com/BitScan . It seems that Matt Taylor, in a 2003 Google Groups thread ( https://groups.google.com/forum/#!topic/comp.lang.asm.x86/3pVGzQGb1ys ) (also found by pradhan), came up with what is now called Matt Taylor's folding trick, a 32-bit-friendly implementation for finding the bit index of the LS1B ( https://en.wikipedia.org/wiki/Find_first_set ). Taylor's folding trick is apparently an adaptation of the De Bruijn ( https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn ) bitscan, devised in 1997 (according to Donald Knuth, by Martin Läuter) to determine the LS1B index by minimal perfect hashing ( https://en.wikipedia.org/wiki/Perfect_hash_function ). The numbers in BitTable (63, 30, ...) and the fold constant in pop_1st_bit (0x783a9b23) are probably the so-called magic numbers (uniquely?) related to Matt Taylor's 32-bit folding trick. The folding trick seems to be very fast, because lots of engines have copied this approach (e.g. Stockfish).
The goal is to find and return the position of the least significant bit (LSB) in a given integer and then set the LSB to zero, therefore if m = 1010 0100, we want the function to return the index 2 and to modify m as: m = 1010 0000. In order to see what is going on in this process, it is easiest to look at the 8-bit case and what happens in the first 2 lines of pop_1st_bit:
m m^(m-1) "fold"
XXXX XXX1 0000 0001 0001
XXXX XX10 0000 0011 0011
XXXX X100 0000 0111 0111
XXXX 1000 0000 1111 1111
XXX1 0000 0001 1111 1110
XX10 0000 0011 1111 1100
X100 0000 0111 1111 1000
1000 0000 1111 1111 0000
In the above, X means we don't care about the value of the bits in those positions. So, m^(m-1) maps every 8-bit number to one of eight 8-bit keys based on the position of its LSB. The fold operation then converts each of these 8-bit keys to a unique 4-bit key. The reason for doing this in the original (64-bit) case is to avoid a 64-bit multiplication in fold*magic by replacing it with a 32-bit multiplication.
The table for the 8-bit case contains eight values (one for each of the possible LSB positions in an 8-bit integer). Thus, we need 3 bits to index the table. The right shift is going to give us those 3 bits, the formula for calculating how many bits to shift by:
[Number of bits in key] - log2([size of table])
So, in the original case, we get: 32 - log2(64) = 26, whereas in the 8-bit case we get 4 - log2(8) = 1.
Therefore: table_index = fold*magic >> 1, will give us numbers between 0-7 which we need to index the table. All we need to do now is find magic that gives us a different value for each of our eight 4-bit keys.
In this case, the 4-bit magic number that we need is 5 (0b0101). You can find the number by brute force, by looking for magic such that fold*magic >> 1 yields a unique value for each of the eight "fold" keys. (It is worth noting here that I am assuming that any overflowing bits from the 4-bit multiplication of fold and magic disappear):
m m^(m-1) "fold" fold*5 fold*5 >> 1
XXXX XXX1 0000 0001 0001 0101 010 (dec: 2)
XXXX XX10 0000 0011 0011 1111 111 (dec: 7)
XXXX X100 0000 0111 0111 0011 001 (dec: 1)
XXXX 1000 0000 1111 1111 1011 101 (dec: 5)
XXX1 0000 0001 1111 1110 0110 011 (dec: 3)
XX10 0000 0011 1111 1100 1100 110 (dec: 6)
X100 0000 0111 1111 1000 1000 100 (dec: 4)
1000 0000 1111 1111 0000 0000 000 (dec: 0)
So when the LSB is the 0th bit, fold*5 >> 1 will be 2, so we put zero at index 2 of the array. When the LSB is the 1st bit, fold*5 >> 1 will be 7, so we put 1 at index 7 of the array. Etc, etc.
From the results above we can build the array as {7, 2, 0, 4, 6, 3, 5, 1}. The ordering of the array looks meaningless, but really it is just related to the order in which fold*5 >> 1 spits out the index values.
As to whether or not you should actually do this, I imagine that it is hardware dependent, however, my processor can do the same thing about five times faster with:
unsigned long idx;
_BitScanForward64(&idx, bb);
bb &= bb - 1;

Multiplication of 9 with bitwise operator

Can somebody tell me how to produce the multiplication table of 9 using bitwise operators? A detailed description would be greatly appreciated.
To multiply by 2 to the power of N (i.e. 2^N) shift the bits N times to the left
0000 0001 = 1
times 4 = (2^2 => N = 2) = 2 bit shift : 0000 0100 = 4
times 8 = (2^3 => N = 3) = 3 bit shift : 0000 1000 = 8
etc..
Times 9 just add original value like this
0000 1001 // 9 original value
0001 0000 // 2 shift 3 to left
0000 0010 + // 2
-----------
0001 0010 = 18
0001 1000 // 3(0000 0011) shift 3 to left
0000 0011 + // 3
-----------
0001 1011 = 27
0010 0000 // 4(0000 0100) shift 3 to left
0000 0100 + // 4
-----------
0010 0100 = 36
etc..
Meaning x = (n<<3)+n
Shift-and-add multiplication
A left shift by one bit is a multiplication by 2, so three shifts multiply by 2x2x2 = 8; then add the original value once more (the +1 in 8+1) to get a multiplication by 9.
i.e.
(((v<<1)<<1)<<1) + v == (v << 3) + v
9 is 1001 in binary, therefore shift left 3 times and add the original value.

Why is ~b=-6 if b=5?

I can't get the 2's complement calculation to work.
I know that in C, ~b inverts all the bits, which gives -6 if b=5. But why?
int b = 5 is 101 in binary; inverting all bits gives 010, and for 2's complement notation I just add 1, but that becomes 011, i.e. 3, which is the wrong answer.
How should I calculate with the bit inversion operator ~?
Actually, here's how 5 is usually represented in memory (16-bit integer):
0000 0000 0000 0101
When you invert 5, you flip all the bits to get:
1111 1111 1111 1010
That is actually -6 in decimal form. I think in your question, you were simply flipping the last three bits only, when in fact you have to consider all the bits that comprise the integer.
The problem with b = 101 (5) is that you have chosen one too few binary digits.
binary | decimal
~101 = 010 | ~5 = 2
~101 + 1 = 011 | ~5 + 1 = 3
If you choose 4 bits, you'll get the expected result:
binary | decimal
~0101 = 1010 | ~5 = -6
~0101 + 1 = 1011 | ~5 + 1 = -5
With only 3 bits you can encode integers from -4 to +3 in 2's complement representation.
With 4 bits you can encode integers from -8 to +7 in 2's complement representation.
-6 was getting truncated to 2 and -5 was getting truncated to 3 in 3 bits. You needed at least 4 bits.
And as others have already pointed out, ~ simply inverts all bits in a value, so, ~~17 = 17.
~b is not a 2's complement operation. It is a bitwise NOT operation: it just inverts every bit in a number, so ~b is not equal to -b.
Examples:
b = 5
binary representation of b: 0000 0000 0000 0101
binary representation of ~b: 1111 1111 1111 1010
~b = -6
b = 17
binary representation of b: 0000 0000 0001 0001
binary representation of ~b: 1111 1111 1110 1110
~b = -18
binary representation of ~(~b): 0000 0000 0001 0001
~(~b) = 17
~ simply inverts all the bits of a number:
~(~a)=17 if a=17
~0...010001 = 1...101110 ( = -18 )
~1...101110 = 0...010001 ( = 17 )
You need to add 1 only when you want to negate a number (to get its 2's complement), i.e. to get -17 out of 17.
~b + 1 = -b
So:
~(~b) equals ~(-b - 1) equals -(-b - 1) -1 equals b
In fact, ~ reverses all the bits, and if you apply ~ again, it reverses them back.
I can't get the 2's complement calculation to work.
I know that in C, ~b inverts all the bits, which gives -6 if b=5. But why?
Because you are using two's complement. Do you know what two's complement is?
Let's say that we have a byte variable (signed char). Such a variable can hold the values from -128 to 127.
Binary, it works like this:
0000 0000 // 0
...
0111 1111 // 127
1000 0000 // -128
1000 0001 // -127
...
1111 1111 // -1
Signed numbers are often described with a circle.
If you understand the above, then you understand why ~1 equals -2 and so on.
Had you used one's complement, then ~1 would have been -1, because one's complement uses a signed zero. For a byte, described with one's complement, values would go from 0 to 127 to -127 to -0 back to 0.
You declared b as an int. That means the value of b is stored in 32 bits, and the complement (~) is applied to the whole 32-bit word, not just the last 3 bits as you are doing.
int b=5 // b in binary:  0000 0000 0000 0000 0000 0000 0000 0101
~b      // ~b in binary: 1111 1111 1111 1111 1111 1111 1111 1010 = -6 in decimal
The most significant bit stores the sign of the integer (1: negative, 0: positive), so 1111 1111 1111 1111 1111 1111 1111 1010 is -6 in decimal.
Similarly:
b=17 // 17 in binary: 0000 0000 0000 0000 0000 0000 0001 0001
~b   // = 1111 1111 1111 1111 1111 1111 1110 1110 = -18

IP Address left shift representation

I found the following in some old and bad documented C code:
#define addr (((((147 << 8) | 87) << 8) | 117) << 8) | 107
What is it? Well I know it's an IP address - and shifting 8 bits to the left makes some sense too. But can anyone explain this to me as a whole? What is happening there?
Thank you!
The code
(((((147 << 8) | 87) << 8) | 117) << 8) | 107
generates 4 bytes containing the IP 147.87.117.107.
The first step is the innermost bracket:
147<<8
147 = 1001 0011
1001 0011 << 8 = 1001 0011 0000 0000
The second byte 87 is inserted by bitwise-or operation on (147<<8). As you can see, the 8 bits on the right are all 0 (due to <<8), so the bitwise-or operation just inserts the 8 bits from 87:
1001 0011 0000 0000 (147<<8)
0000 0000 0101 0111 (87)
------------------- bitwise-or
1001 0011 0101 0111 (147<<8)|87
The same is done with the rest, so at the end you have all 4 bytes saved in a single 32-bit integer.
An IPv4 address consists of four bytes, which means it can be stored in a 32-bit integer. This is taking the four parts of the IP address (147.87.117.107) and using bit-shifting and the bit-wise OR operator to "encode" the address in a single 4-byte quantity.
(Note: the address might be 107.117.87.147 - I can't remember offhand what order the bytes are stored in.)
The (hex) bytes of the resulting quantity look like:
aabb ccdd
Where aa is the hex representation of 147 (0x93), bb is 87 (0x57), cc is 117 (0x75), and dd is 107 (0x6b), so the resulting value is 9357756b.
Update: None of this applies to IPv6, since an IPv6 address is 128 bits instead of 32.
