#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#define PTXSHIFT 12
#define PTX(va) (((uint) (va) >> PTXSHIFT) & 0x3FF)
int main()
{
    printf("0x%x", PTX(0x12345678));
    return 0;
}
I tested it on an online compiler (https://www.onlinegdb.com/online_c_compiler) and I'm getting a compiler error saying 'uint' is undeclared. I guess the online C compiler can't pick it up from stdint.h.
So I plugged the values in manually: (0x12345678 >> 12) & 0x3FF.
The problem is that the output is 0x345. Can you explain why?
0x12345678 >> 12 = 0x12345 (??)
0x12345 & 0x3FF = 0x345 (??)
UPDATE: Sorry for the confusion, guys. I'm asking for an explanation of the output 0x345. I'm confused about why 0x12345678 >> 12 is 0x12345 and why 0x12345 & 0x3FF is 0x345.
What output did you expect?
Let's look at the bitwise AND, nibble by nibble, in hex:
      1 2 3 4 5
AND       3 f f
---------------
          3 4 5
or in binary, which might help:
      0011 0100 0101
AND   0011 1111 1111
--------------------
      0011 0100 0101
It should be obvious that 3 & 3 is 3, just as 4 & f is 4 and so on.
You can get the same result using
#define PTX(va) (((uint32_t) (va) >> PTXSHIFT) & 0x3FF)
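For completeness, here is the whole program with that one-line change (just a sketch of the fix, using only the headers the question already includes); it should compile cleanly and print 0x345:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define PTXSHIFT 12
#define PTX(va) (((uint32_t) (va) >> PTXSHIFT) & 0x3FF)

int main(void)
{
    /* The shift drops the low 12 bits, the mask keeps the next 10 bits. */
    printf("0x%" PRIx32 "\n", PTX(0x12345678));
    return 0;
}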
The answer is expected because of the bitwise AND operation, as explained in the other answer.
         1    2    3    4    5
      0001 0010 0011 0100 0101
AND   0000 0000 0011 1111 1111
         0    0    3    F    F
      ------------------------
      0000 0000 0011 0100 0101
The shift operation shifts the bits of an unsigned int to the right, filling the vacated positions on the left with 0s.
0x12345678
0001 0010 0011 0100 0101 0110 0111 1000
Shifts by 12
0001 0010 0011 0100 0101 [0110 0111 1000]---->
[0000 0000 0000]0001 0010 0011 0100 0101
0 0 0 1 2 3 4 5
That explains why 0x12345678 >> 12 = 0x12345
Truth table of AND
If you don't understand the AND operation, this is the truth table you need to know:
A | B | A & B
--+---+------
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
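If it helps, here is a small sketch that prints each intermediate step (assuming a 32-bit unsigned int, as on the online compiler mentioned in the question):

#include <stdio.h>

int main(void)
{
    unsigned va      = 0x12345678;
    unsigned shifted = va >> 12;        /* drops the low 12 bits -> 0x12345 */
    unsigned masked  = shifted & 0x3FF; /* keeps the low 10 bits -> 0x345   */

    printf("va        = 0x%x\n", va);
    printf("va >> 12  = 0x%x\n", shifted);
    printf("  & 0x3FF = 0x%x\n", masked);
    return 0;
}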
const int BitTable[64] = {
63, 30, 3, 32, 25, 41, 22, 33, 15, 50, 42, 13, 11, 53, 19, 34, 61, 29, 2,
51, 21, 43, 45, 10, 18, 47, 1, 54, 9, 57, 0, 35, 62, 31, 40, 4, 49, 5, 52,
26, 60, 6, 23, 44, 46, 27, 56, 16, 7, 39, 48, 24, 59, 14, 12, 55, 38, 28,
58, 20, 37, 17, 36, 8
};
int pop_1st_bit(uint64 *bb) {
    uint64 b = *bb ^ (*bb - 1);
    unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
    *bb &= (*bb - 1);
    return BitTable[(fold * 0x783a9b23) >> 26];
}
uint64 index_to_uint64(int index, int bits, uint64 m) {
    int i, j;
    uint64 result = 0ULL;
    for(i = 0; i < bits; i++) {
        j = pop_1st_bit(&m);
        if(index & (1 << i)) result |= (1ULL << j);
    }
    return result;
}
It's from the Chess Programming Wiki:
https://www.chessprogramming.org/Looking_for_Magics
It's part of some code for finding magic numbers.
The argument uint64 m is a bitboard representing the possible blocked squares for either a rook or bishop move. Example for a rook on the e4 square:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
The edge squares are zero because they always block, and reducing the number of bits needed is apparently helpful.
/* Bitboard, LSB to MSB, a1 through h8:
* 56 - - - - - - 63
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* 0 - - - - - - 7
*/
So in the example above, index_to_uint64 takes an index (0 to 2^bits), and the number of bits set in the bitboard (10), and the bitboard.
It then calls pop_1st_bit once for each of those bits, followed by another shifty bit of code. pop_1st_bit XORs the bitboard with itself minus one (why?). Then it ANDs that with 32 bits of ones, and somewhere around here my brain runs out of RAM. Somehow the magical hex number 0x783a9b23 is involved (is that the number sequence from Lost?). And there is this ridiculous mystery array of randomly ordered numbers from 0-63 (BitTable[64]).
Alright, I have it figured out.
First, some terminology:
blocker mask: A bitboard containing all squares that can block a piece, for a given piece type and the square the piece is on. It excludes terminating edge squares because they always block.
blocker board: A bitboard containing occupied squares. It only has squares which are also in the blocker mask.
move board: A bitboard containing all squares a piece can move to, given a piece type, a square, and a blocker board. It includes terminating edge squares if the piece can move there.
Example for a rook on the e4 square, with some random pieces on e2, e5, e7, b4, and c4:
The blocker mask       A blocker board         The move board

0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0        0 0 0 0 1 0 0 0        0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0        0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0        0 0 0 0 1 0 0 0        0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0        0 1 1 0 0 0 0 0        0 0 1 1 0 1 1 1
0 0 0 0 1 0 0 0        0 0 0 0 0 0 0 0        0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0        0 0 0 0 1 0 0 0        0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0
Some things to note:
The blocker mask is always the same for a given square and piece type (either rook or bishop).
Blocker boards include friendly & enemy pieces, and it is a subset of the blocker mask.
The resulting move board may include moves that capture your own pieces; however, these moves are easily removed afterward: moveboard &= ~friendly_pieces;
The goal of the magic numbers method is to very quickly look up a pre-calculated move board for a given blocker board. Otherwise you'd have to (slowly) calculate the move board every time. This only applies to sliding pieces, namely the rook and bishop. The queen is just a combination of the rook and bishop.
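To make that goal concrete, the final lookup usually ends up looking roughly like the sketch below. The table and array names here are hypothetical (only RookBlockermask matches the code further down); this is just the general shape of a plain magic-bitboard lookup, not code from the question:

#include <stdint.h>

/* Hypothetical precomputed tables (filled in once at startup). */
extern uint64_t RookBlockermask[64];        /* blocker mask per square            */
extern uint64_t RookMagic[64];              /* magic multiplier per square        */
extern int      RookIndexBits[64];          /* number of relevant bits per square */
extern uint64_t RookMoveboard[64][4096];    /* move board per square per index    */

uint64_t rook_moves(int square, uint64_t occupied)
{
    /* Keep only the squares that can actually block this rook. */
    uint64_t blockerboard = occupied & RookBlockermask[square];
    /* Hash the blocker board down to a small table index with the magic multiply. */
    int index = (int)((blockerboard * RookMagic[square]) >> (64 - RookIndexBits[square]));
    return RookMoveboard[square][index];
}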
Magic numbers can be found for each square & piece type combo. To do this, you have to calculate every possible blocker board variation for each square/piece combo. This is what the code in question is doing. How it's doing it is still a bit of a mystery to me, but that also seems to be the case for the apparent original author, Matt Taylor. (Thanks to @Pradhan for the link.)
So what I've done is re-implemented the code for generating all possible blocker board variations. It uses a different technique, and while it's a little slower, it's much easier to read and comprehend. The fact that it's slightly slower is not a problem, because this code isn't speed critical. The program only has to do it once at program startup, and it only takes microseconds on a dual-core i5.
/* Generate a unique blocker board, given an index (0..2^bits) and the blocker mask
 * for the piece/square. Each index will give a unique blocker board. */
static uint64_t gen_blockerboard (int index, uint64_t blockermask)
{
    /* Start with a blockerboard identical to the mask. */
    uint64_t blockerboard = blockermask;

    /* Loop through the blockermask to find the indices of all set bits. */
    int8_t bitindex = 0;
    for (int8_t i = 0; i < 64; i++) {
        /* Check if the i'th bit is set in the mask (and thus a potential blocker). */
        if ( blockermask & (1ULL << i) ) {
            /* Clear the i'th bit in the blockerboard if it's clear in the index at bitindex. */
            if ( !(index & (1 << bitindex)) ) {
                blockerboard &= ~(1ULL << i); /* Clear the bit. */
            }
            /* Increment the bit index in the 0-4096 index, so each bit in index will correspond
             * to each set bit in blockermask. */
            bitindex++;
        }
    }
    return blockerboard;
}
To use it, do something like this:
int bits = count_bits( RookBlockermask[square] );
/* Generate all (2^bits) blocker boards. */
for (int i=0; i < (1<<bits); i++) {
RookBlockerboard[square][i] = gen_blockerboard( i, RookBlockermask[square] );
}
How it works: There are 2^bits blocker boards, where bits is the number of 1s in the blocker mask, which are the only relevant bits. Also, each integer from 0 to 2^bits - 1 has a unique sequence of 1s and 0s of length bits. So this function just maps each bit of the given integer to a relevant bit in the blocker mask, and turns it off/on accordingly to generate a unique blocker board.
It's not as clever or fast, but it's readable.
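count_bits isn't shown above; a minimal portable version (my own sketch, a compiler builtin or popcount instruction would work just as well) could be:

/* Count the set bits in a 64-bit word, using the n & (n - 1) trick to clear
 * the lowest set bit on each pass. */
static int count_bits(uint64_t b)
{
    int count = 0;
    while (b) {
        b &= b - 1;
        count++;
    }
    return count;
}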
Alright, I'm going to try to step through this.
index_to_uint64( 7, 10, m );
7 is just a randomly chosen number between 0 and 2^10, and 10 is the number of bits set in m. m can be represented in four ways:
bitboard:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
dec: 4521262379438080
hex: 0x1010106e101000
bin: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
Moving on. This will be called 10 times. It has a return value and it modifies m.
pop_1st_bit(&m);
In pop_1st_bit, m is referred to by bb. I'll change it to m for clarity.
uint64 b = m^(m-1);
The m-1 part takes the least significant bit that is set and flips it and all the bits below it. After the XOR, all those changed bits are now set to 1 while all the higher bits are set to 0.
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
b : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
Next:
unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
The (b & 0xffffffff) part ANDs b with lower 32 set bits. So this essentially clears any bits in the upper half of b.
(b & 0xffffffff)
b: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
&: 0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111
=: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
The ... ^ (b >> 32) part shifts the upper half of b into the lower half, then XORs it with the result of the previous operation. So it basically XORs the top half of b with the lower half of b. This has no effect in this case because the upper half of b was empty to begin with.
>> :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
^ :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
uint fold = 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
I don't understand the point of that "folding", even if there had been bits set in the upper half of b.
Anyways, moving on. This next line actually modifies m by unsetting the lowest bit. That makes some sense.
m &= (m - 1);
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
& : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 0000 0000 0000
This next part multiplies fold by some hex number (a prime?), right-shifts the product by 26, and uses that as an index into BitTable, our mysterious array of randomly ordered numbers 0-63. At this point I suspect the author might be writing a pseudo-random number generator.
return BitTable[(fold * 0x783a9b23) >> 26];
That concludes pop_1st_bit. That's all done 10 times (once for each bit originally set in m). Each of the 10 calls to pop_1st_bit returns a number 0-63.
j = pop_1st_bit(&m);
if(index & (1 << i)) result |= (1ULL << j);
In the above two lines, i is the current bit we are on, 0-9. So if the index number (the 7 originally passed as an argument to index_to_uint64) has the i'th bit set, then set the j'th bit in the result, where j was the 0-63 return value from pop_1st_bit.
And that's it! I'm still confused :(
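In case it helps anyone else work through it, here is the small self-contained test I used while writing this up (the table and function are copied from the question, just with uint64 spelled as uint64_t):

#include <stdio.h>
#include <stdint.h>

static const int BitTable[64] = {
    63, 30,  3, 32, 25, 41, 22, 33, 15, 50, 42, 13, 11, 53, 19, 34,
    61, 29,  2, 51, 21, 43, 45, 10, 18, 47,  1, 54,  9, 57,  0, 35,
    62, 31, 40,  4, 49,  5, 52, 26, 60,  6, 23, 44, 46, 27, 56, 16,
     7, 39, 48, 24, 59, 14, 12, 55, 38, 28, 58, 20, 37, 17, 36,  8
};

/* Same as the question's pop_1st_bit, with uint64 written as uint64_t. */
static int pop_1st_bit(uint64_t *bb)
{
    uint64_t b = *bb ^ (*bb - 1);
    unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
    *bb &= (*bb - 1);
    return BitTable[(fold * 0x783a9b23) >> 26];
}

int main(void)
{
    uint64_t m = 0x001010106e101000ULL;   /* the rook-on-e4 blocker mask from above */
    while (m) {
        /* Should print 12, 20, 25, 26, 27, 29, 30, 36, 44, 52: the set bits, lowest first. */
        printf("%d\n", pop_1st_bit(&m));
    }
    return 0;
}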
When watching a video series on chess engines on YouTube I had exactly the same questions as paulwal222. There seems to be some high-level mathematics involved. The best links explaining the background of this difficult subject are https://chessprogramming.wikispaces.com/Matt+Taylor and https://chessprogramming.wikispaces.com/BitScan . It seems that Matt Taylor came up with this in 2003 in a Google Groups thread ( https://groups.google.com/forum/#!topic/comp.lang.asm.x86/3pVGzQGb1ys ) (also found by pradhan). It is now called Matt Taylor's folding trick, a 32-bit-friendly implementation for finding the bit index of the LS1B ( https://en.wikipedia.org/wiki/Find_first_set ). Taylor's folding trick is apparently an adaptation of the De Bruijn ( https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn ) bitscan, devised in 1997 (according to Donald Knuth, by Martin Läuter) to determine the LS1B index by minimal perfect hashing ( https://en.wikipedia.org/wiki/Perfect_hash_function ). The numbers in BitTable (63, 30, ...) and the fold multiplier in pop_1st_bit (0x783a9b23) are probably the magic numbers (uniquely?) tied to Matt Taylor's 32-bit folding trick. This folding trick seems to be very fast, because lots of engines have copied the approach (e.g., Stockfish).
The goal is to find and return the position of the least significant bit (LSB) in a given integer and then set the LSB to zero, therefore if m = 1010 0100, we want the function to return the index 2 and to modify m as: m = 1010 0000. In order to see what is going on in this process, it is easiest to look at the 8-bit case and what happens in the first 2 lines of pop_1st_bit:
m m^(m-1) "fold"
XXXX XXX1 0000 0001 0001
XXXX XX10 0000 0011 0011
XXXX X100 0000 0111 0111
XXXX 1000 0000 1111 1111
XXX1 0000 0001 1111 1110
XX10 0000 0011 1111 1100
X100 0000 0111 1111 1000
1000 0000 1111 1111 0000
In the above, X means we don't care about the value of the bits in those positions. So, m^(m-1) maps every 8-bit number to one of eight 8-bit keys based on the position of its LSB. The fold operation then converts each of these 8-bit keys to a unique 4-bit key. The reason for doing this in the original (64-bit) case is to avoid a 64-bit multiplication in fold*magic by replacing it with a 32-bit multiplication.
The table for the 8-bit case contains eight values (one for each of the possible LSB positions in an 8-bit integer). Thus, we need 3 bits to index the table. The right shift gives us those 3 bits; the formula for how many bits to shift by is:
[Number of bits in key] - log2([size of table])
So, in the original case, we get: 32 - log2(64) = 26, whereas in the 8-bit case we get 4 - log2(8) = 1.
Therefore table_index = (fold * magic) >> 1 will give us numbers between 0 and 7, which is what we need to index the table. All we need to do now is find a magic that gives a different value for each of our eight 4-bit keys.
In this case, the 4-bit magic number that we need is 5 (0b0101). You can find the number by brute force, by looking for magic such that fold*magic >> 1 yields a unique value for each of the eight "fold" keys. (It is worth noting here that I am assuming that any overflowing bits from the 4-bit multiplication of fold and magic disappear):
m m^(m-1) "fold" fold*5 fold*5 >> 1
XXXX XXX1 0000 0001 0001 0101 010 (dec: 2)
XXXX XX10 0000 0011 0011 1111 111 (dec: 7)
XXXX X100 0000 0111 0111 0011 001 (dec: 1)
XXXX 1000 0000 1111 1111 1011 101 (dec: 5)
XXX1 0000 0001 1111 1110 0110 011 (dec: 3)
XX10 0000 0011 1111 1100 1100 110 (dec: 6)
X100 0000 0111 1111 1000 1000 100 (dec: 4)
1000 0000 1111 1111 0000 0000 000 (dec: 0)
So when the LSB is the 0th bit, fold*5 >> 1 will be 2, so we put zero at index 2 of the array. When the LSB is the 1st bit, fold*5 >> 1 will be 7, so we put 1 at index 7 of the array. Etc, etc.
From the results above we can build the array as {7, 2, 0, 4, 6, 3, 5, 1}. The ordering of the array looks meaningless, but really it is just related to the order in which fold*5 >> 1 spits out the index values.
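To double-check, here is a small self-contained sketch of this 8-bit toy version (the function and table names are just made up for the example):

#include <stdio.h>
#include <stdint.h>

/* 8-bit toy version of pop_1st_bit, using the table {7,2,0,4,6,3,5,1} and the
 * magic 5 derived above. Purely illustrative. */
static const int ToyBitTable[8] = {7, 2, 0, 4, 6, 3, 5, 1};

static int toy_pop_1st_bit(uint8_t *bb)
{
    uint8_t b    = *bb ^ (*bb - 1);            /* ones at and below the LSB     */
    uint8_t fold = (b & 0x0f) ^ (b >> 4);      /* fold 8 bits into a 4-bit key  */
    *bb &= (*bb - 1);                          /* clear the LSB                 */
    return ToyBitTable[((fold * 5) & 0x0f) >> 1];  /* keep 4 bits of the product */
}

int main(void)
{
    uint8_t m = 0xA4;                          /* 1010 0100, LSB is bit 2       */
    int idx = toy_pop_1st_bit(&m);
    printf("index = %d, m afterwards = 0x%02X\n", idx, m);  /* expect 2 and 0xA0 */
    return 0;
}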
As to whether or not you should actually do this, I imagine it is hardware dependent; however, my processor can do the same thing about five times faster with:
unsigned long idx;
_BitScanForward64(&idx, bb);
bb &= bb - 1;
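(On GCC or Clang, the equivalent of that MSVC intrinsic would be __builtin_ctzll, which also returns the index of the least significant set bit; this is just an alternative sketch, I haven't benchmarked it against the magic multiply here.)

/* GCC/Clang counterpart of the snippet above (*bb must be non-zero). */
static int pop_lsb_builtin(unsigned long long *bb)
{
    int idx = __builtin_ctzll(*bb);   /* index of the least significant set bit */
    *bb &= *bb - 1;                   /* clear that bit */
    return idx;
}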
I get this result when I bitwise & -4227072 and 0x7fffff:
0b1111111000000000000000
These are the bit representations for the two values I'm &'ing:
-0b10000001000000000000000
0b11111111111111111111111
Shouldn't &'ing them together instead give this?
0b10000001000000000000000
Thanks.
-4227072 == 0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
-4227072 & 0x7fffff should be
  0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
& 0x007FFFFF == 0000 0000 0111 1111 1111 1111 1111 1111
--------------------------------------------------------
  0x003F8000 == 0000 0000 0011 1111 1000 0000 0000 0000
The negative number is represented as its 2's complement inside the computer's memory, so the binary representation you have posted is misleading. In 2's complement, the most significant bit of a k-bit number has value -2^(k-1); the remaining bits are positive, as you expect.
Assuming you are dealing with 32 bit signed integers, we have:
  1111 1111 1011 1111 1000 0000 0000 0000  =  -4227072 (decimal)
& 0000 0000 0111 1111 1111 1111 1111 1111  =  0x7fffff
------------------------------------------------------
  0000 0000 0011 1111 1000 0000 0000 0000
Which is what you got.
To verify the first line:
-1 × 2^31 = -2147483648
 1 × 2^30 =  1073741824
 1 × 2^29 =   536870912
 1 × 2^28 =   268435456
 1 × 2^27 =   134217728
 1 × 2^26 =    67108864
 1 × 2^25 =    33554432
 1 × 2^24 =    16777216
 1 × 2^23 =     8388608
 1 × 2^21 =     2097152
 1 × 2^20 =     1048576
 1 × 2^19 =      524288
 1 × 2^18 =      262144
 1 × 2^17 =      131072
 1 × 2^16 =       65536
 1 × 2^15 =       32768
------------------------
              -4227072 ✓
0b10000001000000000000000 would be correct if your integer encoding were sign-magnitude.
That is possible on some early or unusual machines. Another answer explains well how negative integers are typically represented as 2's complement numbers, and in that case the result is as you observed: 0b1111111000000000000000.
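Either way, you can confirm what your compiler actually does with a quick check (this assumes the usual 32-bit two's-complement int):

#include <stdio.h>

int main(void)
{
    /* Prints 0x3f8000, i.e. 0b1111111000000000000000, on a two's-complement machine. */
    printf("0x%x\n", (unsigned)(-4227072 & 0x7fffff));
    return 0;
}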
I can't get the two's complement calculation to work.
I know that in C, ~b inverts all the bits, and it gives -6 if b = 5. But why?
If int b = 101 (binary), inverting all the bits gives 010; then, for two's complement notation, I just add 1, but that becomes 011, i.e. 3, which is the wrong answer.
How should I calculate with the bit inversion operator ~?
Actually, here's how 5 is usually represented in memory (16-bit integer):
0000 0000 0000 0101
When you invert 5, you flip all the bits to get:
1111 1111 1111 1010
That is actually -6 in decimal form. I think in your question, you were simply flipping the last three bits only, when in fact you have to consider all the bits that comprise the integer.
The problem with b = 101 (5) is that you have chosen one too few binary digits.
binary | decimal
~101 = 010 | ~5 = 2
~101 + 1 = 011 | ~5 + 1 = 3
If you choose 4 bits, you'll get the expected result:
binary | decimal
~0101 = 1010 | ~5 = -6
~0101 + 1 = 1011 | ~5 + 1 = -5
With only 3 bits you can encode integers from -4 to +3 in 2's complement representation.
With 4 bits you can encode integers from -8 to +7 in 2's complement representation.
-6 was getting truncated to 2 and -5 was getting truncated to 3 in 3 bits. You needed at least 4 bits.
And as others have already pointed out, ~ simply inverts all bits in a value, so, ~~17 = 17.
~b is not a two's-complement (negation) operation; it is a bitwise NOT operation. It just inverts every bit in the number, therefore ~b is not equal to -b.
Examples:
b = 5
binary representation of b: 0000 0000 0000 0101
binary representation of ~b: 1111 1111 1111 1010
~b = -6
b = 17
binary representation of b: 0000 0000 0001 0001
binary representation of ~b: 1111 1111 1110 1110
~b = -18
binary representation of ~(~b): 0000 0000 0001 0001
~(~b) = 17
~ simply inverts all the bits of a number:
~(~a)=17 if a=17
~0...010001 = 1...101110 ( = -18 )
~1...101110 = 0...010001 ( = 17 )
You need to add 1 only if you want to negate a number (i.e. take its two's complement), for example to get -17 out of 17.
~b + 1 = -b
So:
~(~b) equals ~(-b - 1) equals -(-b - 1) -1 equals b
In fact, ~ just flips all the bits, and if you apply ~ again, it flips them back.
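A short sketch that checks these identities for a few values (assuming a two's-complement machine, which is what C compilers give you in practice):

#include <stdio.h>

int main(void)
{
    int values[] = {5, 17, 0, -6};
    for (int i = 0; i < 4; i++) {
        int b = values[i];
        /* On two's-complement machines ~b == -b - 1, and ~~b gives b back. */
        printf("b = %3d   ~b = %4d   -b - 1 = %4d   ~~b = %3d\n", b, ~b, -b - 1, ~~b);
    }
    return 0;
}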
I can't get the two's complement calculation to work.
I know that in C, ~b inverts all the bits, and it gives -6 if b = 5. But why?
Because you are using two's complement. Do you know what two's complement is?
Let's say that we have a byte variable (signed char). Such a variable can hold values from -128 to 127.
In binary, it works like this:
0000 0000 // 0
...
0111 1111 // 127
1000 0000 // -128
1000 0001 // -127
...
1111 1111 // -1
Signed numbers are often pictured as a circle, because the values wrap around.
If you understand the above, then you understand why ~1 equals -2 and so on.
Had you used one's complement, then ~1 would have been -1, because one's complement uses a signed zero. For a byte, described with one's complement, values would go from 0 to 127 to -127 to -0 back to 0.
You declared b as an int. That means the value of b is stored in a full word (typically 32 bits), and the complement (~) is applied to that whole word, not just the last 3 bits as you were doing.
int b=5 // b in binary (low 16 bits shown): 0000 0000 0000 0101
~b      // ~b in binary:                    1111 1111 1111 1010 = -6 in decimal
The most significant bit stores the sign of the integer (1: negative, 0: positive), so 1111 1111 1111 1010 is -6 in decimal.
Similarly:
b=17 // 17 in binary 0000 0000 0001 0001
~b // = 1111 1111 1110 1110 = -18
n & (n-1) - where can this expression be used?
It's figuring out if n is either 0 or an exact power of two.
It works because a binary power of two is of the form 1000...000 and subtracting one will give you 111...111. Then, when you AND those together, you get zero, such as with:
    1000 0000 0000 0000
  & 0111 1111 1111 1111
  =====================
    0000 0000 0000 0000
Any non-power-of-two input value (other than zero) will not give you zero when you perform that operation.
For example, let's try all the 4-bit combinations:
<----- binary ---->
n n n-1 n&(n-1)
-- ---- ---- -------
0 0000 1111 0000 *
1 0001 0000 0000 *
2 0010 0001 0000 *
3 0011 0010 0010
4 0100 0011 0000 *
5 0101 0100 0100
6 0110 0101 0100
7 0111 0110 0110
8 1000 0111 0000 *
9 1001 1000 1000
10 1010 1001 1000
11 1011 1010 1010
12 1100 1011 1000
13 1101 1100 1100
14 1110 1101 1100
15 1111 1110 1110
You can see that only 0 and the powers of two (1, 2, 4 and 8) result in a 0000/false bit pattern, all others are non-zero or true.
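If you want to check it yourself, the table can be reproduced with a few lines of C (n - 1 is shown modulo 16, so it stays within 4 bits):

#include <stdio.h>

int main(void)
{
    /* Rebuilds the 4-bit table: only n = 0, 1, 2, 4, 8 give n & (n - 1) == 0. */
    for (unsigned n = 0; n < 16; n++) {
        unsigned result = n & (n - 1);
        printf("%2u  n-1 = %2u  n&(n-1) = %2u%s\n",
               n, (n - 1) & 0xF, result, result == 0 ? "  *" : "");
    }
    return 0;
}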
It returns 0 if n is a power of 2 (NB: only works for n > 0). So you can test for a power of 2 like this:
bool isPowerOfTwo(int n)
{
    return (n > 0) && ((n & (n - 1)) == 0);
}
It checks whether n is a power of 2 (see also: What does the bitwise code "$n & ($n - 1)" do?).
It's a bitwise AND between a number and the number one less than it. The only way this expression can ever be zero (false) is if n is a power of 2 (or zero), so essentially you're testing whether n is a power of 2.