#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define PTXSHIFT 12
#define PTX(va) (((uint) (va) >> PTXSHIFT) & 0x3FF)

int main()
{
    printf("0x%x", PTX(0x12345678));
    return 0;
}
I tested it on an online compiler and I'm getting a compiler error saying 'uint' is undeclared. I guess the online C compiler can't import stdint.h: https://www.onlinegdb.com/online_c_compiler
So I substituted the values manually: (0x12345678 >> 12) & 0x3FF.
The problem: the output is 0x345. Can you explain why?
0x12345678 >> 12 = 0x12345 (??)
0x12345 & 0x3FF = 0x345 (??)
UPDATE: Sorry for the confusion, guys.
I'm asking for an explanation of the output 0x345. I'm confused about why 0x12345678 >> 12 is 0x12345, and why 0x12345 & 0x3FF is 0x345.
What output did you expect?
Let's look at the bitwise AND, nibble by nibble, in hex:

        1 2 3 4 5
AND         3 f f
-----------------
            3 4 5
or in binary, which might help:

    0011 0100 0101
AND 0011 1111 1111
------------------
    0011 0100 0101
It should be obvious that 3 & 3 is 3, just as 4 & f is 4 and so on.
You can get the same result using
#define PTX(va) (((uint32_t) (va) >> PTXSHIFT) & 0x3FF)
The answer is expected because of the bitwise AND operation, as explained in the other answer.
        1    2    3    4    5
        0001 0010 0011 0100 0101
AND     0000 0000 0011 1111 1111
        0    0    3    F    F
--------------------------------
        0000 0000 0011 0100 0101
The shift operation shifts the bits of an unsigned int to the right, filling the vacated positions on the left with 0's.
0x12345678
0001 0010 0011 0100 0101 0110 0111 1000
Shifts by 12
0001 0010 0011 0100 0101 [0110 0111 1000]---->
[0000 0000 0000]0001 0010 0011 0100 0101
0 0 0 1 2 3 4 5
That explains why 0x12345678 >> 12 = 0x12345
Truth table of AND
If you don't understand the AND operation, this is the truth table you should know:
A | B | A & B
--+---+------
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
/* Note: uint64 is not a standard type; the original source typedefs it,
 * e.g.: typedef unsigned long long uint64; */

const int BitTable[64] = {
    63, 30, 3, 32, 25, 41, 22, 33, 15, 50, 42, 13, 11, 53, 19, 34, 61, 29, 2,
    51, 21, 43, 45, 10, 18, 47, 1, 54, 9, 57, 0, 35, 62, 31, 40, 4, 49, 5, 52,
    26, 60, 6, 23, 44, 46, 27, 56, 16, 7, 39, 48, 24, 59, 14, 12, 55, 38, 28,
    58, 20, 37, 17, 36, 8
};

int pop_1st_bit(uint64 *bb) {
    uint64 b = *bb ^ (*bb - 1);
    unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
    *bb &= (*bb - 1);
    return BitTable[(fold * 0x783a9b23) >> 26];
}

uint64 index_to_uint64(int index, int bits, uint64 m) {
    int i, j;
    uint64 result = 0ULL;
    for(i = 0; i < bits; i++) {
        j = pop_1st_bit(&m);
        if(index & (1 << i)) result |= (1ULL << j);
    }
    return result;
}
It's from the Chess Programming Wiki:
https://www.chessprogramming.org/Looking_for_Magics
It's part of some code for finding magic numbers.
The argument uint64 m is a bitboard representing the possible blocked squares for either a rook or bishop move. Example for a rook on the e4 square:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
The edge squares are zero because they always block, and reducing the number of bits needed is apparently helpful.
/* Bitboard, LSB to MSB, a1 through h8:
* 56 - - - - - - 63
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* 0 - - - - - - 7
*/
So in the example above, index_to_uint64 takes an index (0 to 2^bits − 1), the number of bits set in the bitboard (10), and the bitboard itself.
It then calls pop_1st_bit once for each of those bits, followed by another shifty bit of code. pop_1st_bit XORs the bitboard with itself minus one (why?). Then it ANDs that with a full 32 bits of 1's, and somewhere around here my brain runs out of RAM. Somehow the magical hex number 0x783a9b23 is involved (is that the number sequence from Lost?). And there is this ridiculous mystery array of randomly ordered numbers from 0 to 63 (BitTable[64]).
Alright, I have it figured out.
First, some terminology:
blocker mask: A bitboard containing all squares that can block a piece, for a given piece type and the square the piece is on. It excludes terminating edge squares because they always block.
blocker board: A bitboard containing occupied squares. It only has squares which are also in the blocker mask.
move board: A bitboard containing all squares a piece can move to, given a piece type, a square, and a blocker board. It includes terminating edge squares if the piece can move there.
Example for a rook on the e4 square, with random pieces on e2, e5, e7, b4, and c4:
The blocker mask A blocker board The move board
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Some things to note:
The blocker mask is always the same for a given square and piece type (either rook or bishop).
Blocker boards include friendly & enemy pieces, and each one is a subset of the blocker mask.
The resulting move board may include moves that capture your own pieces; however, these moves are easily removed afterward: moveboard &= ~friendly_pieces
The goal of the magic numbers method is to very quickly look up a pre-calculated move board for a given blocker board. Otherwise you'd have to (slowly) calculate the move board every time. This only applies to sliding pieces, namely the rook and bishop. The queen is just a combination of the rook and bishop.
Magic numbers can be found for each square & piece type combo. To do this, you have to calculate every possible blocker board variation for each square/piece combo. This is what the code in question is doing. How it's doing it is still a bit of a mystery to me, but that also seems to be the case for the apparent original author, Matt Taylor. (Thanks to @Pradhan for the link)
So what I've done is re-implemented the code for generating all possible blocker board variations. It uses a different technique, and while it's a little slower, it's much easier to read and comprehend. The fact that it's slightly slower is not a problem, because this code isn't speed critical. The program only has to do it once at program startup, and it only takes microseconds on a dual-core i5.
/* Generate a unique blocker board, given an index (0..2^bits) and the blocker mask
 * for the piece/square. Each index will give a unique blocker board. */
static uint64_t gen_blockerboard (int index, uint64_t blockermask)
{
    /* Start with a blockerboard identical to the mask. */
    uint64_t blockerboard = blockermask;

    /* Loop through the blockermask to find the indices of all set bits. */
    int8_t bitindex = 0;
    for (int8_t i = 0; i < 64; i++) {
        /* Check if the i'th bit is set in the mask (and thus a potential blocker). */
        if ( blockermask & (1ULL << i) ) {
            /* Clear the i'th bit in the blockerboard if it's clear in the index at bitindex. */
            if ( !(index & (1 << bitindex)) ) {
                blockerboard &= ~(1ULL << i); /* Clear the bit. */
            }
            /* Increment the bit index in the 0-4096 index, so each bit in index will correspond
             * to each set bit in blockermask. */
            bitindex++;
        }
    }
    return blockerboard;
}
To use it, do something like this:
int bits = count_bits( RookBlockermask[square] );

/* Generate all (2^bits) blocker boards. */
for (int i = 0; i < (1 << bits); i++) {
    RookBlockerboard[square][i] = gen_blockerboard( i, RookBlockermask[square] );
}
How it works: There are 2^bits blocker boards, where bits is the number of 1's in the blocker mask, which are the only relevant bits. Also, each integer from 0 to 2^bits has a unique sequence of 1's and 0's of length bits. So this function just corresponds each bit in the given integer to a relevant bit in the blocker mask, and turns it off/on accordingly to generate a unique blocker board.
It's not as clever or fast, but it's readable.
Alright, I'm going to try to step through this.
index_to_uint64( 7, 10, m );
7 is just a randomly chosen index between 0 and 2^10 − 1, and 10 is the number of bits set in m. m can be represented in four ways:
bitboard:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
dec: 4521262379438080
hex: 0x1010106e101000
bin: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
Moving on. This will be called 10 times. It has a return value and it modifies m.
pop_1st_bit(&m);
In pop_1st_bit, m is referred to by bb. I'll change it to m for clarity.
uint64 b = m^(m-1);
The m-1 part takes the least significant bit that is set and flips it and all the bits below it. After the XOR, all those changed bits are now set to 1 while all the higher bits are set to 0.
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
b : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
Next:
unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
The (b & 0xffffffff) part ANDs b with lower 32 set bits. So this essentially clears any bits in the upper half of b.
(b & 0xffffffff)
b: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
&: 0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111
=: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
The ... ^ (b >> 32) part shifts the upper half of b into the lower half, then XORs it with the result of the previous operation. So it basically XORs the top half of b with the lower half of b. This has no effect in this case because the upper half of b was empty to begin with.
>> :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
^ :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
uint fold = 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
I don't understand the point of that "folding", even if there had been bits set in the upper half of b.
Anyways, moving on. This next line actually modifies m by unsetting the lowest bit. That makes some sense.
m &= (m - 1);
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
& : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 0000 0000 0000
This next part multiplies fold by some hex number (a prime?), right shifts the product 26, and uses that as an index into BitTable, our mysterious array of randomly ordered numbers 0-63. At this point I suspect the author might be writing a pseudo random number generator.
return BitTable[(fold * 0x783a9b23) >> 26];
That concludes pop_1st_bit. That's all done 10 times (once for each bit originally set in m). Each of the 10 calls to pop_1st_bit returns a number 0-63.
j = pop_1st_bit(&m);
if(index & (1 << i)) result |= (1ULL << j);
In the above two lines, i is the current bit we are on, 0-9. So if the index number (the 7 originally passed as an argument to index_to_uint64) has the i'th bit set, then set the j'th bit in the result, where j was the 0-63 return value from pop_1st_bit.
And that's it! I'm still confused :(
When watching a video series on chess engines on YouTube I had exactly the same questions as paulwal222. There seems to be some high-level mathematics involved. The best links explaining the background of this difficult subject are https://chessprogramming.wikispaces.com/Matt+Taylor and https://chessprogramming.wikispaces.com/BitScan . It seems that in 2003 Matt Taylor, in a Google Groups thread ( https://groups.google.com/forum/#!topic/comp.lang.asm.x86/3pVGzQGb1ys ) (also found by @Pradhan), came up with something that is now called Matt Taylor's folding trick: a 32-bit friendly implementation for finding the bit index of the LS1B ( https://en.wikipedia.org/wiki/Find_first_set ). Taylor's folding trick is apparently an adaptation of the De Bruijn ( https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn ) bitscan, devised in 1997 (according to Donald Knuth, by Martin Läuter) to determine the LS1B index by minimal perfect hashing ( https://en.wikipedia.org/wiki/Perfect_hash_function ). The numbers in BitTable (63, 30, ..) and the fold constant in pop_1st_bit (0x783a9b23) are probably the so-called magic numbers (uniquely?) related to Matt Taylor's 32-bit folding trick. This folding trick seems to be very fast, because lots of engines have copied the approach (e.g. Stockfish).
The goal is to find and return the position of the least significant bit (LSB) in a given integer, and then clear that bit. For example, if m = 1010 0100, we want the function to return the index 2 and modify m to 1010 0000. To see what is going on, it is easiest to look at the 8-bit case and what happens in the first 2 lines of pop_1st_bit:
m m^(m-1) "fold"
XXXX XXX1 0000 0001 0001
XXXX XX10 0000 0011 0011
XXXX X100 0000 0111 0111
XXXX 1000 0000 1111 1111
XXX1 0000 0001 1111 1110
XX10 0000 0011 1111 1100
X100 0000 0111 1111 1000
1000 0000 1111 1111 0000
In the above, X means we don't care about the value of the bits in those positions. So, m^(m-1) maps every 8-bit number to one of eight 8-bit keys based on the position of its LSB. The fold operation then converts each of these 8-bit keys to a unique 4-bit key. The reason for doing this in the original (64-bit) case is to avoid a 64-bit multiplication in fold*magic by replacing it with a 32-bit multiplication.
The table for the 8-bit case contains eight values (one for each of the possible LSB positions in an 8-bit integer). Thus, we need 3 bits to index the table. The right shift is going to give us those 3 bits, the formula for calculating how many bits to shift by:
[Number of bits in key] - log2([size of table])
So, in the original case, we get: 32 - log2(64) = 26, whereas in the 8-bit case we get 4 - log2(8) = 1.
Therefore: table_index = fold*magic >> 1, will give us numbers between 0-7 which we need to index the table. All we need to do now is find magic that gives us a different value for each of our eight 4-bit keys.
In this case, the 4-bit magic number that we need is 5 (0b0101). You can find the number by brute force, by looking for magic such that fold*magic >> 1 yields a unique value for each of the eight "fold" keys. (It is worth noting here that I am assuming that any overflowing bits from the 4-bit multiplication of fold and magic disappear):
m m^(m-1) "fold" fold*5 fold*5 >> 1
XXXX XXX1 0000 0001 0001 0101 010 (dec: 2)
XXXX XX10 0000 0011 0011 1111 111 (dec: 7)
XXXX X100 0000 0111 0111 0011 001 (dec: 1)
XXXX 1000 0000 1111 1111 1011 101 (dec: 5)
XXX1 0000 0001 1111 1110 0110 011 (dec: 3)
XX10 0000 0011 1111 1100 1100 110 (dec: 6)
X100 0000 0111 1111 1000 1000 100 (dec: 4)
1000 0000 1111 1111 0000 0000 000 (dec: 0)
So when the LSB is the 0th bit, fold*5 >> 1 will be 2, so we put zero at index 2 of the array. When the LSB is the 1st bit, fold*5 >> 1 will be 7, so we put 1 at index 7 of the array. Etc, etc.
From the results above we can build the array as {7, 2, 0, 4, 6, 3, 5, 1}. The ordering of the array looks meaningless, but really it is just related to the order in which fold*5 >> 1 spits out the index values.
As to whether or not you should actually do this, I imagine that it is hardware dependent; however, my processor can do the same thing about five times faster with the MSVC intrinsic _BitScanForward64:
unsigned long idx;
_BitScanForward64(&idx, bb);
bb &= bb - 1;
I get this result when I bitwise & -4227072 and 0x7fffff:
0b1111111000000000000000
These are the bit representations for the two values I'm &'ing:
-0b10000001000000000000000
0b11111111111111111111111
Shouldn't &'ing them together instead give this?
0b10000001000000000000000
Thanks.
-4227072 == 0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
-4227072 & 0x7fffff should be
0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
& 0x7fffff == 0000 0000 0111 1111 1111 1111 1111 1111
-----------------------------------------------------
0x003F8000 == 0000 0000 0011 1111 1000 0000 0000 0000
The negative number is represented as its 2's complement in the computer's memory, so the binary representation you posted is misleading. In 2's complement, the most significant bit of a k-bit number (bit k−1) carries the value −2^(k−1). The remaining bits have the positive weights you expect.
Assuming you are dealing with 32 bit signed integers, we have:
  1111 1111 1011 1111 1000 0000 0000 0000 = −4227072 (decimal)
& 0000 0000 0111 1111 1111 1111 1111 1111 = 0x7fffff
—————————————————————————————————————————
  0000 0000 0011 1111 1000 0000 0000 0000
Which is what you got.
To verify the first line:
−1 × 2^31 = −2147483648
 1 × 2^30 =  1073741824
 1 × 2^29 =   536870912
 1 × 2^28 =   268435456
 1 × 2^27 =   134217728
 1 × 2^26 =    67108864
 1 × 2^25 =    33554432
 1 × 2^24 =    16777216
 1 × 2^23 =     8388608
 1 × 2^21 =     2097152
 1 × 2^20 =     1048576
 1 × 2^19 =      524288
 1 × 2^18 =      262144
 1 × 2^17 =      131072
 1 × 2^16 =       65536
 1 × 2^15 =       32768
———————————————————————
             −4227072 ✓
0b10000001000000000000000 would be correct, but only if your integer encoding were sign-magnitude.
That is possible on some early or unusual machines. The other answer explains well how negative integers are typically represented as 2's complement numbers, in which case the result is what you observed: 0b1111111000000000000000.
Can somebody tell me how to compute the multiplication table of 9 using bitwise operators? A detailed description would be greatly appreciated.
To multiply by 2 to the power of N (i.e. 2^N), shift the bits N times to the left:
0000 0001 = 1
times 4 (2^2 => N = 2, a 2-bit shift)                        : 0000 0100 = 4
times 8 again (2^3 => N = 3, a 3-bit shift of that result)   : 0010 0000 = 32
etc..
Times 9: shift left by 3 (a multiply by 8), then add the original value, like this:

0000 1001   // 9, the multiplier, for reference

0001 0000   // 2 shifted 3 to the left
0000 0010 + // 2
-----------
0001 0010 = 18

0001 1000   // 3 (0000 0011) shifted 3 to the left
0000 0011 + // 3
-----------
0001 1011 = 27

0010 0000   // 4 (0000 0100) shifted 3 to the left
0000 0100 + // 4
-----------
0010 0100 = 36

etc..
Meaning: 9n = (n << 3) + n
Shift-and-add multiplication
A bit shift left by one is a multiplication by 2, so three shifts multiply by 2x2x2 = 8; then add the original value once more to get the x9.
i.e.
(((v << 1) << 1) << 1) + v == (v << 3) + v
9 is 1001 in binary, therefore shift left 3 times and add the original value.
I am looking at some examples in the 1750A format webpage and some of the examples do not really make sense. I have included the 1750A format specification at the bottom of this post in case anyone isn't familiar with it.
Take this example from Table 3 of the 1750A format webpage:
.625x2^4 = 5000 00 04
In binary 5000 00 04 is 0101 0000 0000 0000 0000 0000 0000 0100
If you convert this to decimal, it does not equal 10, which is .625x2^4. Maybe I am converting it incorrectly.
Take the mantissa, 101 0000 0000 0000 0000 0000, and subtract 1, giving 100 1111 1111 1111 1111 1111. Then flip the bits, giving 011 0000 0000 0000 0000 0000. Move the binary point 4 places (since our exponent, 0100, is 4), giving us 0110.0000 0000 0000 0000 000. This equals 6.0, which is not .625x2^4.
I believe the actual value should be 0011 0000 0000 0000 0000 0000 0000 0100, or 0x30000004.
Can anyone else confirm my suspicions that this value is labeled incorrectly in Table 3 of the 1750A format page above?
Thank you
As explained previously, the sign+mantissa is interpreted as a 2's-complement value between -1 and +1.
In your case, it's 0.101000000... (base-2). Which is 1/2 + 1/8 = 0.625 (base-10).
It all makes perfect sense.
Here:
0101 0000 0000 0000 0000 0000 0000 0100
you've got:
(0×2^0 + 1×2^−1 + 0×2^−2 + 1×2^−3 + 0×2^−4 + ... + 0×2^−23) × 2^4 = (0.5 + 0.125) × 16 = 0.625 × 16 = 10
Just do the math.