T-SQL & operator - sql-server

I just came across this operator, which I have never seen being used before. What is the & operator doing in this line of code? It seems like ( @MyVar & 64 ) is another way of writing @MyVar = 64
DECLARE @MyVar INT
SET @MyVar = 16 -- Prints yes
SET @MyVar = 64 -- Prints no
IF ( ( @MyVar & 64 ) = 0 )
BEGIN
    SELECT 'yes'
END
ELSE
BEGIN
    SELECT 'no'
END

This is bitwise AND. In fact, as written, the expression inside the IF can only ever evaluate to 0 or 64, no other numbers.
This is verbatim from https://learn.microsoft.com/en-us/sql/t-sql/language-elements/bitwise-and-transact-sql :
The & bitwise operator performs a bitwise logical AND between the two
values, taking each corresponding bit for both expressions. The bits
in the result are set to 1 if and only if both bits (for the current
bit being resolved) in the input expressions have a value of 1;
otherwise, the bit in the result is set to 0.
Let's see what this does:
DECLARE @myint int = 16
SELECT @myint & 64 [myint 0] --0
/*
--This is the bitwise AND representation for 16 & 64:
0000 0000 0100 0000 -- 64
0000 0000 0001 0000 -- @MyVar = 16
-------------------
0000 0000 0000 0000 -- = 0 = 'yes'
*/
SET @myint = 64
SELECT @myint & 64 [myint 64] --64
/*
--This is the bitwise AND representation for 64 & 64:
0000 0000 0100 0000 -- 64
0000 0000 0100 0000 -- @MyVar = 64
-------------------
0000 0000 0100 0000 -- = 64 = 'no'
*/
This applies to other numbers as well; try 127 and 128:
/*
0000 0000 0100 0000 -- 64
0000 0000 0111 1111 -- @MyVar = 127
-------------------
0000 0000 0100 0000 -- = 64 = 'no'

0000 0000 0100 0000 -- 64
0000 0000 1000 0000 -- @MyVar = 128
-------------------
0000 0000 0000 0000 -- = 0 = 'yes'
*/
127 & 64 = 64.
128 & 64 = 0.


How to find magic bitboards?

const int BitTable[64] = {
    63, 30, 3, 32, 25, 41, 22, 33, 15, 50, 42, 13, 11, 53, 19, 34, 61, 29, 2,
    51, 21, 43, 45, 10, 18, 47, 1, 54, 9, 57, 0, 35, 62, 31, 40, 4, 49, 5, 52,
    26, 60, 6, 23, 44, 46, 27, 56, 16, 7, 39, 48, 24, 59, 14, 12, 55, 38, 28,
    58, 20, 37, 17, 36, 8
};

int pop_1st_bit(uint64 *bb) {
    uint64 b = *bb ^ (*bb - 1);
    unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
    *bb &= (*bb - 1);
    return BitTable[(fold * 0x783a9b23) >> 26];
}

uint64 index_to_uint64(int index, int bits, uint64 m) {
    int i, j;
    uint64 result = 0ULL;
    for(i = 0; i < bits; i++) {
        j = pop_1st_bit(&m);
        if(index & (1 << i)) result |= (1ULL << j);
    }
    return result;
}
It's from the Chess Programming Wiki:
https://www.chessprogramming.org/Looking_for_Magics
It's part of some code for finding magic numbers.
The argument uint64 m is a bitboard representing the possible blocked squares for either a rook or bishop move. Example for a rook on the e4 square:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
The edge squares are zero because they always block, and reducing the number of bits needed is apparently helpful.
/* Bitboard, LSB to MSB, a1 through h8:
* 56 - - - - - - 63
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* - - - - - - - -
* 0 - - - - - - 7
*/
So in the example above, index_to_uint64 takes an index (0 to 2^bits), and the number of bits set in the bitboard (10), and the bitboard.
It then calls pop_1st_bit once for each of those bits, followed by another shifty bit of code. pop_1st_bit XORs the bitboard with itself minus one (why?). Then it ANDs that with a full 32 bits of 1s, and somewhere around here my brain runs out of RAM. Somehow the magical hex number 0x783a9b23 is involved (is that the number sequence from Lost?). And there is this ridiculous mystery array of randomly ordered numbers from 0-63 (BitTable[64]).
Alright, I have it figured out.
First, some terminology:
blocker mask: A bitboard containing all squares that can block a piece, for a given piece type and the square the piece is on. It excludes terminating edge squares because they always block.
blocker board: A bitboard containing occupied squares. It only has squares which are also in the blocker mask.
move board: A bitboard containing all squares a piece can move to, given a piece type, a square, and a blocker board. It includes terminating edge squares if the piece can move there.
Example for a rook on the e4 square, with some random pieces on e2, e5, e7, b4, and c4:
The blocker mask A blocker board The move board
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Some things to note:
The blocker mask is always the same for a given square and piece type (either rook or bishop).
Blocker boards include friendly & enemy pieces, and it is a subset of the blocker mask.
The resulting move board may include moves that capture your own pieces; however, these moves are easily removed afterward: moveboard &= ~friendly_pieces
The goal of the magic numbers method is to very quickly look up a pre-calculated move board for a given blocker board. Otherwise you'd have to (slowly) calculate the move board every time. This only applies to sliding pieces, namely the rook and bishop. The queen is just a combination of the rook and bishop.
Magic numbers can be found for each square & piece type combo. To do this, you have to calculate every possible blocker board variation for each square/piece combo. This is what the code in question is doing. How it's doing it is still a bit of a mystery to me, but that also seems to be the case for the apparent original author, Matt Taylor. (Thanks to @Pradhan for the link)
So what I've done is re-implemented the code for generating all possible blocker board variations. It uses a different technique, and while it's a little slower, it's much easier to read and comprehend. The fact that it's slightly slower is not a problem, because this code isn't speed critical. The program only has to do it once at program startup, and it only takes microseconds on a dual-core i5.
/* Generate a unique blocker board, given an index (0..2^bits) and the blocker mask
 * for the piece/square. Each index will give a unique blocker board. */
static uint64_t gen_blockerboard (int index, uint64_t blockermask)
{
    /* Start with a blockerboard identical to the mask. */
    uint64_t blockerboard = blockermask;

    /* Loop through the blockermask to find the indices of all set bits. */
    int8_t bitindex = 0;
    for (int8_t i = 0; i < 64; i++) {
        /* Check if the i'th bit is set in the mask (and thus a potential blocker). */
        if ( blockermask & (1ULL << i) ) {
            /* Clear the i'th bit in the blockerboard if it's clear in the index at bitindex. */
            if ( !(index & (1 << bitindex)) ) {
                blockerboard &= ~(1ULL << i); /* Clear the bit. */
            }
            /* Increment the bit index in the 0..4095 index, so each bit in index
             * corresponds to a set bit in blockermask. */
            bitindex++;
        }
    }
    return blockerboard;
}
To use it, do something like this:
int bits = count_bits( RookBlockermask[square] );

/* Generate all (2^bits) blocker boards. */
for (int i = 0; i < (1 << bits); i++) {
    RookBlockerboard[square][i] = gen_blockerboard( i, RookBlockermask[square] );
}
How it works: There are 2^bits blocker boards, where bits is the number of 1's in the blocker mask, which are the only relevant bits. Also, each integer from 0 to 2^bits has a unique sequence of 1's and 0's of length bits. So this function just corresponds each bit in the given integer to a relevant bit in the blocker mask, and turns it off/on accordingly to generate a unique blocker board.
It's not as clever or fast, but it's readable.
Alright, I'm going to try to step through this.
index_to_uint64( 7, 10, m );
7 is just a randomly chosen number between 0 and 2^10, and 10 is the number of bits set in m. m can be represented in four ways:
bitboard:
0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 1 1 1 0 1 1 0
0 0 0 0 1 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0
dec: 4521262379438080
hex: 0x1010106e101000
bin: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
Moving on. This will be called 10 times. It has a return value and it modifies m.
pop_1st_bit(&m);
In pop_1st_bit, m is referred to by bb. I'll change it to m for clarity.
uint64 b = m^(m-1);
The m-1 part takes the least significant bit that is set and flips it and all the bits below it. After the XOR, all those changed bits are now set to 1 while all the higher bits are set to 0.
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
b : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
Next:
unsigned int fold = (unsigned) ((b & 0xffffffff) ^ (b >> 32));
The (b & 0xffffffff) part ANDs b with a mask whose lower 32 bits are all set. So this essentially clears any bits in the upper half of b.
(b & 0xffffffff)
b: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
&: 0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111
=: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
The ... ^ (b >> 32) part shifts the upper half of b into the lower half, then XORs it with the result of the previous operation. So it basically XORs the top half of b with the lower half of b. This has no effect in this case because the upper half of b was empty to begin with.
>> :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
^ :0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
uint fold = 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 1111 1111 1111
I don't understand the point of that "folding", even if there had been bits set in the upper half of b.
Anyways, moving on. This next line actually modifies m by unsetting the lowest bit. That makes some sense.
m &= (m - 1);
m : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0001 0000 0000 0000
m-1: 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 1111 1111 1111
& : 0000 0000 0001 0000 0001 0000 0001 0000 0110 1110 0001 0000 0000 0000 0000 0000
This next part multiplies fold by some hex number (a prime?), right shifts the product 26, and uses that as an index into BitTable, our mysterious array of randomly ordered numbers 0-63. At this point I suspect the author might be writing a pseudo random number generator.
return BitTable[(fold * 0x783a9b23) >> 26];
That concludes pop_1st_bit. That's all done 10 times (once for each bit originally set in m). Each of the 10 calls to pop_1st_bit returns a number 0-63.
j = pop_1st_bit(&m);
if(index & (1 << i)) result |= (1ULL << j);
In the above two lines, i is the current bit we are on, 0-9. So if the index number (the 7 originally passed as an argument to index_to_uint64) has the i'th bit set, then set the j'th bit in the result, where j was the 0-63 return value from pop_1st_bit.
And that's it! I'm still confused :(
When watching a video series on chess engines on YouTube I had exactly the same questions as paulwal222. There seems to be some high-level mathematics involved. The best links explaining the background of this difficult subject are https://chessprogramming.wikispaces.com/Matt+Taylor and https://chessprogramming.wikispaces.com/BitScan . It seems that Matt Taylor in 2003, in a Google Groups thread ( https://groups.google.com/forum/#!topic/comp.lang.asm.x86/3pVGzQGb1ys ) (also found by pradhan), came up with what is now called Matt Taylor's folding trick, a 32-bit-friendly implementation for finding the bit index of the LS1B ( https://en.wikipedia.org/wiki/Find_first_set ). Taylor's folding trick apparently is an adaptation of the De Bruijn ( https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn ) bitscan, devised in 1997, according to Donald Knuth, by Martin Läuter to determine the LS1B index by minimal perfect hashing ( https://en.wikipedia.org/wiki/Perfect_hash_function ). The numbers in BitTable (63, 30, ..) and the fold multiplier in pop_1st_bit (0x783a9b23) are probably the magic numbers (uniquely?) related to Matt Taylor's 32-bit folding trick. The folding trick seems to be very fast, because lots of engines have copied this approach (e.g. Stockfish).
The goal is to find and return the position of the least significant bit (LSB) in a given integer and then set the LSB to zero, therefore if m = 1010 0100, we want the function to return the index 2 and to modify m as: m = 1010 0000. In order to see what is going on in this process, it is easiest to look at the 8-bit case and what happens in the first 2 lines of pop_1st_bit:
m m^(m-1) "fold"
XXXX XXX1 0000 0001 0001
XXXX XX10 0000 0011 0011
XXXX X100 0000 0111 0111
XXXX 1000 0000 1111 1111
XXX1 0000 0001 1111 1110
XX10 0000 0011 1111 1100
X100 0000 0111 1111 1000
1000 0000 1111 1111 0000
In the above, X means we don't care about the value of the bits in those positions. So, m^(m-1) maps every 8-bit number to one of eight 8-bit keys based on the position of its LSB. The fold operation then converts each of these 8-bit keys to a unique 4-bit key. The reason for doing this in the original (64-bit) case is to avoid a 64-bit multiplication in fold*magic by replacing it with a 32-bit multiplication.
The table for the 8-bit case contains eight values (one for each of the possible LSB positions in an 8-bit integer). Thus, we need 3 bits to index the table. The right shift is going to give us those 3 bits, the formula for calculating how many bits to shift by:
[Number of bits in key] - log2([size of table])
So, in the original case, we get: 32 - log2(64) = 26, whereas in the 8-bit case we get 4 - log2(8) = 1.
Therefore: table_index = fold*magic >> 1, will give us numbers between 0-7 which we need to index the table. All we need to do now is find magic that gives us a different value for each of our eight 4-bit keys.
In this case, the 4-bit magic number that we need is 5 (0b0101). You can find the number by brute force, by looking for magic such that fold*magic >> 1 yields a unique value for each of the eight "fold" keys. (It is worth noting here that I am assuming that any overflowing bits from the 4-bit multiplication of fold and magic disappear):
m m^(m-1) "fold" fold*5 fold*5 >> 1
XXXX XXX1 0000 0001 0001 0101 010 (dec: 2)
XXXX XX10 0000 0011 0011 1111 111 (dec: 7)
XXXX X100 0000 0111 0111 0011 001 (dec: 1)
XXXX 1000 0000 1111 1111 1011 101 (dec: 5)
XXX1 0000 0001 1111 1110 0110 011 (dec: 3)
XX10 0000 0011 1111 1100 1100 110 (dec: 6)
X100 0000 0111 1111 1000 1000 100 (dec: 4)
1000 0000 1111 1111 0000 0000 000 (dec: 0)
So when the LSB is the 0th bit, fold*5 >> 1 will be 2, so we put zero at index 2 of the array. When the LSB is the 1st bit, fold*5 >> 1 will be 7, so we put 1 at index 7 of the array. Etc, etc.
From the results above we can build the array as {7, 2, 0, 4, 6, 3, 5, 1}. The ordering of the array looks meaningless, but really it is just related to the order in which fold*5 >> 1 spits out the index values.
As to whether or not you should actually do this, I imagine that it is hardware dependent, however, my processor can do the same thing about five times faster with:
unsigned long idx;
_BitScanForward64(&idx, bb);
bb &= bb - 1;

Bit masking confusion

I get this result when I bitwise & -4227072 and 0x7fffff:
0b1111111000000000000000
These are the bit representations for the two values I'm &'ing:
-0b10000001000000000000000
0b11111111111111111111111
Shouldn't &'ing them together instead give this?
0b10000001000000000000000
Thanks.
-4227072 == 0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
-4227072 & 0x7fffff should be
   0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
 & 0x007FFFFF == 0000 0000 0111 1111 1111 1111 1111 1111
 -------------------------------------------------------
   0x003F8000 == 0000 0000 0011 1111 1000 0000 0000 0000
The negative number is represented as its 2's complement inside the computer's memory. The binary representation you have posted is thus misleading. In 2's complement, the most significant bit of a k-bit number has value −2^(k−1). The remaining bits are positive as you expect.
Assuming you are dealing with 32 bit signed integers, we have:
  1111 1111 1011 1111 1000 0000 0000 0000 = −4227072 (decimal)
& 0000 0000 0111 1111 1111 1111 1111 1111 = 0x7fffff
—————————————————————————————————————————
  0000 0000 0011 1111 1000 0000 0000 0000
Which is what you got.
To verify the first line:
−1 × 2^31 = −2147483648
 1 × 2^30 =  1073741824
 1 × 2^29 =   536870912
 1 × 2^28 =   268435456
 1 × 2^27 =   134217728
 1 × 2^26 =    67108864
 1 × 2^25 =    33554432
 1 × 2^24 =    16777216
 1 × 2^23 =     8388608
 1 × 2^21 =     2097152
 1 × 2^20 =     1048576
 1 × 2^19 =      524288
 1 × 2^18 =      262144
 1 × 2^17 =      131072
 1 × 2^16 =       65536
 1 × 2^15 =       32768
——————————————————————————————
            −4227072 ✓
(Bit 22 is 0 in 0xFFBF8000, so 2^22 does not appear in the sum.)
0b10000001000000000000000 is correct - if your integer encoding was signed-magnitude.
This is possible on some early or novel machines. Another answer well explains how negative integers are typically represented as 2's complement numbers and then the result is as you observed: 0b1111111000000000000000.

MIL-STD-1750A to Decimal Conversion Examples

I am looking at some examples in the 1750A format webpage and some of the examples do not really make sense. I have included the 1750A format specification at the bottom of this post in case anyone isn't familiar with it.
Take this example from Table 3 of the 1750A format webpage:
.625x2^4 = 5000 00 04
In binary 5000 00 04 is 0101 0000 0000 0000 0000 0000 0000 0100
If you convert this to decimal, it does not equal 10, which is .625x2^4. Maybe I am converting it incorrectly.
Take the mantissa, 101 0000 0000 0000 0000 0000 and subtract 1 giving 100 1111 1111 1111 1111 1111. Then flip the bits, giving 011 0000 0000 0000 0000 0000. Move the decimal 4 places (since our exponent, 0100 is 4), giving us 0110.0000 0000 0000 0000 000. This equals 6.0, which is not .625x2^4.
I believe the actual value, should be 0011 0000 0000 0000 0000 0000 0000 01000 or 30000004 in hex.
Can anyone else confirm my suspicions that this value is labeled incorrectly in Table 3 of the 1750A format page above?
Thank you
As explained previously, the sign+mantissa is interpreted as a 2's-complement value between -1 and +1.
In your case, it's 0.101000000... (base-2), which is 1/2 + 1/8 = 0.625 (base-10).
It all makes perfect sense.
Here:
0101 0000 0000 0000 0000 0000 0000 0100
you've got:
(0×2^0 + 1×2^−1 + 0×2^−2 + 1×2^−3 + 0×2^−4 + ... + 0×2^−23) × 2^4 = (0.5 + 0.125) × 16 = 0.625 × 16 = 10
Just do the math.

n & (n-1) what does this expression do? [duplicate]

Possible Duplicates:
Query about working out whether number is a power of 2
What does this function do?
n & (n-1) - where can this expression be used ?
It's figuring out if n is either 0 or an exact power of two.
It works because a binary power of two is of the form 1000...000 and subtracting one will give you 111...111. Then, when you AND those together, you get zero, such as with:
1000 0000 0000 0000
& 111 1111 1111 1111
==== ==== ==== ====
= 0000 0000 0000 0000
Any non-power-of-two input value (other than zero) will not give you zero when you perform that operation.
For example, let's try all the 4-bit combinations:
<----- binary ---->
n n n-1 n&(n-1)
-- ---- ---- -------
0 0000 1111 0000 *
1 0001 0000 0000 *
2 0010 0001 0000 *
3 0011 0010 0010
4 0100 0011 0000 *
5 0101 0100 0100
6 0110 0101 0100
7 0111 0110 0110
8 1000 0111 0000 *
9 1001 1000 1000
10 1010 1001 1000
11 1011 1010 1010
12 1100 1011 1000
13 1101 1100 1100
14 1110 1101 1100
15 1111 1110 1110
You can see that only 0 and the powers of two (1, 2, 4 and 8) result in a 0000/false bit pattern, all others are non-zero or true.
It returns 0 if n is a power of 2 (NB: only works for n > 0). So you can test for a power of 2 like this:
bool isPowerOfTwo(int n)
{
    return (n > 0) && ((n & (n - 1)) == 0);
}
It checks if n is a power of 2: What does the bitwise code "$n & ($n - 1)" do?
It's a bitwise AND between a number and its predecessor. The only way this expression can be zero is if n is a power of 2 (or zero), so essentially you're testing whether n is a power of 2.

bitwise operations

// PWM frequency:
// 0 - 48 kHz
// 1 - 12 kHz
// 2 - 3 kHz
enum { MOTOR_FREQUENCY = 1 };
// Configure Timer 2 w. 250x period.
T2CON = 1 << 2 | MOTOR_FREQUENCY /* << 0 */;
Have I understood this right?
11111111 Arithmetic left-shift-by-two of 0 or 1 or 2
Means:
T2CON = 1 << 2 | 0 = 1111 1100
T2CON = 1 << 2 | 1 = 1111 1000
T2CON = 1 << 2 | 2 = 1111 0000
Kind Regards, Sonite
1 << 2 = 100b
So with the OR:
100b | 1 = 101b
100b | 2 = 110b
Assuming that you are playing with a microcontroller with 8-bit registers:
0000 0001 << 2 = 0000 0100
then
0000 0100 OR 0000 0000 = 0000 0100
-----
0000 0001 << 2 = 0000 0100
then
0000 0100 OR 0000 0001 = 0000 0101
-----
0000 0001 << 2 = 0000 0100
then
0000 0100 OR 0000 0010 = 0000 0110
Context:
T2CON is a timer control register on PIC MCUs, where the last two bits configure the prescaler.
T2CKPS[1:0] = 0b00 = 0 => /1 prescaler
T2CKPS[1:0] = 0b01 = 1 => /4 prescaler
T2CKPS[1:0] = 0b1x = 2 or 3 => /16 prescaler
Bit 2 actually switches the timer on, so it always needs to be set to do anything, hence the 1 << 2 (which really should be written as 1 << T2CON_TMR2ON_bit with T2CON_TMR2ON_bit being defined in some CPU-configuration header)
All said and done, the three settings are 0b100, 0b101, and 0b110, which turn on the timer, and tweak the prescaler to get those frequencies mentioned in the comments.
Also, using an enum with one element is just about pointless; use #define.
