Bitwise bit shifting and why this solution works - arrays

I've been doing code fights on codefights.com, and I came upon the problem below. I figured out the problem on my own, but when I researched other people's solutions I found one that was much shorter than mine, and I can't seem to understand why they did what they did.
The question goes:
You are given an array of up to four non-negative integers, each less than 256.
Your task is to pack these integers into one number M in the following way:
The first element of the array occupies the first 8 bits of M;
The second element occupies next 8 bits, and so on.
Return the obtained integer M.
Note: the phrase "first bits of M" refers to the least significant bits of M - the right-most bits of an integer. For further clarification see the following example.
Example
For a = [24, 85, 0], the output should be
arrayPacking(a) = 21784.
An array [24, 85, 0] looks like [00011000, 01010101, 00000000] in binary.
After packing these into one number we get 00000000 01010101 00011000 (spaces are placed for convenience), which equals 21784.
Their answer was:
func arrayPacking(a []int) (sum int) {
    for i := range a {
        sum += a[len(a) - i - 1] << uint((len(a) - i - 1) * 8)
    }
    return
}
How is this code producing the right shift amounts just by using intervals of 0, 8, 16, etc.? I've been researching bitwise operations a lot lately, but I still can't seem to reason about why this works.

First, write the solution in Go. We combine the little-endian, base-256 digits into a single integer; shifting left by 8 bits multiplies by 256.
package main

import (
    "fmt"
)

func pack(digits []int) (number int) {
    // digits are little-endian, base-256 (1 << 8 = 256)
    for i, digit := range digits {
        number += digit << uint(i*8)
    }
    return number
}

func main() {
    a := []int{24, 85, 0}
    m := pack(a)
    fmt.Println(m)
}
func main() {
a := []int{24, 85, 0}
m := pack(a)
fmt.Println(m)
}
Playground: https://play.golang.org/p/oo_n7CiAHwG
Output:
21784
Now you should be able to figure out their ugly-looking answer:
func arrayPacking(a []int) (sum int) {
    for i := range a {
        sum += a[len(a) - i - 1] << uint((len(a) - i - 1) * 8)
    }
    return
}
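Unrolling their loop for a = [24, 85, 0] makes the shift amounts concrete:
sum = (a[2] << 16) + (a[1] << 8) + (a[0] << 0)
    = (0 << 16) + (85 << 8) + (24 << 0)
    = 0 + 21760 + 24
    = 21784
Each element is shifted past the 8-bit slots below it, so the three bytes land in non-overlapping positions and the additions never interfere.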

Bit shifting by multiples of 8 is the same as multiplying by powers of 256, e.g. x << 0*8 == x * 256⁰, x << 1*8 == x * 256¹, x << 2*8 == x * 256², etc., so the code can be rewritten like this, using math.Pow:
func arrayPacking(a []int) (sum int) {
    for i := range a {
        sum += a[len(a) - i - 1] * int(math.Pow(256, float64(len(a) - i - 1))) // math.Pow takes float64 arguments
    }
    return
}
Or is your question why this sort of packing works?

Related

Swift Bitshift Integers in Array to the Right

I've found a similar example in Python.
Essentially, say you have an array which is [1, 2, 3], and in binary that would be [0b00000001, 0b00000010, 0b00000011], what's the fastest way to bitshift that into [0b00000000, 0b10000001, 0b00000001]?
Basically bitshifting an array as if it were one huge int. (The left-hand side can always be assumed to be 0.)
Unfortunately I haven't got any code, because I have no clue how to achieve this, sorry!
I think you could do it something like this:
func bitShift(array: [UInt8]) -> [UInt8] {
    var output = array
    var prevParity = false
    for i in 0..<output.count {
        // check parity of current value and store it
        let tempParity = (output[i] & 1) == 1
        // bitshift the current value to the right
        output[i] = output[i] >> 1
        if prevParity {
            // add on the first one if the previous value was odd
            output[i] = output[i] | 0b10000000
        }
        // store tempParity into prevParity for next loop
        prevParity = tempParity
    }
    return output
}
Advanced operators... https://docs.swift.org/swift-book/LanguageGuide/AdvancedOperators.html
(I do not know Swift!)
var bignum = [1, 2, 3]
var carry = 0
let signBit = 1 << 31 // or 1 << 63, or Int.min
for i in 0..<bignum.count {
    var n = bignum[i]
    let nextCarry = (n & 1) == 1 ? signBit : 0
    n = (n >> 1) & ~signBit // >> is arithmetic on a signed Int, so clear the copied sign bit
    if carry != 0 {
        n = n | carry
    }
    bignum[i] = n
    carry = nextCarry
}
The sign bit must be set when the prior number was odd (ended with bit 1).
A plain >> on a signed value is an arithmetic shift right, which preserves the sign, so the copied-in sign bit has to be masked off to get the effect of a logical shift right (>>> in Java).
The sign bit is the highest bit, and I believe Swift has both 32- and 64-bit Ints.
There might be some constant for it, like Java's Integer.MIN_VALUE (Int.min?); one could also take a 1 and shift it left until the value becomes negative.
In general, UInt64 would be best to use, since >> on an unsigned type shifts in a 0 bit.
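For comparison, here is a minimal sketch of the same carry-propagating right shift in C rather than Swift, where using an unsigned byte type sidesteps the sign-bit problem entirely:
#include <stdint.h>
#include <stdio.h>

/* Shift the whole array right by one bit, element 0 being the most significant byte. */
static void shift_right_one(uint8_t *a, size_t n)
{
    uint8_t carry = 0;                                /* bit carried in from the byte above */
    for (size_t i = 0; i < n; i++) {
        uint8_t next_carry = a[i] & 1;                /* bit that falls off this byte */
        a[i] = (uint8_t)((a[i] >> 1) | (carry << 7)); /* unsigned, so >> shifts in a 0 */
        carry = next_carry;
    }
}

int main(void)
{
    uint8_t a[] = { 0x01, 0x02, 0x03 };
    shift_right_one(a, 3);
    printf("%02x %02x %02x\n", a[0], a[1], a[2]); /* prints 00 81 01 */
    return 0;
}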

How to get position of right most set bit in C

int a = 12;
For example: the binary of 12 is 1100, so the answer should be 3, as the 3rd bit from the right is set.
I want the position of the rightmost set bit of a. Can anyone tell me how I can do so?
NOTE: I want the position only; I don't want to set or reset the bit, so it is not a duplicate of any question on Stack Overflow.
This answer, Unset the rightmost set bit, tells how to both get and unset the rightmost set bit for an unsigned integer, or a signed integer represented as two's complement.
get rightmost set bit,
x & -x
// or
x & (~x + 1)
unset rightmost set bit,
x &= x - 1
// or
x -= x & -x // rhs is rightmost set bit
Why it works:
x:            (leading bits) 1 (all 0s)
~x:           (flipped leading bits) 0 (all 1s)
~x + 1 (-x):  (flipped leading bits) 1 (all 0s)
x & -x:       (all 0s) 1 (all 0s)
E.g., let x = 112, and choose 8 bits for simplicity, though the idea is the same for all sizes of integer.
// example for get rightmost set bit
x: 01110000
~x: 10001111
-x or ~x + 1: 10010000
x & -x: 00010000
// example for unset rightmost set bit
x: 01110000
x-1: 01101111
x & (x-1): 01100000
Finding the (0-based) index of the least significant set bit is equivalent to counting how many trailing zeros a given integer has. Depending on your compiler there are builtin functions for this, for example gcc and clang support __builtin_ctz.
For MSVC you would need to implement your own version, this answer to a different question shows a solution making use of MSVC intrinsics.
Given that you are looking for the 1-based index, you simply need to add 1 to ctz's result in order to achieve what you want.
int a = 12;
int least_bit = __builtin_ctz(a) + 1; // least_bit = 3
Note that this operation is undefined if a == 0. Furthermore there exist __builtin_ctzl and __builtin_ctzll which you should use if you are working with long and long long instead of int.
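For the MSVC route mentioned above, a rough sketch of the shape it could take (an assumption on my part, not the linked answer's exact code) is a wrapper around the _BitScanForward intrinsic from <intrin.h>:
#include <intrin.h>

/* 0-based index of the least significant set bit; like __builtin_ctz, undefined for x == 0 */
static int ctz_msvc(unsigned long x)
{
    unsigned long index;
    _BitScanForward(&index, x);
    return (int)index;
}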
One can use the property of 2s complement here.
The fastest way to find the 2s complement of a number is to keep everything up to and including the rightmost set bit and flip everything to the left of it.
For example: consider a 4 bit system
/* Number in binary */
4 = 0100
/* 2s complement of 4 */
complement = 1100
/* which is nothing but */
complement == -4
/* Result */
4 & (-4) = 0100
Notice that there is only one set bit, and it is at the rightmost set bit position of 4.
Similarly, we can generalise this for any n:
n&(-n) will contain only one set bit, which is at the rightmost set bit position of n.
Since there is only one set bit in n&(-n), it is a power of 2.
So finally we can get the bit position by:
log2(n&(-n))+1
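For instance, with n = 12 (1100): n & (-n) = 0100 = 4, and log2(4) + 1 = 2 + 1 = 3, which is the 1-based position asked for.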
The rightmost set bit of n can be obtained using the formula:
n & ~(n-1)
This works because when you calculate (n-1), you turn all the zeros below the rightmost set bit into 1s, and that rightmost set bit itself into 0.
Then you take the NOT of it, which leaves you with the following:
x = ~(the bits of the original number above the rightmost set bit) + (the rightmost 1 bit) + (trailing zeros)
Now, if you do (n & x), you get what you need, as the only bit that is 1 in both n and x is the rightmost set bit.
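For example, with n = 12 (1100): n-1 = 1011, ~(n-1) = 0100 (keeping 4 bits), and n & ~(n-1) = 1100 & 0100 = 0100, the rightmost set bit.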
Phewwwww .. :sweat_smile:
http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
helped me understand this.
There is a neat trick in Knuth 7.1.3 where you multiply by a "magic" number (found by a brute-force search) that maps the first few bits of the number to a unique value for each position of the rightmost bit, and then you can use a small lookup table. Here is an implementation of that trick for 32-bit values, adapted from the nlopt library (MIT/expat licensed).
/* Return position (0, 1, ...) of rightmost (least-significant) one bit in n.
*
* This code uses a 32-bit version of algorithm to find the rightmost
* one bit in Knuth, _The Art of Computer Programming_, volume 4A
* (draft fascicle), section 7.1.3, "Bitwise tricks and
* techniques."
*
* Assumes n has a 1 bit, i.e. n != 0
*
*/
#include <stdint.h>

static unsigned rightone32(uint32_t n)
{
    const uint32_t a = 0x05f66a47; /* magic number, found by brute force */
    static const unsigned decode[32] = { 0, 1, 2, 26, 23, 3, 15, 27, 24, 21, 19, 4, 12, 16, 28, 6, 31, 25, 22, 14, 20, 18, 11, 5, 30, 13, 17, 10, 29, 9, 8, 7 };
    n = a * (n & (-n));
    return decode[n >> 27];
}
Try this
int set_bit = n ^ (n&(n-1));
Explanation:
As noted in this answer, n&(n-1) unsets the last set bit.
So, if we unset the last set bit and XOR that with the original number, then by the nature of the XOR operation the last set bit becomes 1 and the rest of the bits become 0.
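For example, with n = 12 (1100): n & (n-1) = 1000, and 1100 ^ 1000 = 0100 = 4, the value of the rightmost set bit (take log2(4) = 2 if you want its 0-based position).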
1- Subtract 1 from the number: (a-1)
2- Take its negation: ~(a-1)
3- Take the AND with the original number:
int last_set_bit = a & ~(a-1)
The reason behind the subtraction is that (a-1) turns the rightmost set bit into 0 and the zeros below it into 1s; the negation flips those back and complements everything above, so the AND leaves only the rightmost set bit.
Check if a & 1 is 0. If so, shift right by one until it's not zero. The number of times you shift is how many bits from the right the rightmost set bit is.
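A minimal sketch of that counting loop in C (assuming a is non-zero, counting 0-based from the right):
int shifts_to_set_bit(unsigned a)   /* assumes a != 0 */
{
    int shifts = 0;
    while ((a & 1u) == 0) {         /* rightmost bit not set yet */
        a >>= 1;
        ++shifts;
    }
    return shifts;                  /* add 1 if you want the 1-based position */
}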
You can isolate the rightmost set bit by taking the bitwise XOR of n and (n&(n-1)); taking log2 of that value then gives its position:
int pos = (int)log2(n ^ (n & (n-1)));
I inherited this one, with a note that it came from HAKMEM (try it out here). It works on both signed and unsigned integers, logical or arithmetic right shift. It's also pretty efficient.
#include <stdio.h>

int rightmost1(int n) {
    int pos, temp;
    for (pos = 0, temp = ~n & (n - 1); temp > 0; temp >>= 1, ++pos);
    return pos;
}

int main()
{
    int pos = rightmost1(16);
    printf("%d", pos);
}
You must check all 32 bits starting at index 0 and working your way to the left. If you can bitwise-and your a with a one bit at that position and get a non-zero value back, it means the bit is set.
#include <limits.h>

int last_set_pos(int a) {
    for (int i = 0; i < sizeof a * CHAR_BIT; ++i) {
        if (a & (0x1 << i)) return i;
    }
    return -1; // a == 0
}
On typical systems int will be 32 bits, but doing sizeof a * CHAR_BIT will get you the right number of bits in a even if it is a different size.
According to dbush's solution, try this:
int rightMostSet(int a){
    if (!a) return -1; // means there isn't any set bit
    int i = 0;
    while ((a & 1) == 0) {
        i++;
        a >>= 1;
    }
    return i;
}
return log2(((num-1)^num)+1);
Explanation with example: 12 = 1100
num-1 = 11 = 1011
num ^ (num-1) = 12 ^ 11 = 7 (0111)
(num ^ (num-1)) + 1 = 8 (1000)
log2(8) = 3 (the answer).
x & ~(x-1) isolates the lowest bit that is one.
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    int setbit;
    unsigned long d;
    unsigned long n1;
    unsigned long n = 0xFFF7;
    double nlog2 = log(2);

    while (n)
    {
        n1 = (unsigned long)n & (unsigned long)(n - 1);
        d = n - n1;
        n = n1;
        setbit = log(d) / nlog2;
        printf("Set bit: %d\n", setbit);
    }
    return 0;
}
And the result is as below.
Set bit: 0
Set bit: 1
Set bit: 2
Set bit: 4
Set bit: 5
Set bit: 6
Set bit: 7
Set bit: 8
Set bit: 9
Set bit: 10
Set bit: 11
Set bit: 12
Set bit: 13
Set bit: 14
Set bit: 15
Let x be your integer input.
Bitwise AND it with 1.
If x is even, the rightmost bit is 0, and x & 1 returns 0.
If x is odd, the rightmost bit is 1, and x & 1 returns 1.
if ((x & 1) == 0)
{
    std::cout << "The rightmost bit is 0, i.e. even\n";
}
else
{
    std::cout << "The rightmost bit is 1, i.e. odd\n";
}
Alright, so number systems are just working with logarithms and exponents, so I'll dive into an approach that really makes sense to me.
I would prefer you read this first, because I write there about how I interpret logarithms.
When you perform the x & -x operation, it gives you the value that has only the rightmost set bit as 1 (for example, it can be 0001000 or 0000010). In the way I interpret logarithms, this value is the result of repeatedly growing at a rate of 2. We are interested in the number of binary digits in this value, because that number minus 1 is precisely the bit index of the set bit (the bit count begins with 0 while the digit count begins with 1). The number of digits is just the number of doublings plus 1, in accordance with my logic, or just the formula I mentioned in the previous link. And since x & -x is always an exact power of 2, we don't have to worry about fractional results (as we would for something like 65): taking log2 of x & -x gives the bit index directly. I did see an answer before that mentioned this, but digging into why it really works was something I felt like writing down.
P.S.: You could also count the number of digits and then subtract 1 from it to get the bit index.

Trying to understand an inline function

I am studying the following function:
inline xint dtally(xint x)
{
    xint t = 0;
    while (x) t += 1 << ((x % 10) * 6), x /= 10;
    return t;
}
I just want to know what this function does, i.e. what it computes and stores in the variable t.
This tallies the decimal digits of x in t: for each digit value 0-9, t holds a count of how many times that digit occurs, each count in its own 6-bit-wide field.
Note that each shift length is a multiple of 6. So if a digit is 0 the shift is 0, if the digit is 1 the shift is 6, if the digit is 9 the shift is 54, and so forth.
The reason 6 is used, I think, is so that the ten counters fit under 64 bits (10 × 6 = 60 bits).
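For instance, tracing dtally(122) digit by digit (assuming xint is a 64-bit integer type):
/* x = 122: digit 2 -> t += 1 << 12
 * x =  12: digit 2 -> t += 1 << 12   (the 6-bit field for digit 2 now holds 2)
 * x =   1: digit 1 -> t += 1 << 6    (the field for digit 1 now holds 1)
 * Result: t == (2 << 12) | (1 << 6); bits [6*d, 6*d+5] of t count occurrences of digit d.
 */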

Project Euler Problem 10 - Efficient Algorithm

I attempted Project Euler's problem 10 using the naive algorithm, and the running time looks like it will be hours. So I googled for an efficient algorithm and found this one by Shlomif Fish.
The code is reproduced below:
#include <stdio.h>
#include <string.h>
#include <math.h>

/* Declarations omitted from the original excerpt: the sieve bound and the bit
   array (one bit per number). Problem 10 sums the primes below 2,000,000. */
#define limit 2000000
static unsigned char bitmask[limit / 8 + 1];

int main(int argc, char * argv[])
{
    int p, i;
    int mark_limit;
    long long sum = 0;

    memset(bitmask, '\0', sizeof(bitmask));
    mark_limit = (int)sqrt(limit);
    for (p = 2; p <= mark_limit; p++)
    {
        if (! ( bitmask[p>>3]&(1 << (p&(8-1))) ) )
        {
            /* It is a prime. */
            sum += p;
            for (i = p*p; i <= limit; i += p)
            {
                bitmask[i>>3] |= (1 << (i&(8-1)));
            }
        }
    }
    for (; p <= limit; p++)
    {
        if (! ( bitmask[p>>3]&(1 << (p&(8-1))) ) )
        {
            sum += p;
        }
    }
    printf("%lld\n", sum); /* closing lines omitted from the original excerpt */
    return 0;
}
I have problems understanding the code. Specifically, how is this bit shifting code able to determine whether a number is prime or not?
if (! ( bitmask[p>>3]&(1 << (p&(8-1))) ) )
{
/* It is a prime. */
sum += p;
for (i=p*p;i<=limit;i+=p)
{
bitmask[i>>3] |= (1 << (i&(8-1)));
}
}
Can someone please explain this code block to me, especially the part bitmask[p>>3] & (1 << (p&(8-1)))? Thank you very much.
The code is a modified Sieve of Eratosthenes. He is packing one number into one bit: 0 = prime, 1 = composite. The bit shifting is to get to the correct bit in the byte array.
bitmask[p>>3]
is equivalent to
bitmask[p / 8]
which selects the correct byte in the bitmask[] array.
(p&(8-1))
equals p & 7, which selects the lower 3 bits of p. This is equivalent to p % 8
Overall we are selecting bit (p % 8) of byte bitmask[p / 8]. That is we are selecting the bit in the bitmask[] array which represents the number p.
The 1 << (p % 8) sets up a 1 bit correctly located in a byte. This is then AND'ed with the bitmask[p / 8] byte to see if that particular bit is set or not, thus checking whether p is a prime number.
The overall statement equates to if (isPrime(p)), using the already completed part of the sieve to help extend the sieve.
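Put differently, the code is open-coding a tiny bit array. Wrapping the two operations in helpers (hypothetical names, same eight-bits-per-byte layout) may make the intent clearer:
/* bit p of the sieve: non-zero means p has been marked composite */
static int test_bit(const unsigned char *bits, int p)
{
    return bits[p >> 3] & (1 << (p & 7));
}

/* mark p as composite */
static void set_bit(unsigned char *bits, int p)
{
    bits[p >> 3] |= (1 << (p & 7));
}
With those, the condition in the excerpt reads as if (!test_bit(bitmask, p)), i.e. if (isPrime(p)), and the inner loop body is just set_bit(bitmask, i).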
The bitmask is acting as an array of bits. Since you can't address bits individually, you first have to access the byte and then modify a bit within it. Shifting right by 3 is the same as dividing by 8, which puts you on the right byte. The one is then shifted into place by the remainder.
x>>3 is equivalent to x/8
x&(8-1) is equivalent to x%8
But on some older systems, the bit manipulations may have been faster.
The line sets the ith bit, where i has been determined not to be prime because it is a multiple of another prime number:
bitmask[i>>3] |= (1 << (i&(8-1)));
This line checks that the pth bit is not set, which means it is prime, since if it wasn't prime it would have been set by the line above.
if (! ( bitmask[p>>3]&(1 << (p&(8-1))) ) )

Is there an easy way to get which power of two a number is?

If I have a number that I am certain is a power of two, is there a way to get which power of two the number is? I have thought of this idea:
Having a count and shifting the number right by 1 and incrementing the count until the number is 0. Is there another way, though? Without keeping a counter?
Edit:
Here are some examples:
8 -> returns 3
16 -> returns 4
32 -> returns 5
The most elegant method is De Bruijn sequences. Here's a previous answer I gave to a similar question on how to use them to solve the problem:
Bit twiddling: which bit is set?
An often-more-practical approach is using your cpu's built-in instruction for finding the first/last bit set.
You could use the log function in cmath...
double exponent = log(number)/log(2.0);
...and then cast it to an int afterwards.
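In C terms, a minimal sketch of that, with a small rounding step since the division can land just below the exact integer:
#include <math.h>

/* number is assumed to be an exact power of two */
int exponent_of(double number)
{
    return (int)(log(number) / log(2.0) + 0.5); /* round to the nearest integer */
}
/* exponent_of(8) == 3, exponent_of(32) == 5 */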
If that number is called x, you can find it by computing log2f(x). The return value is a float.
You will need to include <math.h> in order to use log2f.
That method certainly would work. Another possible way would be to eliminate half of the possibilities every time. Say you have an 8-bit number: 00010000
Bitwise AND your number with a mask where half of the bits are one and the other half are zero, say 00001111.
00010000 & 00001111 = 00000000
Now you know it's not in the first four bits. Do this repeatedly, narrowing the mask, until you don't get 0:
00010000 & 00110000 = 00010000
Then narrow it down to the one possible bit which is 1; its position is your power of two.
Use binary search instead of linear:
public void binarySearch() throws Exception {
    int num = 0x40000;
    int k = 0;
    int shift = 16; // half the width of the type (16 for a 32-bit int, etc.)
    int a = 0xffff; // lower half should be f's
    while (shift != 0) {
        if ((num & a) == 0) {
            num = num >>> shift;
            k += shift;
            shift >>= 1;
        } else {
            shift >>= 1;
        }
        a = a >>> shift;
    }
    System.out.println(k);
}
If you're doing a for loop like I am, one method is to raise 2 to the loop counter before the comparison:
for (var i = 1; Math.pow(2, i) <= 1048576; i++) {
    // iterates every power of two until 2^20
}
