I have the following byte slice from which I need to extract bits and place them in a []int, as I intend to fetch individual bit values later. I am having a hard time figuring out how to do that.
Below is my code:
data := []byte{3, 255} // the binary representation of 3 and 255 is 00000011 11111111
What I need is a slice of bits: [0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1]
What I tried:
I tried converting the byte slice to a uint16 with BigEndian and then using strconv.FormatUint, but that fails with panic: runtime error: index out of range.
I saw many examples that simply print the bit representation of a number using fmt.Printf, but that is not useful for me, as I need an int slice for further bit value access.
Do I need to use bit shift operators here? Any help will be greatly appreciated.
One way is to loop over the bytes and use a second loop to shift each byte value bit by bit, testing the bits with a bitmask and adding the results to the output slice.
Here's an implementation of it:
func bits(bs []byte) []int {
    r := make([]int, len(bs)*8)
    for i, b := range bs {
        for j := 0; j < 8; j++ {
            r[i*8+j] = int(b >> uint(7-j) & 0x01)
        }
    }
    return r
}
Testing it:
fmt.Println(bits([]byte{3, 255}))
Output (try it on the Go Playground):
[0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1]
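Since the stated goal is to fetch individual bit values later, the result can simply be indexed; a small usage sketch reusing the bits function above:

b := bits([]byte{3, 255})
fmt.Println(b[5], b[6]) // 0 1 -- bits 5 and 6 of the stream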
Using the math/bits package provides a fairly straightforward solution.
func bitsToBits(data []byte) (st []int) {
    st = make([]int, len(data)*8) // Performance x 2 as no append occurs.
    for i, d := range data {
        for j := 0; j < 8; j++ {
            if bits.LeadingZeros8(d) == 0 {
                // No leading 0 means that it is a 1
                st[i*8+j] = 1
            } else {
                st[i*8+j] = 0
            }
            d = d << 1
        }
    }
    return
}
Performance is comparable to similar solutions.
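To check that claim, here is a minimal benchmark sketch (a hypothetical bits_test.go, assuming both functions above live in the same package and "testing" is imported):

func BenchmarkBits(b *testing.B) {
    data := []byte{3, 255, 17, 129}
    for i := 0; i < b.N; i++ {
        bits(data)
    }
}

func BenchmarkBitsToBits(b *testing.B) {
    data := []byte{3, 255, 17, 129}
    for i := 0; i < b.N; i++ {
        bitsToBits(data)
    }
}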
Related
I've found a similar example in Python.
Essentially, say you have an array [1, 2, 3], which in binary would be [0b00000001, 0b00000010, 0b00000011]. What's the fastest way to bit-shift that into [0b00000000, 0b10000001, 0b00000001]?
Basically, bit-shifting the array as if it were one huge int. (The left-hand side can always be assumed to be 0.)
Unfortunately I haven't got any code, because I have no clue how to achieve this, sorry!
I think you could do it something like this:
func bitShift(array: [UInt8]) -> [UInt8] {
    var output = array
    var prevParity = false
    for i in 0..<output.count {
        // check parity of current value and store it
        let tempParity = (output[i] & 1) == 1
        // bitshift the current value to the right
        output[i] = output[i] >> 1
        if prevParity {
            // add on the first one if the previous value was odd
            output[i] = output[i] | 0b10000000
        }
        // store tempParity into prevParity for next loop
        prevParity = tempParity
    }
    return output
}
See Advanced Operators in the Swift book: https://docs.swift.org/swift-book/LanguageGuide/AdvancedOperators.html
(I do not know Swift!)
var bignum = [1, 2, 3]
var carry = 0
let signBit = 1 << 31 // or 1 << 63, or int.min
for i in 0..<bignum.count {
    var n = bignum[i]
    let nextCarry = (n & 1) == 1 ? signBit : 0
    n = (n >> 1) & ~signBit
    if carry != 0 {
        n = n | carry
    }
    bignum[i] = n
    carry = nextCarry
}
The sign bit must be set when the prior number was odd (ended with bit 1).
Note that >> on a signed type is an arithmetic shift right, conserving the sign, so the persisting sign bit must be masked off afterwards; a logical shift right (>>> in Java) would not need this.
The sign bit is the highest bit, but I believe Swift has both 32- and 64-bit ints.
There might be some constant, like Java's Integer.MIN_VALUE (Int.min?). One could also take a signed 1 and shift it left until the value becomes negative to find the sign bit.
In general, UInt64 would be best to use (>> then shifts in a 0 bit).
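For comparison, the same right-shift-with-carry can be sketched in Go on a []byte; unsigned bytes sidestep the sign-bit issue entirely (shiftRight is an illustrative name):

func shiftRight(bs []byte) []byte {
    out := make([]byte, len(bs))
    carry := byte(0)
    for i, b := range bs {
        out[i] = b>>1 | carry<<7 // bring in the low bit of the previous byte
        carry = b & 1            // remember the bit shifted out
    }
    return out
}

shiftRight([]byte{1, 2, 3}) yields [0b00000000, 0b10000001, 0b00000001], matching the question's expected output.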
Each byte slice represents a key.
I want to iterate from the lower key to the upper key.
Package: https://golang.org/pkg/bytes/
Suppose there are two byte slices:
lower := []byte
upper := []byte
How do I do something like this?
for i := lower; i < upper; i++ {
}
Example
lower := []byte{0,0,0,0,0,0}   // can be thought of as 0 in decimal
upper := []byte{0,0,0,0,0,255} // can be thought of as 255 in decimal
// iterate over all numbers in between lower and upper:
// {0,0,0,0,0,0} {0,0,0,0,0,1} {0,0,0,0,0,2} ... {0,0,0,0,0,255}
for i := 0; i <= 255; i++ {
}
// instead of converting to decimal, iterate using the byte slices
Alternatively,
how can I divide the range of the byte slices (upper - lower) into smaller ranges?
// e.g.
l := []byte{0,1}
r := []byte{1,255}
// break this into smaller ranges:
l  := []byte{0,1}
l2 := []byte{x1,y1}
...
r  := []byte{1,255}
The easiest way to do this is to simply interpret the bytes as a big-endian integer. Since there is no int48 type in Go (i.e. a six-byte integer), you first have to extend the byte slices with leading zeros until they fit into the next largest type, int64 or uint64. Then iterate with a standard for loop and reverse the decoding for each iteration:
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    lower := []byte{0, 0, 0, 0, 0, 0}
    lowerInt := binary.BigEndian.Uint64(append([]byte{0, 0}, lower...))

    upper := []byte{0, 0, 0, 0, 0, 255}
    upperInt := binary.BigEndian.Uint64(append([]byte{0, 0}, upper...))

    buf := make([]byte, 8)
    for i := lowerInt; i <= upperInt; i++ {
        binary.BigEndian.PutUint64(buf, i)
        fmt.Println(buf[2:])
    }
}
// Output:
// [0 0 0 0 0 0]
// [0 0 0 0 0 1]
// ...
// [0 0 0 0 0 254]
// [0 0 0 0 0 255]
Try it on the Playground: https://play.golang.org/p/86iN0V47nZi
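The alternative question, dividing (upper - lower) into smaller ranges, can then be answered on the integer representation as well. A sketch under the same setup (splitRange and n are names introduced here for illustration):

func splitRange(lower, upper, n uint64) [][2]uint64 {
    ranges := make([][2]uint64, 0, n)
    step := (upper - lower) / n
    start := lower
    for i := uint64(0); i < n; i++ {
        end := start + step
        if i == n-1 {
            end = upper // the last subrange absorbs the remainder
        }
        ranges = append(ranges, [2]uint64{start, end})
        start = end + 1
    }
    return ranges
}

Each start/end pair can be turned back into a byte slice with binary.BigEndian.PutUint64 as above.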
I've been doing code fights on codefights.com and I came upon the problem below. I figured out the problem on my own, but when I researched other people's solutions I found one that was much shorter than mine, and I can't seem to understand why they did what they did.
The question goes:
You are given an array of up to four non-negative integers, each less than 256.
Your task is to pack these integers into one number M in the following way:
The first element of the array occupies the first 8 bits of M;
The second element occupies next 8 bits, and so on.
Return the obtained integer M.
Note: the phrase "first bits of M" refers to the least significant bits of M - the right-most bits of an integer. For further clarification see the following example.
Example
For a = [24, 85, 0], the output should be
arrayPacking(a) = 21784.
An array [24, 85, 0] looks like [00011000, 01010101, 00000000] in binary.
After packing these into one number we get 00000000 01010101 00011000 (spaces are placed for convenience), which equals 21784.
Their answer was:
func arrayPacking(a []int) (sum int) {
    for i := range a {
        sum += a[len(a)-i-1] << uint((len(a)-i-1)*8)
    }
    return
}
How does this code arrive at the right shift amounts just by using intervals of 0, 8, 16, and so on? I've been researching bitwise operations a lot lately, but I still can't see why this works.
First, write the solution in Go. We convert little-endian, base-256 digits to a base-2 (binary) number. Shifting left by 8 bits multiplies by 256.
package main

import (
    "fmt"
)

func pack(digits []int) (number int) {
    // digits are little-endian, base-256 (1 << 8 == 256)
    for i, digit := range digits {
        number += digit << uint(i*8)
    }
    return number
}

func main() {
    a := []int{24, 85, 0}
    m := pack(a)
    fmt.Println(m)
}
Playground: https://play.golang.org/p/oo_n7CiAHwG
Output:
21784
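For intuition, the inverse operation recovers each digit with a right shift and an & 0xff mask (a sketch; unpack is a name introduced here):

func unpack(number, n int) (digits []int) {
    digits = make([]int, n)
    for i := range digits {
        digits[i] = (number >> uint(i*8)) & 0xff
    }
    return
}

unpack(21784, 3) yields [24 85 0] again.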
Now you should be able to figure out their ugly-looking answer:
func arrayPacking(a []int) (sum int) {
    for i := range a {
        sum += a[len(a)-i-1] << uint((len(a)-i-1)*8)
    }
    return
}
The bit-shifting by multiples of 8 is the same as multiplying by powers of 256, e.g. x << 0*8 == x * 256⁰, x << 1*8 == x * 256¹, x << 2*8 == x * 256², etc., so the code can be rewritten like this, using math.Pow:
func arrayPacking(a []int) (sum int) {
    for i := range a {
        sum += a[len(a)-i-1] * int(math.Pow(256, float64(len(a)-i-1)))
    }
    return
}
Or is your question why this sort of packing works?
So I have the following int array with 5 values:
int guess[5] = { 1, 4, 7, 3, 0 }; // Test guess
What I want to have in the end is a uint16, or actually 2 uint8 values, representing those values in binary. The MSB (bit 15) is used for a flag, which is irrelevant for now. In the end I want to transfer this message via a socket.
So with the values stated above I want to get the following message
(I will separate the different values with spaces to make it clearer):
X 000 011 111 100 001
Note that the values are placed from right to left.
The X is a flag, which is basically an XOR of all the bits, but I think I can figure that out for myself.
I am pretty new to C and want to know good approaches for the following tasks:
How to convert the values of the int array into binary?
How to write those binary values into the uint16?
How to split the resulting uint16 into 2 uint8s?
I am pretty much failing to find a good approach to this. I was thinking about using shifts in steps of 3 for the writing, but I am not sure how to get at those bits, maybe by doing & 0x2 or something?
Like I said, I am still pretty new to C and am happy for any help.
Bit-shift in each guess[] element:
int guess[5] = { 1, 4, 7, 3, 0 }; // Test guess
uint16_t y = 0;
unsigned i;
for (i = 0; i < 15; i += 3) {
    y |= (guess[i / 3] & 7) << i; // pack each 3-bit value, right to left
}
if (tbd()) { // set the X bit
    y |= 0x8000;
}
uint8_t lsbyte = (uint8_t) y;
uint8_t msbyte = (uint8_t) (y >> 8);
Use some bit shifting:
int guess[5] = { 1, 4, 7, 3, 0 }; // Test guess
int output = 0;
for (int i = 0; i < 5; ++i)
{
    output |= (guess[i] & 0x7) << (i * 3);
}
Using a pointer of the desired type should work fine:
for (int i = 0; i < 5; i++) {
    uint8_t *x = (uint8_t *) &guess[i]; // view the int's bytes in memory
}
But as someone else stated earlier, since your goal is to obtain a binary representation for use in networking, you may want to use the appropriate functions for the task.
Some architectures are little-endian, others are big-endian, while the network byte order standard is just one.
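For comparison, here is the same packing and a network-order (big-endian) split into two bytes, sketched in Go rather than C, with the test values from the question:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    guess := []uint16{1, 4, 7, 3, 0}
    var y uint16
    for i, g := range guess {
        y |= (g & 7) << (3 * i) // pack each 3-bit value, right to left
    }
    // split into two bytes in network byte order (big-endian)
    var buf [2]byte
    binary.BigEndian.PutUint16(buf[:], y)
    fmt.Printf("%016b -> %08b %08b\n", y, buf[0], buf[1])
}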
If I have a number that I am certain is a power of two, is there a way to get which power of two the number is? I have thought of this idea:
keep a count, shift the number right by 1, and increment the count until the number is 0. Is there another way, though, without keeping a counter?
Edit:
Here are some examples:
8 -> returns 3
16 -> returns 4
32 -> returns 5
The most elegant method is De Bruijn sequences. Here's a previous answer I gave to a similar question on how to use them to solve the problem:
Bit twiddling: which bit is set?
An often more practical approach is to use your CPU's built-in instruction for finding the first/last bit set.
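In Go, for instance, the math/bits package exposes exactly this: for a power of two x, the exponent is the number of trailing zeros. A minimal sketch:

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    for _, x := range []uint{8, 16, 32} {
        fmt.Println(bits.TrailingZeros(x)) // prints 3, 4, 5
    }
}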
You could use the log function in cmath...
double exponent = log(number)/log(2.0);
...and then cast it to an int afterwards.
If that number is called x, you can find it by computing log2f(x). The return value is a float.
You will need to include <math.h> in order to use log2f.
That method certainly would work. Another possible way is to eliminate half of the possibilities each time. Say you have an 8-bit number: 00010000.
Bitwise-AND your number with a mask where half of the bits are one and the other half are zero, say 00001111:
00010000 & 00001111 = 00000000
Now you know the set bit is not in the lower four bits. Repeat this with narrower masks until you don't get 0:
00010000 & 00110000 = 00010000
Then narrow it down to the single bit that is 1, whose position is your power of two.
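One way to realize this halving idea without testing bits one by one is the classic mask table, sketched here in Go (assuming x is a power of two; log2 is a name introduced for illustration):

func log2(x uint32) (exp int) {
    masks := []uint32{0xFFFF0000, 0xFF00FF00, 0xF0F0F0F0, 0xCCCCCCCC, 0xAAAAAAAA}
    shifts := []int{16, 8, 4, 2, 1}
    for i, m := range masks {
        // if the set bit lands in the masked half, that half contributes to the exponent
        if x&m != 0 {
            exp += shifts[i]
        }
    }
    return
}

log2(8), log2(16), and log2(32) return 3, 4, and 5.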
Use binary search instead of linear:
public void binarySearch() throws Exception {
int num = 0x40000;
int k = 0;
int shift = 16; // half of the size of the type (for in 16, etc)
int a = 0xffff; // lower half should be f's
while (shift != 0) {
if ((num & a) == 0) {
num = num >>> shift;
k += shift;
shift >>= 1;
} else {
shift >>= 1;
}
a = a >>> shift;
}
System.out.println(k);
}
If you're doing a for loop like I am, one method is to raise 2 to the power of the loop counter before the comparison:
for (var i = 1; Math.pow(2, i) <= 1048576; i++) {
    // iterates every power of two until 2^20
}
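A shift-based variant of the same loop, sketched in Go, avoids the repeated floating-point Math.pow call:

for p := 1; p <= 1<<20; p <<= 1 {
    // visits 1, 2, 4, ..., up to 2^20 (1048576)
}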