How to iterate over two byte slices - arrays

Each byte slice represents a key, and I want to iterate from the lower key to the upper key.
Relevant package: https://golang.org/pkg/bytes/
Suppose there are two byte slices:
lower := []byte{...}
upper := []byte{...}
How do I do the equivalent of this?
for i := lower; i < upper; i++ {
}
Example:
lower := []byte{0, 0, 0, 0, 0, 0}   // can be thought of as 0 in decimal
upper := []byte{0, 0, 0, 0, 0, 255} // can be thought of as 255 in decimal
// iterate over all numbers between lower and upper:
// {0,0,0,0,0,0} {0,0,0,0,0,1} ... {0,0,0,0,0,254} {0,0,0,0,0,255}
for i := 0; i <= 255; i++ {
}
// instead of converting to decimal, iterate using the byte slices directly
Alternatively, how can I divide the range of the byte slices (upper - lower) into smaller ranges?
e.g.
l := []byte{0, 1}
r := []byte{1, 255}
break this into smaller ranges:
l  := []byte{0, 1}
l2 := []byte{x1, y1}
...
r  := []byte{1, 255}

The easiest way to do this is to simply interpret the bytes as a big-endian integer. Since there is no int48 type in Go (i.e. a six-byte integer), you first have to extend the byte slices with leading zeros until they fit into the next larger type, int64 or uint64. Then iterate with a standard for loop and reverse the conversion for each iteration:
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	lower := []byte{0, 0, 0, 0, 0, 0}
	lowerInt := binary.BigEndian.Uint64(append([]byte{0, 0}, lower...))

	upper := []byte{0, 0, 0, 0, 0, 255}
	upperInt := binary.BigEndian.Uint64(append([]byte{0, 0}, upper...))

	buf := make([]byte, 8)
	for i := lowerInt; i <= upperInt; i++ {
		binary.BigEndian.PutUint64(buf, i)
		fmt.Println(buf[2:])
	}
}
// Output:
// [0 0 0 0 0 0]
// [0 0 0 0 0 1]
// ...
// [0 0 0 0 0 254]
// [0 0 0 0 0 255]
Try it on the Playground: https://play.golang.org/p/86iN0V47nZi
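The second half of the question (splitting the range upper - lower into smaller sub-ranges) can be handled with the same uint64 round-trip. Below is a minimal sketch, assuming six-byte keys and an equal-size split; splitRange and its parameters are made-up names, since the question leaves the chunking policy open:
package main

import (
	"encoding/binary"
	"fmt"
)

// splitRange returns n+1 six-byte boundary keys dividing [lower, upper]
// into n roughly equal sub-ranges (an assumed policy, not from the question).
func splitRange(lower, upper []byte, n uint64) [][]byte {
	lo := binary.BigEndian.Uint64(append([]byte{0, 0}, lower...))
	hi := binary.BigEndian.Uint64(append([]byte{0, 0}, upper...))
	step := (hi - lo) / n
	bounds := make([][]byte, 0, n+1)
	for i := uint64(0); i < n; i++ {
		buf := make([]byte, 8)
		binary.BigEndian.PutUint64(buf, lo+i*step)
		bounds = append(bounds, buf[2:])
	}
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, hi) // the final boundary is upper itself
	bounds = append(bounds, buf[2:])
	return bounds
}

func main() {
	l := []byte{0, 0, 0, 0, 0, 0}
	r := []byte{0, 0, 0, 3, 255, 255}
	for _, b := range splitRange(l, r, 4) {
		fmt.Println(b)
	}
}
Each adjacent pair of printed keys then bounds one sub-range that can be iterated exactly as above.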

Related

CAPL: int to byte and binary arrays

I'm trying to understand how binary arrays work. Here is a CAPL example that converts a decimal number into a binary array:
byte binaryArray[16];

binary(int number)
{
  int index;
  index = 0;
  for (; number != 0;)
  {
    binaryArray[index++] = number % 10;
    number = number / 10;
  }
}
If the input is 1234, the output is apparently 11010010
If I'm not mistaken, the for loop runs 4 times:
1234 mod 10 -> 4
123 mod 10 -> 3
12 mod 10 -> 2
1 mod 10 -> 1
If we weren't dealing with a binary array, it would look like this: { 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }. But it is a binary array, and the "conversion" should happen here: binaryArray[index++] = number % 10; (number is a 16-bit signed integer, binaryArray is an array of 8-bit unsigned bytes).
So how does one convert (by hand) an int to a byte?
I find it very hard to "guess" what your actual intent is, and the given example does not really fit what I think you want to do. But I will give it a try.
As you have already explained yourself, your example code repeatedly cuts off the least significant base-10 digit (i.e. the ones place) of the given integer number and stores it sequentially into a byte array.
If the input is 1234, the output is apparently 11010010
This statement is false. Currently, if the input to your given function is 1234, the "output" (i.e. the binaryArray contents) is
binaryArray = { 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
Furthermore, the actual binary representation (assuming "MSB0 first/left" bit ordering and big-endian byte order) of your input number as a byte array is
{ 0b00000100, 0b11010010 }
Because of your (wrong) "apparently" statement and your final question, I guess what you really want to achieve, and what you're actually asking for, is serialization of an integer into a byte array. This first seems quite simple, but there are some pitfalls you can run into with multi-byte values, especially when working together with others (e.g. endianness and bit ordering).
Assuming you have a 16-bit integer, you could store the first 8 bits (one byte) in binaryArray[0], then shift your input integer right by 8 bits (since you have already stored those), and finally store the remaining 8 bits in binaryArray[1].
Given your example input of 1234 you will end up with the array
binaryArray = { 210, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
which is equivalent to its "binary" representation:
binaryArray = { 0b11010010, 0b00000100, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000 }
Notice that this time the byte order (i.e. endianness) is reversed (little-endian), since we fill the array "bottom-up" while the input's binary value is read "right-to-left".
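Here is a minimal sketch of that byte-wise serialization (in Go rather than CAPL, purely for illustration; serialize is a hypothetical helper):
package main

import "fmt"

// serialize stores a 16-bit integer into a 16-cell byte array little-endian:
// the low byte first, then the value shifted right by 8 bits for the high byte.
func serialize(number uint16) [16]byte {
	var binaryArray [16]byte
	binaryArray[0] = byte(number & 0xFF) // 1234 = 0x04D2 -> 0xD2 = 210
	binaryArray[1] = byte(number >> 8)   // 0x04 = 4
	return binaryArray
}

func main() {
	fmt.Println(serialize(1234)) // [210 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
}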
But since you have this 16-cell byte array, you might instead want to convert the integer number into an array representing its binary format, e.g. binaryArray = { 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0 }.
You could achieve this easily by substituting the "modulo 10" / "divide by 10" with "modulo 2" / "divide by 2" in your code - at least for unsigned integers. For signed integers it should also work that way™, but you may get negative values from the modulo (and of course from the division by 2, since the dividend is negative and the divisor is positive).
So to make this work without worrying about whether a number is signed or not, just grab one bit after the other and right-shift the input until it is 0, while decrementing your array index, i.e. filling the array "top-down" (since MSB0 is first/left):
byte binaryArray[16];

binary(int number)
{
  int index;
  index = 15;
  for (; number != 0;)
  {
    binaryArray[index--] = number & 0x01;
    number = number >> 1;
  }
}
Side note: the runtime (counting each instruction as a single unit of effort) is the same as with "modulo/divide by 2", since right-shifting by one equals dividing by 2. Actually it is even slightly better, since binary AND (&) is cheaper than modulo (%).
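If you want to convince yourself of that equivalence, a trivial exhaustive check (again in Go, as a quick illustration) confirms it for all unsigned 8-bit values:
package main

import "fmt"

func main() {
	for i := 0; i < 256; i++ {
		x := uint8(i)
		if x>>1 != x/2 || x&0x01 != x%2 {
			fmt.Println("mismatch at", x) // never prints
		}
	}
	fmt.Println("x>>1 == x/2 and x&0x01 == x%2 for all uint8 values")
}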
But keep track of bit ordering and endianness for these kinds of conversions.

How to convert NSDecimalNumber to byte array if NSDecimalNumber is bigger than Uint64?

I want to convert an NSDecimalNumber to a byte array. Also, I do not want to use any library for that, such as BigInt, BInt, etc.
I tried this one:
static func toByteArray<T>(_ value: T) -> [UInt8] {
    var value = value
    return withUnsafeBytes(of: &value) { Array($0) }
}
let value = NSDecimalNumber(string: "...")
let bytes = toByteArray(value.uint64Value)
If the number is not bigger than UInt64.max, it works great. But what if it is? How can we convert it then?
The problem is the use of uint64Value, which obviously cannot represent any value greater than UInt64.max, and your example, 59,785,897,542,892,656,787,456, is larger than that.
If you want to grab the byte representation of the 128-bit mantissa, you can use the _mantissa tuple of UInt16 words of Decimal and convert those words to bytes. E.g.:
extension Decimal {
    var byteArray: [UInt8] {
        return [_mantissa.0,
                _mantissa.1,
                _mantissa.2,
                _mantissa.3,
                _mantissa.4,
                _mantissa.5,
                _mantissa.6,
                _mantissa.7]
            .flatMap { [UInt8($0 & 0xff), UInt8($0 >> 8)] }
    }
}
And
if let foo = Decimal(string: "59785897542892656787456") {
    print(foo.byteArray)
}
Returning:
[0, 0, 0, 0, 0, 0, 0, 0, 169, 12, 0, 0, 0, 0, 0, 0]
This, admittedly, only defers the problem: it breaks the 64-bit limit of your uint64Value approach but is still constrained by the inherent 128-bit limit of NSDecimalNumber/Decimal. To capture numbers greater than 128 bits, you'd need a completely different representation.
NB: This also assumes that the exponent is 0. If, however, you had some large number, e.g. 4.2e101 (i.e. 4.2 × 10^101), the exponent will be 100 and the mantissa will simply be 42, which I bet is probably not what you want in your byte array. Then again, this is an example of a number that is too large to represent as a single 128-bit integer anyway:
if let foo = Decimal(string: "4.2e101") {
    print(foo.byteArray)
    print(foo.exponent)
}
Yielding:
[42, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
100
You are passing value.uint64Value, not value, so the number will be wrapped modulo 2^64 to fit the NSDecimalNumber into the UInt64 format.
Passing "18446744073709551616" (the string corresponding to UInt64.max + 1) is equivalent to passing "0"; passing "18446744073709551617" (the string corresponding to UInt64.max + 2) is equivalent to passing "1".
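The same modulo-2^64 wrap-around can be demonstrated with plain unsigned integer arithmetic; a quick Go illustration (Go's uint64 wraps the same way):
package main

import "fmt"

func main() {
	var max uint64 = 18446744073709551615 // UInt64.max
	fmt.Println(max + 1) // 0  (UInt64.max + 1 wraps to 0)
	fmt.Println(max + 2) // 1  (UInt64.max + 2 wraps to 1)
}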

Extract bits into an int slice from a byte slice

I have the following byte slice from which I need to extract bits and place them in a []int, as I intend to fetch individual bit values later. I am having a hard time figuring out how to do that.
Below is my code:
data := []byte{3, 255} // binary representation of 3 and 255: 00000011 11111111
What I need is a slice of bits --> [0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1]
What I tried
I tried converting the byte slice to a uint16 with BigEndian and then using strconv.FormatUint, but that fails with the error panic: runtime error: index out of range.
I saw many examples that simply output the bit representation of a number using fmt.Printf, but that is not useful for me, as I need an int slice for further bit-value access.
Do I need to use bit-shift operators here? Any help will be greatly appreciated.
One way is to loop over the bytes, and use a second, inner loop to shift each byte value bit by bit, testing the bits with a bitmask and adding each result to the output slice.
Here's an implementation of it:
func bits(bs []byte) []int {
	r := make([]int, len(bs)*8)
	for i, b := range bs {
		for j := 0; j < 8; j++ {
			r[i*8+j] = int(b >> uint(7-j) & 0x01)
		}
	}
	return r
}
Testing it:
fmt.Println(bits([]byte{3, 255}))
Output (try it on the Go Playground):
[0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1]
Using the math/bits package provides a fairly straightforward alternative.
func bitsToBits(data []byte) (st []int) {
	st = make([]int, len(data)*8) // performance x 2 as no append occurs
	for i, d := range data {
		for j := 0; j < 8; j++ {
			if bits.LeadingZeros8(d) == 0 {
				// no leading 0 means the top bit is a 1
				st[i*8+j] = 1
			} else {
				st[i*8+j] = 0
			}
			d = d << 1
		}
	}
	return
}
Performance is comparable to similar solutions.
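For reference, calling it with the same input as the first solution (and assuming math/bits is imported) yields the identical slice:
fmt.Println(bitsToBits([]byte{3, 255}))
// [0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1]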

Bitwise bit shifting and why this solution works

I've been doing code fights on codefights.com and I came upon the problem below. I have figured it out on my own, but when I researched other people's solutions I found one that was much shorter than mine, and I can't seem to understand why they did what they did.
The question goes:
You are given an array of up to four non-negative integers, each less than 256.
Your task is to pack these integers into one number M in the following way:
The first element of the array occupies the first 8 bits of M;
The second element occupies next 8 bits, and so on.
Return the obtained integer M.
Note: the phrase "first bits of M" refers to the least significant bits of M - the right-most bits of an integer. For further clarification see the following example.
Example
For a = [24, 85, 0], the output should be
arrayPacking(a) = 21784.
An array [24, 85, 0] looks like [00011000, 01010101, 00000000] in binary.
After packing these into one number we get 00000000 01010101 00011000 (spaces added for convenience), which equals 21784.
Their answer was:
func arrayPacking(a []int) (sum int) {
	for i := range a {
		sum += a[len(a)-i-1] << uint((len(a)-i-1)*8)
	}
	return
}
How does this code produce the right amount of shift just by using intervals of 0, 8, 16, etc.? I've been researching bitwise operations a lot lately, but I still can't see why this works.
First, write the solution in Go. We convert little-endian, base-256 digits to a base-2 (binary) number. Shifting left by 8 bits multiplies by 256.
package main

import (
	"fmt"
)

func pack(digits []int) (number int) {
	// digits are little-endian, base-256 (1 << 8 == 256)
	for i, digit := range digits {
		number += digit << uint(i*8)
	}
	return number
}

func main() {
	a := []int{24, 85, 0}
	m := pack(a)
	fmt.Println(m)
}
Playground: https://play.golang.org/p/oo_n7CiAHwG
Output:
21784
Now you should be able to figure out their ugly-looking answer:
func arrayPacking(a []int) (sum int) {
	for i := range a {
		sum += a[len(a)-i-1] << uint((len(a)-i-1)*8)
	}
	return
}
The bit-shifting by multiples of 8 is the same as multiplying by powers of 256, e.g. x << 0*8 == x * 256⁰, x << 1*8 == x * 256¹, x << 2*8 == x * 256², etc., so the code can be rewritten like this, using math.Pow:
func arrayPacking(a []int) (sum int) {
	for i := range a {
		// math.Pow works on float64, so the exponent must be converted
		sum += a[len(a)-i-1] * int(math.Pow(256, float64(len(a)-i-1)))
	}
	return
}
Or is your question why this sort of packing works?

How to insert zeros every x bits

I represent a square of 1s and 0s with an integer type variable. Here's what it might look like:
1 1 1 0
0 1 0 0
0 0 0 0
0 0 0 0
The corresponding integer is coded from the bottom-right corner to the top-left, and the top-left corner corresponds to 2^0. The integer representation of this grid is 0000 0000 0010 0111 (binary) = 39 (decimal).
The grid is always a square with known width. In my code, I need to increment the width by 1 a lot of times, thus changing the map to:
1 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
Which is equivalent to inserting a 0 every x = width bits (width = 4 in the example). Overflows can't happen because the size of the integer is much larger than required.
In C, how can I insert a zero every x bits?
Edit 1: since this operation will happen many times on numerous grids, loops are to be avoided.
Edit 2: the reason I use this packing is that I do bitwise operations between grids to search for combinations.
Look at the problem the way you would look at adding a 0 at every fourth position in a decimal number. What you would do in decimal is:
1 513 650 -> 105 130 650
You can see that you are just preserving the digits below 10³ and multiplying the rest by 10, then preserving the digits below 10⁷ (not 10⁶, because you already multiplied) and multiplying the rest by 10, and so on.
It works the same way in binary.
#include <iostream>
#include <cmath>

int power(int x, int n) {
	return n == 1 ? x : x * power(x, n - 1);
}

int main(int argc, char const *argv[])
{
	int n = 4;     // insert a 0 every n bits
	int grid = 39; // the example grid from the question
	int grid_len = std::floor(std::log2(grid)); // position of the last 1, i.e. how far we need to insert 0s

	for (int steps = 0; steps * n < grid_len; ++steps)
	{
		int preserve = grid % power(2, n * (steps + 1));
		grid = (grid - preserve) * 2;
		grid += preserve;
	}
	std::cout << grid << std::endl;
	return 0;
}
The code prints 71, which in binary is 1000111, so it works on the example you provided.
