I'm trying to understand how binary arrays work. Here is a CAPL example that converts a decimal number into a binary array:
byte binaryArray[16];

binary(int number)
{
  int index;
  index = 0;
  for (; number != 0; )
  {
    binaryArray[index++] = number % 10;
    number = number / 10;
  }
}
If the input is 1234, the output is apparently 11010010
If I'm not mistaken, the for loop runs 4 times:
1234 mod 10 -> 4
123 mod 10 -> 3
12 mod 10 -> 2
1 mod 10 -> 1
If we weren't dealing with a binary array, it would look like this: { 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }. But it is a binary array, and the "conversion" should happen here: binaryArray[index++] = number % 10; (number is a 16-bit signed integer, binaryArray is an array of 8-bit unsigned bytes).
So how does one convert (by hand) an int to a byte?
It's hard to guess what your actual intent is, and the given example doesn't really fit what I think you want to do. But I'll give it a try.
As you have already explained yourself, your example code repeatedly cuts off the least significant base-10 digit (i.e. the ones place) of the given integer number and stores the digits sequentially in a byte array.
If the input is 1234, the output is apparently 11010010
This statement is false. Currently, if the input for your given function is 1234, the "output" (i.e. binaryArray contents) is
binaryArray = { 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
Furthermore, the actual binary representation (assuming "MSB0 first/left" and big-endian) of your input number as a byte array is
{ 0b00000100, 0b11010010 }
Because of your "apparently" (wrong) statement and your final question, I guess what you actually want to achieve, and what you're really asking for, is the serialization of an integer into a byte array. That seems quite simple at first, but there are some pitfalls you can run into with multi-byte values, especially when you work together with others (e.g. endianness and bit ordering).
Assuming you have a 16-bit integer, you could store the lowest 8 bits (one byte) in binaryArray[0] and then shift your input integer right by 8 bits (since you already stored those). Now you can store the remaining 8 bits in binaryArray[1].
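A minimal sketch of that two-byte serialization (the function name serialize is mine; CAPL's byte is an unsigned 8-bit value):

serialize(int number)
{
  binaryArray[0] = number & 0xFF;        // low byte first (little-endian)
  binaryArray[1] = (number >> 8) & 0xFF; // then the high byte
}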
Given your example input of 1234 you will end up with the array
binaryArray = { 210, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
which is equivalent to its "binary" representation:
binaryArray = { 0b11010010, 0b00000100, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000 }
Notice that this time the byte order (i.e. endianness) is reversed (little-endian), since we fill the array "bottom-up" while the input's binary representation is read right-to-left.
But since you have this 16-cell byte array, you might instead want to convert the integer number into an array representing its binary format, e.g. binaryArray = { 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0 }.
You could achieve this easily by substituting the "modulo 10" / "divide by 10" with "modulo 2" / "divide by 2" in your code, at least for unsigned integers. For signed integers it should also work that way™, but you may get negative values from the modulo (and of course from the division by 2, since the dividend is negative and the divisor is positive).
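For illustration, a minimal sketch of that modulo-2 variant for non-negative input (the name binaryMod2 is mine; like your original, it fills the array LSB-first starting at index 0):

binaryMod2(int number)
{
  int index;
  index = 0;
  for (; number != 0; )
  {
    binaryArray[index++] = number % 2; // lowest remaining binary digit
    number = number / 2;
  }
}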
So to make this work without thinking about whether a number is signed or not, just grab one single bit after the other and right-shift the input until it is 0, while decrementing your array's index, i.e. filling it "top-down" (since MSB0 is first/left).
byte binaryArray[16];

binary(int number)
{
  int index;
  index = 15;
  // Grab one bit at a time until nothing is left; the index guard also
  // stops the loop if an arithmetic right shift keeps the sign bit alive
  // for negative input.
  for (; number != 0 && index >= 0; )
  {
    binaryArray[index--] = number & 0x01; // lowest bit of the remainder
    number = number >> 1;
  }
}
Side note: the runtime (counting each operation as one instruction) is the same as with "modulo/divide by 2", since right-shifting by one equals dividing by 2. Actually, it is even a bit better, since a binary AND (&) is "cheaper" than a modulo (%).
But keep track of bit ordering and endianness for this kind of conversion.
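As an aside, if you ever port such a conversion to plain C on a host machine, a tiny probe (my own sketch, reusing the example value 1234 = 0x04D2) shows the host's byte order:

#include <stdio.h>

int main(void)
{
    unsigned short v = 0x04D2;               /* 1234 */
    unsigned char *p = (unsigned char *)&v;  /* view the bytes in memory */
    printf("first byte: 0x%02X (%s-endian)\n",
           p[0], p[0] == 0xD2 ? "little" : "big");
    return 0;
}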
Related
I want to convert an NSDecimalNumber to a byte array. Also, I do not want to use any library for that like BigInt, BInt, etc.
I tried this one:
static func toByteArray<T>(_ value: T) -> [UInt8] {
    var value = value
    return withUnsafeBytes(of: &value) { Array($0) }
}
let value = NSDecimalNumber(string: "...")
let bytes = toByteArray(value.uint64Value)
If the number is not bigger than UInt64.max, it works great. But what if it is bigger; how can we convert it then?
The problem is the use of uint64Value, which cannot represent any value greater than UInt64.max; your example, 59,785,897,542,892,656,787,456, is larger than that.
If you want to grab the byte representation of the 128-bit mantissa, you can use the _mantissa tuple of UInt16 words of Decimal and convert them to bytes. E.g.
extension Decimal {
    var byteArray: [UInt8] {
        return [_mantissa.0,
                _mantissa.1,
                _mantissa.2,
                _mantissa.3,
                _mantissa.4,
                _mantissa.5,
                _mantissa.6,
                _mantissa.7]
            .flatMap { [UInt8($0 & 0xff), UInt8($0 >> 8)] }
    }
}
And
if let foo = Decimal(string: "59785897542892656787456") {
    print(foo.byteArray)
}
Returning:
[0, 0, 0, 0, 0, 0, 0, 0, 169, 12, 0, 0, 0, 0, 0, 0]
This, admittedly, only defers the problem, breaking the 64-bit limit of your uint64Value approach, but is still constrained to the inherent 128-bit limit of NSDecimalNumber/Decimal. To capture numbers greater than 128 bits, you'd need a completely different representation.
NB: This also assumes that the exponent is 0. If, however, you had some large number, e.g. 4.2e101 (4.2 × 10^101), the exponent will be 100 and the mantissa will simply be 42, which is probably not what you want in your byte array. Then again, this is an example of a number that is too large to represent as a single 128-bit integer anyway:
if let foo = Decimal(string: "4.2e101") {
    print(foo.byteArray)
    print(foo.exponent)
}
Yielding:
[42, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
100
You are passing value.uint64Value and not value, so the number will be wrapped around (reduced modulo 2^64) to fit the NSDecimalNumber into the UInt64 format.
Passing "18446744073709551616" (the string corresponding to UInt64.max + 1) is equivalent to passing "0". Passing "18446744073709551617" (UInt64.max + 2) is equivalent to passing "1".
I am searching for an algorithm to determine which single bit is set (there is always exactly one bit set) within a 16-bit (uint16_t) variable like 0x200. I found a very nice, short and effective piece of code that does this for 8-bit variables using a lookup table, like this:
int lookup[] = {7, 0, 5, 1, 6, 4, 3, 2};

int getBitPosition(unsigned char b) {
    return lookup[((b * 0x1D) >> 4) & 0x7];
}
How can I expand this to use a 16-bit value as input?
What about:
int bitset(unsigned short s)
{
    int lookup[] = {16, 1, 11, 2, 14, 12, 3, 6, 15, 10, 13, 5, 9, 4, 8, 7};
    return lookup[(((int) s * 0x6F28) >> 14) & 15];
}
Test:
#include <stdio.h>

int main(void)
{
    int i, j;
    for (i = j = 1; i <= 16; ++i, j <<= 1)
    {
        printf("for %5d, the %3dth bit is set\n", j, bitset(j));
    }
    return 0;
}
Gives:
for 1, the 1th bit is set
for 2, the 2th bit is set
for 4, the 3th bit is set
for 8, the 4th bit is set
for 16, the 5th bit is set
for 32, the 6th bit is set
for 64, the 7th bit is set
for 128, the 8th bit is set
for 256, the 9th bit is set
for 512, the 10th bit is set
for 1024, the 11th bit is set
for 2048, the 12th bit is set
for 4096, the 13th bit is set
for 8192, the 14th bit is set
for 16384, the 15th bit is set
for 32768, the 16th bit is set
Explanation
The first algorithm (8 bits) works as follows:
Build a unique number from the powers of two (1, 2, 4, ...).
The incoming number s can be decomposed into bits:
s7 s6 s5 s4 s3 s2 s1 s0; for instance, s == 4 means s2 = 1, all others = 0.
A single number is built by summing the differently shifted copies of s.
This is what the * operation does (0x1D is 11101b, i.e. the sum of the shifts by 0, 2, 3 and 4):
            |s7 s6 s5 s4| s3 s2 s1 s0    // * 1b
      s7 s6 |s5 s4 s3 s2| s1 s0  0  0    // * 100b
   s7 s6 s5 |s4 s3 s2 s1| s0  0  0  0    // * 1000b
s7 s6 s5 s4 |s3 s2 s1 s0|  0  0  0  0    // * 10000b
Look at the numbers between the pipes: their sum is unique for each power of two.
if s is 1, the sum (between the pipes) is 0b + 0b + 0b + 1b = 1
if s is 2, the sum (between the pipes) is 0b + 0b + 1b + 10b = 3
if s is 4, the sum (between the pipes) is 0b + 1b + 10b + 100b = 7
if s is 8, the sum (between the pipes) is 0b + 10b + 100b + 1000b = 14
and so on.
The >> and & operations then select those middle bits (the 8-bit code keeps only 3 of the 4 bits between the pipes, via & 0x7, which is still enough to keep the values unique).
Finally, a simple lookup table maps the unique number to the position of the set bit.
The 16-bit algorithm is a generalisation of this; the difficulty is finding a multiplier that produces unique numbers for all the powers of two.
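If you're curious how such a multiplier can be found, a brute-force search is enough at this size. The sketch below is my own illustration: for each candidate m it checks that all 16 powers of two map to distinct indices under (s * m >> 14) & 15 (the answer's 0x6F28 is one value with this property):

#include <stdio.h>

int main(void)
{
    long m;
    for (m = 1; m < 0x10000; m++)
    {
        int seen = 0, k, ok = 1;
        for (k = 0; k < 16 && ok; k++)
        {
            int idx = (int)(((1L << k) * m >> 14) & 15);
            if (seen & (1 << idx))
                ok = 0;            /* two powers of two collide */
            else
                seen |= 1 << idx;
        }
        if (ok)
        {
            printf("multiplier 0x%04lX works\n", m);
            break;                 /* report only the first hit */
        }
    }
    return 0;
}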
I have access to an instrument that runs a C-style scripting language on it. It can declare and use variables or arrays of char, int and double, but not float. It supports the standard C logic, addition, subtraction, division and multiplication operators. It doesn't have useful functions such as sizeof() and doesn't have any of the bit-shift operators <<, >>.
I am trying to get it to send a double value over two binary output ports. Each can be set High or low.
I was thinking that I should do this by masking the double value bit by bit using the bitwise AND operator; however, I CAN'T do this because the bit-shift operators don't exist. I would then use one output port as a clock and the second as the synced data line.
For example using an input byte with value = 6 (0000 0110). The data would be output like below with X denoting the read value on the 'clock' down stroke:
*Clock, Input
* 0, 0
* 1, 0 X
* 0, 0
* 1, 1 X
* 0, 0
* 1, 1 X
* 0, 0
* 1, 0 X
* 0, 0
* 1, 0 X
* 0, 0
* 1, 0 X
* 0, 0
* 1, 0 X
* 0, 0
* 1, 0 X
* 0, 0
So I need a way of iterating through the double bit by bit (I'm not sure how many bits the instrument uses for its doubles) and setting the output flag to each bit's value, but this can't be done with bit shifts because I don't have them.
Shifting a value is equivalent to multiplying/dividing by two (using integer math):
a / 2 is equivalent to a >> 1
a * 2 is equivalent to a << 1
You need to check that the scripting language does integer math (or use floor(), int(), trunc() or whatever the language offers).
Also be careful with overflow: if the scripting language uses floats instead of ints to represent numbers, you may see strange behaviour with big numbers.
Another caveat concerns signedness: if you have to deal with negative numbers, emulating a right shift is more complicated, because integer division truncates toward zero while an arithmetic right shift rounds toward negative infinity.
Can you run a couple of tests to check the size of integer numbers? It will surely help you avoid problems.
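A minimal sketch of that multiply/divide emulation in C, for non-negative values (the function names are mine):

int shift_right(int x, int n)
{
    int i;
    for (i = 0; i < n; i++)
        x = x / 2;   /* each integer division by 2 is one right shift */
    return x;
}

int shift_left(int x, int n)
{
    int i;
    for (i = 0; i < n; i++)
        x = x * 2;   /* each multiplication by 2 is one left shift */
    return x;
}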
A left shift by 1 is equal to a multiplication by 2; a right shift by 1 is equal to an (integer) division by 2. So a left shift by 3 is equal to a multiplication by 8 (because 8 is 2^3).
#include <stdio.h>

int main() {
    int a = 17 << 4;
    int b = 17 * 16;
    if (a == b) {
        printf("%i\n", b);
    } else {
        puts("false");
    }
}
Output is
272
Assuming the measurements you are trying to retrieve are truly doubles: if you know the allowable range of the measurement, I would multiply them by a factor of 0xFFFFFFFF / MAX_RANGE to give you a value between 0 and the 32-bit maximum.
Then you can do
long int value = measurement * FACTOR;  /* measurement is the double reading */
int i;
for (i = 0; i < 32; i++) {
    long int nextval = value / 2;
    char bit = value - (nextval * 2);   /* the least significant bit */
    /* send bit here (bits go out LSB first) */
    value = nextval;
}
This will be better than trying to work with the bitwise representation of floating-point values without masks and shifting.
Bit shifts are just like multiplication or division, so the simplest solution to shift left/right by N bits is just to multiply/divide by 2^N:
const unsigned int SHIFT_BY[] = {
    1U, 2U, 4U, 8U, 16U, 32U, 64U, 128U, 256U, 512U,
    1024U, 2048U, 4096U, 8192U, 16384U, 32768U, 65536U, 131072U, 262144U, 524288U,
    1048576U, 2097152U, 4194304U, 8388608U, 16777216U, 33554432U, 67108864U,
    134217728U, 268435456U, 536870912U, 1073741824U, 2147483648U
};

unsigned int lshift(unsigned int x, unsigned int numshift)
{
    return x * SHIFT_BY[numshift];
}

unsigned int rshift(unsigned int x, unsigned int numshift)
{
    return x / SHIFT_BY[numshift];
}
But that won't be efficient for small shift steps or on architectures without division/multiplication instructions. In that case, just add the number to itself, since that is equivalent to multiplying by 2, which may improve performance a bit:
unsigned int lshift(unsigned int x, unsigned int numshift)
{
    unsigned int i;
    for (i = 0; i < numshift; i++)
        x += x;
    return x;
}
A right shift is a lot trickier to implement with only addition/subtraction. If you don't want to use the slow division, you can use a lookup table:
#include <assert.h>

unsigned int rshift1(unsigned int x)
{
    const unsigned int RSHIFT1[] = { 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7 };
    assert(x < 16);
    return RSHIFT1[x];
}
Above is just a simple 4-bit LUT for shifting by 1. You can use a bigger LUT if you want/can. You can also implement shifting by more than 1 as a 2D array, or by shifting by 1 multiple times. The same method can also be applied to left shifts.
I want to compare two int8x8_t values.
From http://gcc.gnu.org/onlinedocs/gcc/ARM-NEON-Intrinsics.html
we can get the description of vclt_s8, but it does not give many details:
`uint8x8_t vclt_s8 (int8x8_t, int8x8_t)`
Form of expected instruction(s): vcgt.s8 d0, d0, d0
The return value is uint8x8_t, which confuses me, because I cannot use if (vclt_s8(a, b)) to decide whether the first is smaller.
So suppose we have two vectors, int8x8_t a and int8x8_t b; how do we know whether a is smaller?
You may find more details in the official ARM documentation for NEON.
The generic description for all comparison functions states:
If the comparison is true for a lane, the result in that lane is all bits set to one. If the comparison is false for a lane, all bits are set to zero. The return type is an unsigned integer type.
Suppose you have (this is pseudo-code, with [] denoting the 8 lanes of each vector):
int8x8_t a = [-1, -1, -1, -1, 1, 1, 1, 1];
int8x8_t b = [ 0, 0, 0, 0, 0, 0, 0, 0];
uint8x8_t c = vclt_s8(a, b);
You'll get:
c = [255, 255, 255, 255, 0, 0, 0, 0];
The first 4 values of a are less than the first 4 values of b: all bits of the first 4 values of c are set to 1, making them 255.
In the same way, the last 4 values of a are greater: all bits of the last 4 values of c are set to 0, making them 0.
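If you need a single scalar yes/no out of the per-lane mask (e.g. "is every lane of a smaller?"), you have to reduce the mask yourself. A minimal sketch using standard NEON intrinsics (my own example, not from the question):

#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    const int8_t av[8] = {-1, -1, -1, -1, 1, 1, 1, 1};
    const int8_t bv[8] = { 0,  0,  0,  0, 0, 0, 0, 0};
    int8x8_t a = vld1_s8(av);
    int8x8_t b = vld1_s8(bv);

    /* Per-lane mask: 0xFF where a < b, 0x00 elsewhere. */
    uint8x8_t mask = vclt_s8(a, b);

    /* Fold the 8 lanes with pairwise minimum; after three folds, lane 0
       holds the minimum of all lanes, which is non-zero only if EVERY
       lane passed the comparison. */
    uint8x8_t m = vpmin_u8(mask, mask);
    m = vpmin_u8(m, m);
    m = vpmin_u8(m, m);

    int all_less = vget_lane_u8(m, 0) != 0;
    printf("all lanes a < b: %d\n", all_less);  /* prints 0 here */
    return 0;
}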
Hope this helps!
vclt_s8 - Vector Compare Less Than, where each vector's datatype is signed char.
vclt.datatype dest, reg1, reg2
reg1 - a signed char vector carrying 8 elements of that datatype.
reg2 - must be of the same datatype as the one above.
It compares reg1 with reg2 element by element and places all ones in the corresponding element of dest if the condition is true; otherwise it places all zeros.
I have a char[10] array that contains hex characters, and I'd like to end up with a byte[5] array of the values of those characters.
In general, how would I go from a 2-character hex value ("30") to a single byte with the decimal value 48?
Language is actually Arduino, but basic C would be best.
1 byte = 8 bits = 2 hex digits.
What you can do is left-shift the value of the first hex digit by 4 places (or, equivalently, multiply it by 16) and then add the value of the second hex digit (see the sketch after the worked example below).
So to convert hex 30 to decimal 48:
take the first hex digit, here 3; multiply it by 16, getting 3 * 16 = 48
add the value of the second hex digit, here 0, getting 48 + 0 = 48, which is your final answer
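A minimal C sketch of that digit arithmetic (the helper names hexDigit and hexPairToByte are mine):

#include <stdio.h>

/* Convert one hex character to its value 0-15 (assumes valid input). */
static unsigned char hexDigit(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return c - 'a' + 10;  /* lowercase a-f */
}

/* Combine two hex characters into one byte: high digit * 16 + low digit. */
static unsigned char hexPairToByte(char hi, char lo)
{
    return hexDigit(hi) * 16 + hexDigit(lo);
}

int main(void)
{
    printf("%d\n", hexPairToByte('3', '0'));  /* prints 48 */
    return 0;
}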
With the char array in ch and the byte array in out:
byte conv[23] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1, -1, 10, 11, 12, 13, 14, 15};

// Loop over the char array from 0 to 9, stepping by 2
int j = 0;
for (int i = 0; i < 10; i += 2) {
    out[j] = conv[ch[i] - '0'] * 16 + conv[ch[i+1] - '0'];
    j++;
}
Untested.
The trick is in the array conv. It's a fragment of an ASCII table, starting with the character '0'. The -1's are for the junk between '9' and 'A'.
Oh, this is also not a safe or defensive routine. Bad input will result in crashes, glitches, etc.