How to convert a 10-bit unsigned byte array to a uint16 array in C?

I have an image in binary format.
Each pixel is 10 bits wide.
The pixels are packed consecutively: the first byte holds bits [9:2] of the first pixel, and the top of the second byte holds bits [1:0].
Here bit 9 is the MSB and bit 0 the LSB.
How can I convert them to a 16-bit array?
E.g:
Stored as 8-bit bytes:
0b10000011,
0b10101100,
0b10011001,
0b11000000,
0b10101000,
Actual data, i.e. the real pixel values I want as 16-bit:
0b1000001110,
0b1011001001,
0b1001110000,
0b0010101000,

It seems you want to convert data like this
img[0] = AAAAAAAA
img[1] = AABBBBBB
img[2] = BBBBCCCC
img[3] = CCCCCCDD
img[4] = DDDDDDDD
to data like this:
array[0] = 000000AAAAAAAAAA
array[1] = 000000BBBBBBBBBB
array[2] = 000000CCCCCCCCCC
array[3] = 000000DDDDDDDDDD
It can be done like this:
array[0] = ((img[0] << 2) | (img[1] >> 6)) & 0x3ff;
array[1] = ((img[1] << 4) | (img[2] >> 4)) & 0x3ff;
array[2] = ((img[2] << 6) | (img[3] >> 2)) & 0x3ff;
array[3] = ((img[3] << 8) | img[4] ) & 0x3ff;
To convert multiple blocks, a for loop is useful:
for (int i = 0; (i + 1) * 5 <= num_of_bytes_in_img; i++) {
    array[i * 4 + 0] = ((img[i * 5 + 0] << 2) | (img[i * 5 + 1] >> 6)) & 0x3ff;
    array[i * 4 + 1] = ((img[i * 5 + 1] << 4) | (img[i * 5 + 2] >> 4)) & 0x3ff;
    array[i * 4 + 2] = ((img[i * 5 + 2] << 6) | (img[i * 5 + 3] >> 2)) & 0x3ff;
    array[i * 4 + 3] = ((img[i * 5 + 3] << 8) |  img[i * 5 + 4]      ) & 0x3ff;
}
(This loop won't convert the last few bytes if the image size is not a multiple of 5, so treat them separately, or pad img so that running the loop to the end of the image cannot read out of range.)
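Putting the pieces together, here is a minimal self-contained sketch (the name unpack_10bit and its exact signature are illustrative, not from the original post; it assumes the length is a multiple of 5):

```c
#include <stddef.h>
#include <stdint.h>

/* Unpacks groups of 5 bytes (4 x 10-bit pixels, MSB first) into
   uint16_t values.  Assumes len is a multiple of 5; out must have room
   for (len / 5) * 4 entries.  Returns the number of pixels written. */
size_t unpack_10bit(const uint8_t *img, size_t len, uint16_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i + 5 <= len; i += 5) {
        /* each byte promotes to int before the shift, so no bits are lost */
        out[n++] = (uint16_t)(((img[i]     << 2) | (img[i + 1] >> 6)) & 0x3ff);
        out[n++] = (uint16_t)(((img[i + 1] << 4) | (img[i + 2] >> 4)) & 0x3ff);
        out[n++] = (uint16_t)(((img[i + 2] << 6) | (img[i + 3] >> 2)) & 0x3ff);
        out[n++] = (uint16_t)(((img[i + 3] << 8) |  img[i + 4]      ) & 0x3ff);
    }
    return n;
}
```

Fed the five example bytes from the question, this yields the four 10-bit values listed above.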

Related

Why does data packing 4 integers into a 32 bit integer have different results in Nextion and Teensy(Arduino compatible)

I'm controlling a Teensy 3.5 with a Nextion touchscreen. On the Nextion, the following code packs four 8-bit integers into a 32-bit integer:
sys0=vaShift_24.val<<8|vaShift_16.val<<8|vaShift_8.val<<8|vaShift_0.val
Using the same shift amount (8) has a different result on the Teensy; however, the following generates the same result:
combinedValue = (v24 << 24) | (v16 << 16) | (v08 << 8) | (v00);
I'm curious why these shifts work differently.
Nextion documentation: https://nextion.tech/instruction-set/
//Nextion:
vaShift_24.val=5
vaShift_16.val=4
vaShift_8.val=1
vaShift_0.val=51
sys0=vaShift_24.val<<8|vaShift_16.val<<8|vaShift_8.val<<8|vaShift_0.val
//Result is 84148531
//Teensy, Arduino, C++:
value24 = 5;
value16 = 4;
value8 = 1;
value0 = 51;
packedValue = (value24 << 24) | (value16 << 16) | (value8 << 8) | (value0);
Serial.print("24 to 0: ");
Serial.println(packedValue);
packedValue = (value24 << 8) | (value16 << 8) | (value8 << 8) | (value0);
Serial.print("8: ");
Serial.println(packedValue);
//Result:
//24 to 0: 84148531
//8: 1331
Problem seems to be in this line:
sys0=vaShift_24.val<<8|vaShift_16.val<<8|vaShift_8.val<<8|vaShift_0.val
You are shifting by 8 in many places. Presumably you want:
sys0 = vaShift_24.val << 24 | vaShift_16.val << 16 | vaShift_8.val << 8 | vaShift_0.val
Now the result from bytes 5, 4, 1, and 51 should be, in hex,
0x05040133.
If you instead see
0x33010405
it means you would also have a byte-order issue. But probably not.
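As for why the repeated `<<8` works on the Nextion at all: its instruction set reportedly evaluates expressions strictly left to right, with no operator precedence, so each `<<8` applies to the whole value accumulated so far. A hedged sketch (the name pack_left_to_right is illustrative) showing that this left-to-right reading reproduces 84148531 in C:

```c
#include <stdint.h>

/* Simulates strict left-to-right evaluation of
   v24<<8 | v16<<8 | v8<<8 | v0 -- each <<8 shifts the entire running
   value, so v24 ends up shifted by 24, v16 by 16, and v8 by 8. */
uint32_t pack_left_to_right(uint32_t v24, uint32_t v16, uint32_t v8, uint32_t v0)
{
    return ((((((v24 << 8) | v16) << 8) | v8) << 8) | v0);
}
```

With 5, 4, 1, 51 this matches both the Nextion output and the explicit (v24 << 24) | (v16 << 16) | (v8 << 8) | v0 form.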

k&r exercise 2-6 "setbits"

I've seen the answer here: http://clc-wiki.net/wiki/K%26R2_solutions:Chapter_2:Exercise_6
and I've tested the first one, with these values:
x = 29638;
y = 999;
p = 10;
n = 8;
return (x & ((~0 << (p + 1)) | (~(~0 << (p + 1 - n)))))
Working it out on paper I get 6, but the program returns 28678.
This is the part I worked out:
111001111000110
&000100000000111
In the result, the left-most three bits are 1's, like in x, but the description of bitwise & says:
The output of bitwise AND is 1 if the corresponding bits of all operands are 1. If either bit of an operand is 0, the corresponding result bit evaluates to 0.
So why does it return a number with those three bits set to 1?
Here we go, one step at a time (using 16-bit numbers). We start with:
(x & ((~0 << (p + 1)) | (~(~0 << (p + 1 - n)))))
Substituting in numbers (in decimal):
(29638 & ((~0 << (10 + 1)) | (~(~0 << (10 + 1 - 8)))))
Totalling up the bit shift amounts gives:
(29638 & ((~0 << 11) | (~(~0 << 3))))
Rewriting the numbers in binary (including the shift amounts: 11 is 1011 and 3 is 0011) and applying the ~0s...
(0111001111000110 & ((1111111111111111 << 1011) | (~(1111111111111111 << 0011))))
After performing the shifts we get:
(0111001111000110 & (1111100000000000 | (~ 1111111111111000)))
Applying the other bitwise-NOT (~):
(0111001111000110 & (1111100000000000 | 0000000000000111))
And the bitwise-OR (|):
0111001111000110 & 1111100000000111
And finally the bitwise-AND (&):
0111000000000110
So we then have binary 0111000000000110, which is 2 + 4 + 4096 + 8192 + 16384, which is 28678.
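The walkthrough can be checked mechanically. A small sketch (the name setbits_mask_demo is illustrative; it uses unsigned masks to avoid the undefined behaviour of left-shifting the negative value ~0, but yields the same 28678 for these inputs):

```c
/* K&R exercise 2-6 mask from the walkthrough, for x = 29638, p = 10,
   n = 8.  Unsigned masks sidestep the UB of shifting ~0 (negative)
   left, while producing the same bit patterns as the answer shows. */
unsigned setbits_mask_demo(unsigned x, int p, int n)
{
    unsigned keep_high = ~0u << (p + 1);        /* ...1111100000000000 */
    unsigned keep_low  = ~(~0u << (p + 1 - n)); /* ...0000000000000111 */
    return x & (keep_high | keep_low);
}
```

The asker's 6 is just the low three bits (110 & 111); the correct high mask is 1111100000000000, not 0001000000000000, which is where the paper calculation went wrong.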

Why is 1 << 3 equal to 8 and not 6? [duplicate]

This question already has answers here:
What are bitwise shift (bit-shift) operators and how do they work?
(10 answers)
Closed 8 years ago.
In C I have this enum:
enum {
    STAR_NONE  = 1 << 0, // 1
    STAR_ONE   = 1 << 1, // 2
    STAR_TWO   = 1 << 2, // 4
    STAR_THREE = 1 << 3  // 8
};
Why is 1 << 3 equal to 8 and not 6?
Shifting a number to the left is equivalent to multiplying that number by 2^n, where n is the distance you shifted it.
To see why, let's take an example: suppose we have the number 5 and we shift it 2 places to the left. 5 in binary is 00000101, i.e.
0×2^7 + 0×2^6 + 0×2^5 + 0×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 1×2^2 + 1×2^0 = 4 + 1 = 5
Now, 5 << 2 is 00010100, i.e.
0×2^7 + 0×2^6 + 0×2^5 + 1×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 0×2^0 = 1×2^4 + 1×2^2 = 16 + 4 = 20
But we can rewrite 1×2^4 + 1×2^2 as
1×2^4 + 1×2^2 = (1×2^2 + 1×2^0)×2^2 = 5×4 = 20
and from this it is possible to conclude that 5 << 2 is equivalent to 5×2^2. In general,
k << m = k×2^m
So in your case 1 << 3 is equivalent to 1×2^3 = 8, since
1 << 3 → 0b00000001 << 3 → 0b00001000 → 2^3 → 8
If you instead did 3 << 1, then
3 << 1 → 0b00000011 << 1 → 0b00000110 → 2^2 + 2^1 → 4 + 2 → 6
1 in binary is 0000 0001
by shifting to left by 3 bits (i.e. 1 << 3) you get
0000 1000
Which is 8 in decimal and not 6.
Because two to the power of three is eight.
Think in binary.
You are actually doing this.
0001 shifted 3 times to the left = 1000 = 8
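As a quick sketch tying the answers back to the enum (the has_flag helper is illustrative, not from the original): 6 does appear, but as the combination STAR_ONE | STAR_TWO (2 + 4), which is equivalently 3 << 1, not 1 << 3.

```c
/* Power-of-two enumerators give each flag its own bit, so flags
   combine with | and are tested with &. */
enum {
    STAR_NONE  = 1 << 0, /* 0b0001 = 1 */
    STAR_ONE   = 1 << 1, /* 0b0010 = 2 */
    STAR_TWO   = 1 << 2, /* 0b0100 = 4 */
    STAR_THREE = 1 << 3  /* 0b1000 = 8 */
};

/* Illustrative helper: nonzero when all bits of flag are set in flags. */
static int has_flag(int flags, int flag)
{
    return (flags & flag) == flag;
}
```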

Counting consecutive 1's in C [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Finding consecutive bit string of 1 or 0
Is it possible to count, from the left, the consecutive 1's in an integer?
So: the total number of consecutive set bits starting from the most significant bit.
Using only:
! ~ & ^ | + << >>
-1 (0xFFFFFFFF) would return 32
0xFFF0F0F0 would return 12 (the leading FFF is 111111111111)
No loops, unfortunately.
Can assume the machine:
Uses 2s complement, 32-bit representations of integers.
Performs right shifts arithmetically.
Has unpredictable behavior when shifting an integer by more
than the word size.
I'm forbidden to:
Use any control constructs such as if, do, while, for, switch, etc.
Define or use any macros.
Define any additional functions in this file.
Call any functions.
Use any other operations, such as &&, ||, -, or ?:
Use any form of casting.
Use any data type other than int. This implies that you
cannot use arrays, structs, or unions.
I've looked at
Finding consecutive bit string of 1 or 0
It's using loops, which I can't use. I don't even know where to start.
(Yes, this is an assignment, but I'm simply asking those of you skilled enough for help. I've done pretty much all of those I need to do, but this one just won't work.)
You can do it like this:
int result = clz(~x);
i.e. invert all the bits and then count leading zeroes.
clz returns the number of leading zero bits (also known as nlz; not to be confused with ffs, which finds the first set bit counting from the least significant end) - see here for implementation details: http://en.wikipedia.org/wiki/Find_first_set#Algorithms
Here you are. The function argument may be signed or unsigned; the algorithm is independent of signedness.
int leftmost_ones(int x)
{
    x = ~x;
    x = x | x >> 1 | x >> 2 | x >> 3 | x >> 4 | x >> 5 | x >> 6 | x >> 7 |
        x >> 8 | x >> 9 | x >> 10 | x >> 11 | x >> 12 | x >> 13 | x >> 14 |
        x >> 15 | x >> 16 | x >> 17 | x >> 18 | x >> 19 | x >> 20 | x >> 21 |
        x >> 22 | x >> 23 | x >> 24 | x >> 25 | x >> 26 | x >> 27 | x >> 28 |
        x >> 29 | x >> 30 | x >> 31;
    x = ~x;
    return (x & 1) + (x >> 1 & 1) + (x >> 2 & 1) + (x >> 3 & 1) + (x >> 4 & 1) +
        (x >> 5 & 1) + (x >> 6 & 1) + (x >> 7 & 1) + (x >> 8 & 1) + (x >> 9 & 1) +
        (x >> 10 & 1) + (x >> 11 & 1) + (x >> 12 & 1) + (x >> 13 & 1) + (x >> 14 & 1) +
        (x >> 15 & 1) + (x >> 16 & 1) + (x >> 17 & 1) + (x >> 18 & 1) + (x >> 19 & 1) +
        (x >> 20 & 1) + (x >> 21 & 1) + (x >> 22 & 1) + (x >> 23 & 1) + (x >> 24 & 1) +
        (x >> 25 & 1) + (x >> 26 & 1) + (x >> 27 & 1) + (x >> 28 & 1) + (x >> 29 & 1) +
        (x >> 30 & 1) + (x >> 31 & 1);
}
A version with some optimization:
int leftmost_ones(int x)
{
    x = ~x;
    x |= x >> 16;
    x |= x >> 8;
    x |= x >> 4;
    x |= x >> 2;
    x |= x >> 1;
    x = ~x;
    return (x & 1) + (x >> 1 & 1) + (x >> 2 & 1) + (x >> 3 & 1) + (x >> 4 & 1) +
        (x >> 5 & 1) + (x >> 6 & 1) + (x >> 7 & 1) + (x >> 8 & 1) + (x >> 9 & 1) +
        (x >> 10 & 1) + (x >> 11 & 1) + (x >> 12 & 1) + (x >> 13 & 1) + (x >> 14 & 1) +
        (x >> 15 & 1) + (x >> 16 & 1) + (x >> 17 & 1) + (x >> 18 & 1) + (x >> 19 & 1) +
        (x >> 20 & 1) + (x >> 21 & 1) + (x >> 22 & 1) + (x >> 23 & 1) + (x >> 24 & 1) +
        (x >> 25 & 1) + (x >> 26 & 1) + (x >> 27 & 1) + (x >> 28 & 1) + (x >> 29 & 1) +
        (x >> 30 & 1) + (x >> 31 & 1);
}
Can you use a loop?
unsigned int mask = 0x80000000u; /* must be unsigned: >>= has to shift in zeros */
int count = 0;
while (number & mask) {
    count += 1;
    mask >>= 1;
}
I think it's doable, by basically unrolling the typical loop and being generally annoying.
How about this: an expression that is 1 if and only if the answer is 1? I offer:
const int ok1 = !((number & 0xc0000000) - 0x80000000);
The ! and subtraction are to work around that someone broke the == key on our keyboard, of course.
And then, an expression that is 1 if and only if the answer is 2:
const int ok2 = !((number & 0xe0000000) - 0xc0000000);
If you continue to form these, the final answer is their sum:
const int answer = ok1 + ok2 + ... + ok32;
By the way, I can't seem to remember being given these weirdly restricted assignments when I was in school, I guess times have changed. :)
int count_consecutive_bits(unsigned int x) {
    int res = 0;
    while (x & 0x80000000) { ++res; x <<= 1; }
    return res;
}
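For completeness, a compact runnable sketch of the same smearing idea (the name leading_ones is illustrative; it uses unsigned arithmetic and an ordinary loop for the final popcount, both of which the original assignment forbids, but which keep the demo short):

```c
#include <stdint.h>

/* Counts consecutive 1 bits from the MSB of a 32-bit value using the
   smearing trick from the longer answer above: invert, OR-smear the
   leading set bit of ~v down through all lower positions, invert back,
   and count the surviving leading 1s. */
int leading_ones(uint32_t v)
{
    uint32_t x = ~v;
    x |= x >> 16;
    x |= x >> 8;
    x |= x >> 4;
    x |= x >> 2;
    x |= x >> 1;
    x = ~x;               /* now a mask of v's leading 1 bits */

    int count = 0;
    while (x) {           /* plain popcount loop (demo only) */
        count += (int)(x & 1u);
        x >>= 1;
    }
    return count;
}
```

This reproduces the examples from the question: 32 for 0xFFFFFFFF and 12 for 0xFFF0F0F0.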

Extending bit twiddling hack for Log_2 to 64 bit

I've implemented the code here in C# to get the MSB of an int. I'm not certain what I need to change in the log reference table and the main code to extend it to 64 bits.
The only thing the text says is that it will take 2 more CPU ops, so I deduce the change is minor.
The table does not need to be changed. One more level of if() is needed:
if (ttt = v >> 32)
{
    if (tt = ttt >> 16)
        r = (t = tt >> 8) ? 56 + LogTable256[t] : 48 + LogTable256[tt];
    else
        r = (t = ttt >> 8) ? 40 + LogTable256[t] : 32 + LogTable256[ttt];
}
else
{
    if (tt = v >> 16)
        r = (t = tt >> 8) ? 24 + LogTable256[t] : 16 + LogTable256[tt];
    else
        r = (t = v >> 8) ? 8 + LogTable256[t] : LogTable256[v];
}
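For reference, a runnable C sketch of the extended hack (the names init_log_table and log2_64 are illustrative; the table is built lazily at run time here rather than with the original's static initializer macro):

```c
#include <stdint.h>

/* 256-entry log table from the bit-twiddling-hacks page. */
static unsigned char LogTable256[256];

static void init_log_table(void)
{
    LogTable256[0] = LogTable256[1] = 0; /* log2(0) undefined; 0 as sentinel */
    for (int i = 2; i < 256; i++)
        LogTable256[i] = (unsigned char)(1 + LogTable256[i / 2]);
}

/* floor(log2(v)) for nonzero 64-bit v: the 32-bit lookup-table hack
   with one extra level of branching, as described above. */
int log2_64(uint64_t v)
{
    if (LogTable256[255] == 0)   /* build table on first use */
        init_log_table();

    uint64_t t, tt, ttt;
    if ((ttt = v >> 32)) {
        if ((tt = ttt >> 16))
            return (t = tt >> 8) ? 56 + LogTable256[t] : 48 + LogTable256[tt];
        return (t = ttt >> 8) ? 40 + LogTable256[t] : 32 + LogTable256[ttt];
    }
    if ((tt = v >> 16))
        return (t = tt >> 8) ? 24 + LogTable256[t] : 16 + LogTable256[tt];
    return (t = v >> 8) ? 8 + LogTable256[t] : LogTable256[v];
}
```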
