UTF-16 decoder not working as expected - c

I have a part of my Unicode library that decodes UTF-16 into raw Unicode code points. However, it isn't working as expected.
Here's the relevant part of the code (omitting UTF-8 and string manipulation stuff):
typedef struct string {
    unsigned long length;
    unsigned *data;
} string;

string *upush(string *s, unsigned c) {
    if (!s->length) s->data = (unsigned *) malloc((s->length = 1) * sizeof(unsigned));
    else s->data = (unsigned *) realloc(s->data, ++s->length * sizeof(unsigned));
    s->data[s->length - 1] = c;
    return s;
}

typedef struct string16 {
    unsigned long length;
    unsigned short *data;
} string16;

string u16tou(string16 old) {
    unsigned long i, cur = 0, need = 0;
    string new;
    new.length = 0;
    for (i = 0; i < old.length; i++)
        if (old.data[i] < 0xd800 || old.data[i] > 0xdfff) upush(&new, old.data[i]);
        else
            if (old.data[i] > 0xdbff && !need) {
                cur = 0; continue;
            } else if (old.data[i] < 0xdc00) {
                need = 1;
                cur = (old.data[i] & 0x3ff) << 10;
                printf("cur 1: %lx\n", cur);
            } else if (old.data[i] > 0xdbff) {
                cur |= old.data[i] & 0x3ff;
                upush(&new, cur);
                printf("cur 2: %lx\n", cur);
                cur = need = 0;
            }
    return new;
}
How does it work?
string is a struct that holds 32-bit values, and string16 is for 16-bit values like UTF-16. All upush does is add a full Unicode code point to a string, reallocating memory as needed.
u16tou is the part that I'm focusing on. It loops through the string16, passing non-surrogate values through as normal, and converting surrogate pairs into full code points. Misplaced surrogates are ignored.
The first surrogate in a pair has its lowest 10 bits shifted 10 bits to the left, so it forms the high 10 bits of the final code point. The second surrogate contributes its lowest 10 bits as the low 10 bits, and the completed code point is appended to the string.
The problem?
Let's try the highest code point, shall we?
U+10FFFD, the last valid Unicode code point, is encoded as 0xDBFF 0xDFFD in UTF-16. Let's try decoding that.
string16 b;
b.length = 2;
b.data = (unsigned short *) malloc(2 * sizeof(unsigned short));
b.data[0] = 0xdbff;
b.data[1] = 0xdffd;
string a = u16tou(b);
puts(utoc(a));
Using the utoc function (not shown; I know it's working - see below) to convert the result back to a UTF-8 char * for printing, I can see in my terminal that I'm getting U+0FFFFD, not U+10FFFD, as a result.
In the calculator
Doing all the conversions manually in gcalctool gives the same wrong answer, so my syntax isn't the problem - the algorithm is. The algorithm seems right to me, though, and yet it ends in the wrong answer.
What am I doing wrong?

You need to add 0x10000 when decoding the surrogate pair; to quote RFC 2781, the step you're missing is number 5:
1) If W1 < 0xD800 or W1 > 0xDFFF, the character value U is the value of W1. Terminate.
2) Determine if W1 is between 0xD800 and 0xDBFF. If not, the sequence is in error and no valid character can be obtained using W1. Terminate.
3) If there is no W2 (that is, the sequence ends with W1), or if W2 is not between 0xDC00 and 0xDFFF, the sequence is in error. Terminate.
4) Construct a 20-bit unsigned integer U', taking the 10 low-order bits of W1 as its 10 high-order bits and the 10 low-order bits of W2 as its 10 low-order bits.
5) Add 0x10000 to U' to obtain the character value U. Terminate.
i.e. one fix would be to add an extra line after your first read:
cur = (old.data[i] & 0x3ff) << 10;
cur += 0x10000;
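With that fix, your example works out. Stepping through the arithmetic:
W1 = 0xDBFF: low 10 bits = 0x3FF, shifted left by 10 = 0xFFC00
W2 = 0xDFFD: low 10 bits = 0x3FD
U' = 0xFFC00 | 0x3FD = 0xFFFFD (the U+0FFFFD you were seeing)
U  = 0xFFFFD + 0x10000 = 0x10FFFD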

You seem to be missing an offset of 0x10000.
According to this Wikipedia page, UTF-16 surrogate pairs are constructed like this:
UTF-16 represents non-BMP characters (U+10000 through U+10FFFF) using two code units, known as a surrogate pair. First 0x10000 is subtracted from the code point to give a 20-bit value. This is then split into two 10-bit values, each of which is represented as a surrogate, with the most significant half placed in the first surrogate.
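Running your example in that direction: 0x10FFFD - 0x10000 = 0xFFFFD; the high 10 bits are 0x3FF and the low 10 bits are 0x3FD, giving 0xD800 + 0x3FF = 0xDBFF and 0xDC00 + 0x3FD = 0xDFFD - exactly the pair you started from. Decoding therefore has to add the 0x10000 back.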

Related

sprintf to convert hexadecimal array to decimal char array only reads first byte

I have an array:
unsigned char datalog[4];
datalog[0] = 0;
datalog[1] = 0xce;
datalog[2] = 0x50;
datalog[3] = 0xa3;
These represent the hex value 0xce50a3. Its decimal value is 13521059.
I need to convert this hex value to a decimal array, preferably using sprintf, so that the final outcome will be:
finalarray[0] = '1';
finalarray[1] = '3';
finalarray[2] = '5';
finalarray[3] = '2';
finalarray[4] = '1';
finalarray[5] = '0';
finalarray[6] = '5';
finalarray[7] = '9';
I've tried several combinations of sprintf inputs, including concatenating my hex array into unsigned long datalogvalue = 0xce50a3. But sprintf only reads its first byte when it converts.
ex:
sprintf(finalarray, "%d", *(unsigned long *)datalog);
yields:
finalarray[0] = '2';
finalarray[1] = '0';
finalarray[2] = '6';
finalarray[3] = ' ';
.....
206 is the decimal representation of 0xce. So it's only converting the first hex byte and not the rest.
Any thoughts on how to convert the entire unsigned long into a decimal array?
As some others have mentioned, attempting to read the bytes of an array in order as a number will be system-dependent as Big Endian and Little Endian systems will give different results.
Furthermore, type-punning through pointer-trickery is undefined behavior as it breaks strict aliasing. The legal way to type pun to a type other than a char-family array involves using unions to represent the data in more than one fashion. Due to the above Endian issue, though, you should not do that for this problem and instead do the bit-shifting method as mentioned in R Sahu's answer.
A simple solution that does not depend on endianness, int sizes, or pointer tricks:
Form the value:
/* LU to use unsigned long math */
unsigned long value =
    ((datalog[0]*256LU + datalog[1])*256 + datalog[2])*256 + datalog[3];
Print it:
sprintf(finalarray, "%lu", value);
Altogether:
sprintf(finalarray, "%lu",
    ((datalog[0]*256LU + datalog[1])*256 + datalog[2])*256 + datalog[3]);
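As a complete snippet (a sketch; the buffer size is my assumption, sized for the largest 32-bit value):
unsigned char datalog[4] = { 0x00, 0xce, 0x50, 0xa3 };
char finalarray[11]; /* 10 digits for 4294967295, plus the trailing '\0' */
unsigned long value =
    ((datalog[0]*256LU + datalog[1])*256 + datalog[2])*256 + datalog[3];
sprintf(finalarray, "%lu", value); /* finalarray now holds "13521059" */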
The outcome of casting a char* to unsigned long* and dereferencing that pointer depends on the endianness of your system. Unless efficiency of this particular calculation is critical for performance of your program, don't use such tricks. Use simple logic.
int res = (datalog[0] << 24) +
          (datalog[1] << 16) +
          (datalog[2] << 8) +
          datalog[3];
sprintf(finalarray, "%d", res);
If you are required to use unsigned long for your type, make sure to use the right format specifier for unsigned long in the call to sprintf, and cast before shifting so the arithmetic is done in the wider type:
unsigned long res = ((unsigned long)datalog[0] << 24) +
                    ((unsigned long)datalog[1] << 16) +
                    ((unsigned long)datalog[2] << 8) +
                    datalog[3];
sprintf(finalarray, "%lu", res);
First and foremost, endianness makes things a bit troublesome here.
In order to be able to reinterpret your buffer as a 32 bit int you would have to take endianness into consideration when packing.
For example, on my system which is little-endian, datalog would be interpreted as: 2739981824 if converted to a 32 bit unsigned int.
Hence I would have to pack my data according to datalog2 in the example below in order to get the desired 13521059.
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>

int main() {
    uint8_t datalog[4];
    datalog[0] = 0;
    datalog[1] = 0xce;
    datalog[2] = 0x50;
    datalog[3] = 0xa3;
    uint32_t temp = *((uint32_t*) datalog);
    printf("%u\n", temp); // 2739981824

    uint8_t datalog2[4];
    datalog2[0] = 0xa3;
    datalog2[1] = 0x50;
    datalog2[2] = 0xce;
    datalog2[3] = 0;
    uint32_t temp2 = *((uint32_t*) datalog2);
    printf("%u\n", temp2); // 13521059
    return 0;
}
There is, however, another problem with what you are asking.
If I interpret your question correctly, you would like to end up with another array where each of the base-10 digits making up 13521059 gets its own index.
Each decimal digit carries log2(10) ≈ 3.32 bits of information, so the digits do not line up with any whole number of bits per index; you cannot get there by reinterpreting memory.
Therefore, in order to get an array with the packing that you suggest, you have to convert the value digit by digit.
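A minimal sketch of that manual conversion (my own illustration, not from the original answer; it assumes the 32-bit value has already been assembled):
#include <stdint.h>

/* Fill digits[] with the base-10 digits of v, most significant first.
   Returns the number of digits written (1..10). */
int to_digit_array(uint32_t v, uint8_t digits[10])
{
    uint8_t tmp[10];
    int n = 0, i;
    do {
        tmp[n++] = v % 10; /* peel off the least significant digit */
        v /= 10;
    } while (v != 0);
    for (i = 0; i < n; i++) /* reverse into most-significant-first order */
        digits[i] = tmp[n - 1 - i];
    return n;
}
For 13521059 this fills {1,3,5,2,1,0,5,9} and returns 8.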
Due to endianness, the bytes do not appear in memory in the order you think they do:
IDEOne Link
#include <stdio.h>

int main(void) {
    unsigned char datalog[4];
    char finalarray[20] = {0};
    datalog[0] = 0xa3;
    datalog[1] = 0x50;
    datalog[2] = 0xce;
    datalog[3] = 0x00;
    /* assumes a little-endian system with a 32-bit unsigned long */
    sprintf(finalarray, "%lu", *(unsigned long*)datalog);
    printf("Answer: %s\n", finalarray);
    return 0;
}
Output
Answer: 13521059

In C, how am I able to use the printf() function to 'store' a string?

I am attempting to build a 16-bit representation of a floating-point number using unsigned integers. The fraction field here deviates from the standard 10 bits and is 8 bits wide - implying the exponent field is 7 bits and the sign is 1 bit.
The code I have is as follows:
bit16 float_16(bit16 sign, bit16 exp, bit16 frac) {
    //make the sign the number before binary point, make the fraction binary.
    //concatenate the sign then exponent then fraction
    bit16 result;
    int theExponent;
    theExponent = exp + 63; // bias = 2^(7-1) - 1 = 2^6 - 1 = 63
    //printf("%d",sign);
    int c, k;
    for (c = 6; c > 0; c--)
    {
        k = theExponent >> c;
        if (k & 1)
            printf("1");
        else
            printf("0");
    }
    for (c = 7; c >= 0; c--)
    {
        k = frac >> c;
        if (k & 1)
            printf("1");
        else
            printf("0");
    }
    //return result;
}
My thinking is to 'recreate' the 16-bit sequence by concatenating these fields as shown, but printed characters can't be used by further code. Is there a way to store the final 16-bit sequence, after everything has been 'printed', in a variable which can then be treated as an unsigned integer? Or is there a more optimal way to do this procedure?
While printf will not work in this case (you can't 'store' its result), you can use sprintf.
int sprintf ( char * output_str, const char * format, ... );
sprintf writes formatted data to string
Composes a string with the same text that would be printed if format was used on printf, but instead of being printed (or displayed on the console), the content is stored as a C string in the buffer pointed by output_str.
The size of the buffer should be large enough to contain the entire resulting string. See Buffer Overflow.
A terminating null character (\0) will automatically be appended at the end of your output_str.
From output_str to an integer variable
You can use the atoi function to do this. You can get your answer in an integer variable like this:
int i = atoi (output_str);
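For example, the question's printf loops could write into a buffer instead (a sketch reusing the question's variables; the buffer name and size are my own):
char bits[17]; /* up to 16 binary digits plus the trailing '\0' */
int pos = 0;
for (c = 6; c > 0; c--)
    pos += sprintf(bits + pos, "%d", (theExponent >> c) & 1);
for (c = 7; c >= 0; c--)
    pos += sprintf(bits + pos, "%d", (frac >> c) & 1);
/* bits now holds the digit string; note that atoi(bits) reads it as a
   decimal number, so for wide fields it can overflow an int */
int i = atoi(bits);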

32-bit & 16-bit arithmetic on 8-bit microprocessor

I'm writing some code for an old 8-bit microprocessor (the Hu6280 - a WDC 65C02 derivative in the old NEC PC-Engine console) with 32KB of RAM and up to 2.5MB of data/code ROM. The language is a variant of Small-C but is limited to just the two following basic types:
char (1 byte)
int (2 byte)
It has no struct support and no long int support.
I'm writing a FAT filesystem library to interface with a SD card reader that was primarily developed for loading game ROM images; however, an enterprising hacker has written some assembly to allow raw sector reading from the console side. He achieves this by stuffing the four 8-bit values of a 32-bit sector address into 4 consecutive memory addresses (char address[4];).
My C code leverages his work to read (for the moment) the DOS MBR boot sector and partition type information off the SD card. I've got MBR checksum verification and FAT partition detection working.
However, as I need to support FAT32 (which is what the FPGA on the SD card device supports), most of the sector and cluster arithmetic to look up directory entries and files will be based on 32-bit LBA sector values.
What easy mechanisms do I have to add/subtract/multiply 8/16/32-bit integers, given the above limitations? Does anyone have any ready-made C routines to handle this? Maybe something along the lines of:
char int1[4], int2[4], int3[4];
int1[0] = 1;
int1[1] = 2;
int1[2] = 3;
int1[3] = 4;
int2[0] = 4;
int2[1] = 3;
int2[2] = 2;
int2[3] = 1;
int3 = mul_32(int1, int2);
int3 = add_32(int1, int2);
int3 = sub_32(int1, int2);
EDIT: Based on the above replies, this is what I've come up with so far - this is untested as yet and I'll need to do similar for multiplication and subtraction:
char_to_int32(int32_result, int8)
char* int32_result;
char int8;
{
    /*
       Takes an unsigned 8bit number
       and converts to a packed 4 byte array
    */
    int32_result[0] = 0x00;
    int32_result[1] = 0x00;
    int32_result[2] = 0x00;
    int32_result[3] = int8;
    return 0;
}

int_to_int32(int32_result, int16)
char* int32_result;
int int16;
{
    /*
       Takes an unsigned 16bit number
       and converts to a packed 4 byte array
    */
    int32_result[0] = 0x00;
    int32_result[1] = 0x00;
    int32_result[2] = (int16 >> 8);
    int32_result[3] = (int16 & 0xff);
    return 0;
}

int32_is_zero(int32)
char* int32;
{
    /*
       Is a packed 4 byte array == 0
       returns 1 if true, otherwise 0
    */
    if ((int32[0] == 0) & (int32[1] == 0) & (int32[2] == 0) & (int32[3] == 0)) {
        return 1;
    } else {
        return 0;
    }
}

add_32(int32_result, int32_a, int32_b)
char* int32_result;
char* int32_a;
char* int32_b;
{
    /*
       Takes two 32bit values, stored as 4 bytes each -
       adds and stores the result.
       Returns 0 on success, 1 on error or overflow.
    */
    int sum;
    char i;
    char carry;
    carry = 0x00;
    /* loop over each byte of the 4byte array */
    for (i = 4; i != 0; i--) {
        /* sum the two 1 byte numbers as a 2 byte int */
        sum = int32_a[i-1] + int32_b[i-1] + carry;
        /* would integer overflow occur with this sum? */
        if (sum > 0x00ff) {
            /* store the most significant byte for next loop */
            carry = (sum >> 8);
        } else {
            /* no carry needed */
            carry = 0x00;
        }
        /* store the least significant byte */
        int32_result[i-1] = (sum & 0xff);
    }
    /* Has overflow occurred (ie number > 32bit) */
    if (carry != 0) {
        return 1;
    } else {
        return 0;
    }
}
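For example (a sketch of my own), packing a 16-bit sector number with the int_to_int32 helper above:
char lba[4];
int_to_int32(lba, 0x1234);
/* lba now holds 0x00, 0x00, 0x12, 0x34 */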
EDIT 2: Here's an updated and tested version of the emulated 32bit + 32bit integer add code. It works with all values I've tried so far. Overflow for values bigger than a 32bit unsigned integer is not handled (will not be required for my purposes):
add_int32(int32_result, int32_a, int32_b)
char* int32_result;
char* int32_a;
char* int32_b;
{
    /*
       Takes two 32bit values, stored as 4 bytes each -
       adds and stores the result.
       Returns 0 on success, 1 on error or overflow.
    */
    int sum;
    char i, pos;
    char carry;
    zero_int32(int32_result);
    carry = 0x00;
    /* loop over each byte of the 4byte array from lsb to msb */
    for (i = 1; i < 5; i++) {
        pos = 4 - i;
        /* sum the two 1 byte numbers as a 2 byte int */
        sum = int32_a[pos] + int32_b[pos] + carry;
        /* would integer overflow occur with this sum? */
        if (sum > 0x00ff) {
            /* store the most significant byte for next loop */
            carry = (sum >> 8);
        } else {
            /* no carry needed */
            carry = 0x00;
        }
        /* store the least significant byte */
        int32_result[pos] = (sum & 0x00ff);
    }
    /* Has overflow occurred (ie number > 32bit) */
    if (carry != 0) {
        return 1;
    } else {
        return 0;
    }
}
I also found some references to 32bit arithmetic on some PIC controllers after searching SO a bit more:
http://web.media.mit.edu/~stefanm/yano/picc_Math32.html
Although there is some PIC assembly inline in their add/subtract code, there are some useful platform agnostic char-based C functions there that have already implemented shifts, comparisons, increment/decrement etc, which will be very useful. I will look into subtract and multiply next - thanks for the info; I guess I was looking at things and thinking they were much harder than they needed to be.
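For anyone following along, subtraction can follow the same byte-at-a-time pattern with a borrow instead of a carry. A minimal, untested sketch in the same style (sub_int32 is just a suggested name):
sub_int32(int32_result, int32_a, int32_b)
char* int32_result;
char* int32_a;
char* int32_b;
{
    /*
       Computes a - b on two packed 4 byte values, lsb in byte 3.
       Returns 0 on success, 1 if b > a (borrow out of the msb).
    */
    int diff;
    char i, pos;
    char borrow;
    borrow = 0x00;
    /* loop over each byte from lsb to msb, propagating the borrow */
    for (i = 1; i < 5; i++) {
        pos = 4 - i;
        /* widen to a 2 byte int; mask in case char is signed */
        diff = (int32_a[pos] & 0xff) - (int32_b[pos] & 0xff) - borrow;
        if (diff < 0) {
            /* borrow 256 from the next more significant byte */
            diff = diff + 0x100;
            borrow = 0x01;
        } else {
            borrow = 0x00;
        }
        int32_result[pos] = (diff & 0xff);
    }
    if (borrow != 0) {
        return 1;
    } else {
        return 0;
    }
}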
I know you know how to do this; go back to your grade-school math...
When you multiply two numbers in base 10:
12
x34
====
You do four multiplications, then add the four partial products together, right?
4x2 = 8
4x1 = 4
3x2 = 6
3x1 = 3
then
12
x34
====
0008
0040
0060
+0300
======
Now what about addition
12
+34
===
We learned to break that down into two additions:
2+4 = 6, carry a 0
1+3+carry-in of 0 = 4
With that knowledge you already have from childhood, you simply apply it. Remember that basic math works the same whether we have 2 digits operated on 2 digits or 2 million digits operated on 2 million digits.
The above uses single decimal digits, but the math works unchanged if the "digits" are base-16 values, single bits, octal digits, bytes, etc.
Your C compiler would normally handle these things for you, but if you need to synthesize them you can: the easiest digital form of multiplication works on bits (shift and add).
Addition is easier with bytes in assembly because the carry out is right there. C does not expose a carry out, so you have to do the exercise of deriving the carry using 8-bit math (it can be determined without needing a 9th bit), or you can just work in something less than 8-bit chunks - 7 bits, 4 bits, or whatever.
As Joachim pointed out, this topic has been beaten to death decades/centuries ago. At the same time it is so simple that it often doesn't warrant a lot of discussion. StackOverflow certainly has this topic covered several times over.
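For completeness, here is one way the shift-and-add approach could look under the question's char/int-only constraints - a minimal, untested sketch in the same packed-4-byte style as the question's code. It reuses zero_int32 and add_int32 from the question; shl_int32 and mul_int32 are made-up names:
shl_int32(int32)
char* int32;
{
    /* Shift a packed 4 byte value left by one bit (the top bit is lost) */
    int i;
    char carry, nextcarry;
    carry = 0x00;
    /* walk from the lsb (byte 3) to the msb (byte 0) */
    for (i = 3; i >= 0; i--) {
        if (int32[i] & 0x80) {
            nextcarry = 0x01;
        } else {
            nextcarry = 0x00;
        }
        int32[i] = ((int32[i] << 1) & 0xff) | carry;
        carry = nextcarry;
    }
    return 0;
}

mul_int32(int32_result, int32_a, int32_b)
char* int32_result;
char* int32_a;
char* int32_b;
{
    /*
       Classic shift-and-add: for each set bit of b (lsb first),
       add the correspondingly shifted copy of a into the result.
       Keeps only the low 32 bits of the product.
    */
    char shifted[4], tmp[4];
    int j, mask, k;
    zero_int32(int32_result);
    for (k = 0; k < 4; k++) {
        shifted[k] = int32_a[k];
    }
    /* byte 3 holds the lsb in this packed representation */
    for (j = 3; j >= 0; j--) {
        for (mask = 0x01; mask <= 0x80; mask = mask << 1) {
            if (int32_b[j] & mask) {
                /* add via a temporary: add_int32 zeroes its result
                   first, so the result must not alias an operand */
                add_int32(tmp, int32_result, shifted);
                for (k = 0; k < 4; k++) {
                    int32_result[k] = tmp[k];
                }
            }
            shl_int32(shifted);
        }
    }
    return 0;
}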

Bitwise memmove

What is the best way to implement a bitwise memmove? The method should take an additional destination and source bit-offset and the count should be in bits too.
I saw that ARM provides a non-standard _membitmove, which does exactly what I need, but I couldn't find its source.
BIND's bitset includes isc_bitstring_copy, but it's not efficient.
I'm aware that the C standard library doesn't provide such a method, but I also couldn't find any third-party code providing a similar method.
Assuming "best" means "easiest", you can copy bits one by one. Conceptually, an address of a bit is an object (struct) that has a pointer to a byte in memory and an index of a bit in the byte.
#include <stdint.h> // uint8_t
#include <stddef.h> // size_t

struct pointer_to_bit
{
    uint8_t* p;
    int b;
};

void membitmovebl(
    void *dest,
    const void *src,
    int dest_offset,
    int src_offset,
    size_t nbits)
{
    // Create pointers to bits (the cast drops const for the struct)
    struct pointer_to_bit d = {dest, dest_offset};
    struct pointer_to_bit s = {(uint8_t *)src, src_offset};
    // Bring the bit offsets to range (0...7)
    d.p += d.b / 8; // replace division by right-shift if bit offset can be negative
    d.b %= 8;       // replace "%=8" by "&=7" if bit offset can be negative
    s.p += s.b / 8;
    s.b %= 8;
    // Determine whether it's OK to loop forward
    if (d.p < s.p || (d.p == s.p && d.b <= s.b))
    {
        // Copy bits one by one
        for (size_t i = 0; i < nbits; i++)
        {
            // Read 1 bit
            int bit = (*s.p >> s.b) & 1;
            // Write 1 bit
            *d.p &= ~(1 << d.b);
            *d.p |= bit << d.b;
            // Advance pointers
            if (++s.b == 8)
            {
                s.b = 0;
                ++s.p;
            }
            if (++d.b == 8)
            {
                d.b = 0;
                ++d.p;
            }
        }
    }
    else
    {
        // Copy stuff backwards - essentially the same code but ++ replaced by --
    }
}
If you want to write a version optimized for speed, you will have to do copying by bytes (or, better, words), unroll loops, and handle a number of special cases (memmove does that; you will have to do more because your function is more complicated).
P.S. Oh, seeing that you call isc_bitstring_copy inefficient, you probably want the speed optimization. You can use the following idea:
Start copying bits individually until the destination is byte-aligned (d.b == 0). Then, it is easy to copy 8 bits at once, doing some bit twiddling. Do this until there are less than 8 bits left to copy; then continue copying bits one by one.
// Copy 8 bits from s to d and advance pointers
*d.p = *s.p++ >> s.b;
*d.p++ |= *s.p << (8 - s.b);
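In context, the byte-aligned middle phase might look like this (a sketch; nbits_left is a made-up counter, and the read of *s.p after the increment assumes at least one more source byte is available):
// here d.b == 0, i.e. the destination is byte-aligned
while (nbits_left >= 8)
{
    // Copy 8 bits from s to d and advance pointers
    *d.p = *s.p++ >> s.b;
    *d.p++ |= *s.p << (8 - s.b);
    nbits_left -= 8;
}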
P.P.S. Oh, and seeing your comment on what you are going to use the code for: you don't really need to implement all the versions (byte/halfword/word, big/little-endian); you only want the easiest one - the one working with words (uint32_t).
Here is a partial implementation (not tested). There are obvious efficiency and usability improvements.
Copy n bytes from src to dest (not overlapping src), shifting the bits at dest rightwards by bit bits, 0 <= bit <= 7. This assumes that the least significant bits are at the right of the bytes.
#include <string.h> /* for memcpy and size_t */

void memcpy_with_bitshift(unsigned char *dest, unsigned char *src, size_t n, int bit)
{
    size_t i;
    memcpy(dest, src, n);
    for (i = 0; i < n; i++) {
        dest[i] >>= bit; /* shift each byte; the dropped bits are restored below */
    }
    for (i = 0; i + 1 < n; i++) { /* i + 1 < n keeps the write inside dest */
        dest[i + 1] |= (src[i] << (8 - bit));
    }
}
Some improvements to be made (the first two are sketched after this list):
Don't overwrite the first bit bits at the beginning of dest.
Merge loops
Have a way to copy a number of bits not divisible by 8
Fix for >8 bits in a char
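For the first two bullets, a sketch of a merged single-pass version that leaves the leading bit bits of dest untouched (still assumes 8-bit chars and non-overlapping buffers; the name is made up):
#include <stddef.h>

void memcpy_with_bitshift2(unsigned char *dest, const unsigned char *src,
                           size_t n, int bit)
{
    size_t i;
    /* remember the leading 'bit' bits of dest[0] so they survive */
    unsigned char keep = dest[0] & (unsigned char)~(0xFFu >> bit);
    for (i = 0; i < n; i++) {
        /* src[i] shifted right, with the bits that fell out of
           src[i-1] arriving in the high positions */
        unsigned char b = (unsigned char)(src[i] >> bit);
        if (i > 0)
            b |= (unsigned char)(src[i - 1] << (8 - bit));
        dest[i] = b;
    }
    dest[0] |= keep;
}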

Reading characters on a bit level

I would like to be able to enter a character from the keyboard and display the binary code for said key in the format 00000001 for example.
Furthermore, I would also like to read the bits in a way that allows me to output whether they are true or false.
e.g.
01010101 = false,true,false,true,false,true,false,true
I would post an idea of how I have tried to do it myself, but I have absolutely no idea; I'm still experimenting with C and this is my first taste of programming at such a low level.
Thank you
For bit tweaking, it is often safer to use unsigned types, because shifts of signed negative values have an implementation-dependent effect. The plain char can be either signed or unsigned (traditionally, it is unsigned on Macintosh platforms, but signed on PC). Hence, first cast your character to the unsigned char type.
Then, your friends are the bitwise boolean operators (&, |, ^ and ~) and the shift operators (<< and >>). For instance, if your character is in variable x, then to get the 5th bit you simply use: ((x >> 5) & 1). The shift operator moves the value towards the right, dropping the five lower bits and moving the bit you are interested in to the "lowest position" (aka "rightmost"). The bitwise AND with 1 simply sets all other bits to 0, so the resulting value is either 0 or 1, which is your bit. Note here that I number bits from least significant (rightmost) to most significant (leftmost), and I begin with zero, not one.
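For example, with x = 'U' (0x55, bit pattern 01010101), ((x >> 5) & 1) evaluates to (0x02 & 1) = 0: bit 5 of 'U' is 0.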
If you assume that your characters are 8-bits, you could write your code as:
unsigned char x = (unsigned char)your_character;
int i;
for (i = 7; i >= 0; i--) {
    if (i != 7)
        printf(",");
    printf("%s", ((x >> i) & 1) ? "true" : "false");
}
You may note that since I number bits from right to left, but you want output from left to right, the loop index must be decreasing.
Note that according to the C standard, unsigned char has at least eight bits but may have more (nowadays, only a handful of embedded DSPs have characters which are not 8-bit). To be extra safe, add this near the beginning of your code (as a top-level declaration):
#include <limits.h>
#if CHAR_BIT != 8
#error I need 8-bit bytes!
#endif
This will prevent successful compilation if the target system happens to be one of those special embedded DSP. As a note on the note, the term "byte" in the C standard means "the elementary memory unit which correspond to an unsigned char", so that, in C-speak, a byte may have more than eight bits (a byte is not always an octet). This is a traditional source of confusion.
This is probably not the safest way - no sanity/size/type checks - but it should still work.
unsigned char myBools[8];
char myChar;
// get your character - this is not safe and you should
// use a better method to obtain input...
// cin >> myChar; <- C++
scanf("%c", &myChar);
// binary AND against each bit in the char and then
// cast the result. anything > 0 resolves to 'true'
// and == 0 to 'false'.
for (int i = 0; i < 8; ++i)
{
    myBools[i] = (((myChar & (1 << i)) > 0) ? 1 : 0);
}
This will give you an array of unsigned chars - either 0 or 1 (true or false) - for the character.
This code is C89:
/* we need this to use exit */
#include <stdlib.h>
/* we need this to use CHAR_BIT */
#include <limits.h>
/* we need this to use fgetc and printf */
#include <stdio.h>

int main() {
    /* Declare everything we need */
    int input, index;
    unsigned int mask;
    char inputchar;
    /* an array to store integers telling us the values of the individual bits.
       There are (almost) always 8 bits in a char, but it doesn't hurt to get into
       good habits early, and in C, the sizes of the basic types are different
       on different platforms. CHAR_BIT tells us the number of bits in a byte. */
    int bits[CHAR_BIT];

    /* the simplest way to read a single character is fgetc, but note that
       the user will probably have to press "return", since input is generally
       buffered */
    input = fgetc(stdin);
    printf("%d\n", input);

    /* Check for errors. In C, we must always check for errors */
    if (input == EOF) {
        printf("No character read\n");
        exit(1);
    }

    /* convert the value read from type int to type char. Not strictly needed,
       we can examine the bits of an int or a char, but here's how it's done. */
    inputchar = input;

    /* the most common way to examine individual bits in a value is to use a
       "mask" - in this case we have just 1 bit set, the most significant bit
       of a char. */
    mask = 1 << (CHAR_BIT - 1);

    /* this is a loop, index takes each value from 0 to CHAR_BIT-1 in turn,
       and we will read the bits from most significant to least significant. */
    for (index = 0; index < CHAR_BIT; ++index) {
        /* the bitwise-and operator & is how we use the mask.
           "inputchar & mask" will be 0 if the bit corresponding to the mask
           is 0, and non-zero if the bit is 1. ?: is the ternary conditional
           operator, and in C when you use an integer value in a boolean context,
           non-zero values are true. So we're converting any non-zero value to 1. */
        bits[index] = (inputchar & mask) ? 1 : 0;
        /* output what we've done */
        printf("index %d, value %u\n", index, inputchar & mask);
        /* we need a new mask for the next bit */
        mask = mask >> 1;
    }

    /* output each bit as 0 or 1 */
    for (index = 0; index < CHAR_BIT; ++index) {
        printf("%d", bits[index]);
    }
    printf("\n");

    /* output each bit as "true" or "false" */
    for (index = 0; index < CHAR_BIT; ++index) {
        printf(bits[index] ? "true" : "false");
        /* fiddly part - we want a comma between each bit, but not at the end */
        if (index != CHAR_BIT - 1) printf(",");
    }
    printf("\n");
    return 0;
}
You don't necessarily need three loops - you could combine them together if you wanted, and if you're only doing one of the two kinds of output, then you wouldn't need the array, you could just use each bit value as you mask it off. But I think this keeps things separate and hopefully easier to understand.
