I have a function that reads a word bit by bit and converts each bit to a symbol:
I need help changing it to read 2 bits at a time and convert each pair to a symbol.
I don't have an idea how to do it and I need your help, guys.
void PrintWeirdBits(word w, char* buf){
    word mask = 1 << (BITS_IN_WORD - 1);
    int i;
    for(i = 0; i < BITS_IN_WORD; i++){
        if(mask & w)
            buf[i] = '/';
        else
            buf[i] = '.';
        mask >>= 1;
    }
    buf[i] = '\0';
}
Needed symbols:
00 - *
01 - #
10 - %
11 - !
Here is my proposal for your issue.
Using a lookup table for the symbol decoding eliminates the need for if statements.
(I assumed word is an unsigned 16-bit data type.)
#define BITS_PER_SIGN 2
#define BITS_PER_SIGN_MSK 3 // decimal 3 is 0b11 in binary --> two bits set
// General define could be:
// ((1u << BITS_PER_SIGN) - 1)
#define INIT_MASK (BITS_PER_SIGN_MSK << (BITS_IN_WORD - BITS_PER_SIGN))
void PrintWeirdBits(word w , char* buf)
{
static const char signs[] = {'*', '#', '%', '!'};
unsigned mask = INIT_MASK;
int i;
int sign_idx;
for(i=0; i < BITS_IN_WORD / BITS_PER_SIGN; i++)
{
// the bits of the sign represent the index in the signs array
// just need to align these bits to start from bit 0
sign_idx = (w & mask) >> (BITS_IN_WORD - (i + 1)*BITS_PER_SIGN);
// store the decoded sign in the buffer
buf[i] = signs[sign_idx];
// update the mask for the next symbol
mask >>= BITS_PER_SIGN;
}
buf[i] = '\0';
}
Here it seems to be working.
With small effort it can be updated to generic code for any bit width of the symbol, as long as it is a power of two (1, 2, 4, 8) and smaller than BITS_IN_WORD.
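For illustration, a hedged sketch of that generalization (PrintWeirdBitsGeneric is a made-up name; it assumes the same word and BITS_IN_WORD as above, that bits_per_sign divides BITS_IN_WORD, and a signs table with 1 << bits_per_sign entries):
void PrintWeirdBitsGeneric(word w, char *buf, const char *signs, unsigned bits_per_sign)
{
    // sketch only: build a mask for the top group, then walk down the word
    unsigned mask = ((1u << bits_per_sign) - 1u) << (BITS_IN_WORD - bits_per_sign);
    int i;
    for (i = 0; i < (int)(BITS_IN_WORD / bits_per_sign); i++)
    {
        // the selected group, shifted down to start from bit 0, indexes the signs table
        buf[i] = signs[(w & mask) >> (BITS_IN_WORD - (i + 1) * bits_per_sign)];
        mask >>= bits_per_sign;
    }
    buf[i] = '\0';
}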
Assuming word is unsigned int or an unsigned integer type.
void PrintWeirdBits(word w, char* buf){
    word mask = 3 << (BITS_IN_WORD - 2);
    int i;
    word cmp;
    for(i = 0; i < BITS_IN_WORD/2; i++){
        cmp = (mask & w) >> (BITS_IN_WORD - 2 - 2*i);
        if(cmp == 0x00)
        {
            buf[i] = '*';
        }
        else if (cmp == 0x01)
        {
            buf[i] = '#';
        }
        else if (cmp == 0x02)
        {
            buf[i] = '%';
        }
        else
        {
            buf[i] = '!';
        }
        mask >>= 2;
    }
    buf[i] = '\0';
}
The important part is
cmp = (mask & w) >> (BITS_IN_WORD - 2 - 2*i);
Here mask and the input w are bitwise ANDed and the result is right shifted so that the selected pair lands in the lowest two bits. These two bits are then compared to pick the output symbol.
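For a quick sanity check, a minimal usage sketch (assuming word is a 16-bit unsigned type and BITS_IN_WORD is 16):
#include <stdio.h>
typedef unsigned short word;      // assumption: a 16-bit unsigned type
#define BITS_IN_WORD 16
void PrintWeirdBits(word w, char* buf);   // the 2-bit version from above
int main(void)
{
    char buf[BITS_IN_WORD / 2 + 1];
    PrintWeirdBits(0x1B63, buf);   // bit pairs: 00 01 10 11 01 10 00 11
    printf("%s\n", buf);           // prints *#%!#%*!
    return 0;
}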
I am reviewing for an exam and have a practice problem that I'm stuck on.
I need to write the function find_sequence(unsigned int num, unsigned int pattern) {}.
I have tried comparing num & (pattern << i) == (pattern << i) and other things like that, but it keeps reporting a match when there isn't one. I see why it does that, but I cannot fix it.
The num I'm using is unsigned int a = 82937 and I'm searching for pattern unsigned int b = 0x05.
Pattern: 00000000000000000000000000000101
Original bitmap: 00000000000000010100001111111001
The code so far:
int find_sequence(unsigned int num, unsigned int pattern)
{
for (int i=0; i<32; i++)
{
if ((num & (pattern << i)) == (pattern << i))
{
return i;
}
}
return -9999;
}
int
main()
{
unsigned int a = 82937;
unsigned int b = 0x05;
printf("Pattern: ");
printBits(b);
printf("\n");
printf("Original bitmap: ");
printBits(a);
printf("\n");
int test = find_sequence(a, b);
printf("%d\n", test);
return 0;
}
Here is what I have so far. This keeps returning 3, and I see why but I do not know how to avoid it.
for (int i=0; i<32; i++)
{
if ((num & (pattern << i)) == (pattern << i))
is bad:
- it works only when the pattern consists entirely of 1 bits
- at the end of the loop you generate pattern << 31, which is 0 when pattern is even; the condition then holds every time.
Knowing the length of the pattern would simplify the loop above; just go up to 32 - length. When it is not given by the API, the length can be calculated either with a clz() function or manually by looping over the bits.
Now, you can generate the mask as mask = (1u << length) - 1u (note: you have to handle the length == 32 case in a special way) and write
for (int i=0; i <= (32 - length); i++)
{
if ((num & (mask << i)) == (pattern << i))
or
for (int i=0; i <= (32 - length); i++)
{
if (((num >> i) & mask) == pattern)
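Putting the pieces together, a minimal sketch (the length is computed manually instead of with clz(), the loop bound goes up to and including 32 - length so the topmost position is also checked, and a non-zero pattern is assumed):
int find_sequence(unsigned int num, unsigned int pattern)
{
    unsigned int copy = pattern, length = 0, mask, i;
    while (copy) {              // count bits up to the most significant set bit
        length++;
        copy >>= 1;
    }
    mask = (length == 32) ? 0xFFFFFFFFu : (1u << length) - 1u;
    for (i = 0; i <= 32 - length; i++) {
        if (((num >> i) & mask) == pattern)
            return (int)i;      // pattern found at bit position i
    }
    return -1;                  // not found
}
For the question's a = 82937 and b = 0x05, this returns 14.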
((num & (pattern << i)) == (pattern << i)) won't give you the desired results.
Let's say your pattern is 0b101 and the value is 0b1111, then
  0101 pattern
  1111 value
& ----
  0101 result
Even though the value does not contain the pattern 0b101, the check would return true.
You've got to create a mask where all bits of the pattern (up to its most significant set bit) are 1 and the rest are 0. So for the pattern 0b101 the mask must be 0b111.
So first you need to calculate the position of the most significant bit of the pattern, then create
the mask and then you can apply (bitwise AND) the mask to the value. If the
result is the same as the pattern, then you've found your pattern:
int find_sequence(unsigned int num, unsigned int pattern)
{
unsigned int copy = pattern;
// checking edge cases
if(num == 0 && pattern == 0)
return 0;
if(num == 0)
return -1;
// calculating msb of pattern
int msb = -1;
while(copy)
{
msb++;
copy >>= 1;
}
printf("msb of pattern at pos: %d\n", msb);
// creating mask
unsigned int mask = (1U << (msb + 1)) - 1;
int pos = 0;
while(num)
{
if((num & mask) == pattern)
return pos;
num >>= 1;
pos++;
}
return -1;
}
Using this function I get the value 14, where your 0b101 pattern is found in
a.
In this case you could make a bitmask that zeroes out all the bit positions you aren't looking at, so in this case
Pattern: 00000000000000000000000000000101
Bitmask: 00000000000000000000000000000111
So in the case of the number you are looking at
Original: 00000000000000010100001111111001
If you AND that with this bitmask you end up with
Number after &: 00000000000000000000000000000001
Then compare the new number with your pattern to see if they are equal.
Then >> the original number
Original: 00000000000000010100001111111001
Right shifted: 00000000000000001010000111111100
And repeat the & and compare to check the next 3 bits in the sequence.
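A minimal sketch of that shift-and-compare loop, with the 3-bit mask 0b111 hard-coded as described (find_3bit_pattern is just an illustrative name):
int find_3bit_pattern(unsigned int num, unsigned int pattern)
{
    int pos;
    for (pos = 0; pos <= 32 - 3; pos++) {
        if ((num & 0x7u) == pattern)   // compare the low 3 bits against the pattern
            return pos;
        num >>= 1;                     // shift right and check the next position
    }
    return -1;                         // pattern not found
}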
This may be somewhat of an odd question, as well as my first one ever on this site, and a pretty complicated thing to ask. Basically, I have this decompressor for a very specific archived file. I barely understand it, but from what I can grasp it uses some sort of "bit mask": it reads the first 2 bytes out of the target file and stores them as a sequence.
The first for loop is where I get confused
Say, for argument's sake, mask is the 2 bytes 10 04, or 1040 (decimal); that's what it usually is in these files.
for (t = 0; t<16; t++) {
if (mask & (1 << (15 - t))) {
This seems to be looping through all 16 bits of those 2 bytes and running an AND operation against mask (1040) for every bit?
The if statement is what I don't understand completely. What's triggering the if? If the bit is greater than 0?
Because if mask is 1040, then really what were looking at is
if(1040 & 32768) index 15
if(1040 & 16384) index 14
if(1040 & 8192) index 13
if(1040 & 4096) index 12
if(1040 & 2048) index 11
if(1040 & 1024) index 10
if(1040 & 512) and so on.....
if(1040 & 256)
I just really need to know what's triggering this if statement. I think I might be overthinking it, but does it simply trigger if the current bit is greater than 0?
The only other thing I can do is compile this source myself, insert printfs on key variables, and go hand in hand with a hex editor to try and figure out what's actually going on here. If anyone could give me a hand, that would be awesome.
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
uint8_t dest[1024 * 1024 * 4]; // holds the actual data
int main(int argc, char *argv[]) {
FILE *fi, *fo;
char fname[255];
uint16_t mask, tmp, offset, length;
uint16_t seq;
uint32_t dptr, sptr;
uint16_t l, ct;
uint16_t t, s;
int test_len;
int t_length, t_off;
// Print Usage if filename is missing
if (argc<3) {
printf("sld_unpack - Decompressor for .sld files ()\nsld_unpack <filename.sld> <filename.t2>\n");
return(-1);
}
// Open .SLD-file
if (!(fi = fopen(argv[1], "rb"))) {
printf("Error opening %s\n", argv[1]);
return(-1);
}
dptr = 0;
fread((uint16_t*)&seq, 1, 2, fi); // read 1st 2 bytes in file
test_len = ftell(fi);
printf("[Main Header sequence: %d]\n 'offset' : %d \n", seq, test_len);
sptr = 0;
fread((uint16_t*)&seq, 1, 2, fi);
while (!feof(fi)) { // while not at the end of the file set mask equal to sequence (first 2 bytes of header)
mask = seq;
// loop through 16 bit mask
for (t = 0; t<16; t++) {
if (mask & (1 << (15 - t))) { // check all bit fields and run AND check to if value greater then 0?
test_len = ftell(fi);
fread((uint16_t*)&seq, 1, 2, fi); // read
sptr = sptr + 2; // set from 0 to 2
tmp = seq; // set tmp to sequence
offset = ((uint32_t)tmp & 0x07ff) * 2;
length = ((tmp >> 11) & 0x1f) * 2; // 32 - 1?
if (length>0) {
for (l = 0; l<length; l++) {
dest[dptr] = dest[dptr - offset];
dptr++;
}
}
else { // if length == 0
t_length = ftell(fi);
fread((uint16_t*)&seq, 1, 2, fi);
sptr = sptr + 2;
length = seq * 2;
for (s = 0; s<length; s++) {
dest[dptr] = dest[dptr - offset];
dptr++;
}
}
}
else { // if sequence AND returns 0 (or less)?
fread((uint16_t*)&seq, 1, 2, fi);
t_length = ftell(fi);
sptr = sptr + 2;
dest[dptr++] = seq & 0xff;
dest[dptr++] = (seq >> 8) & 0xff;
}
}
fread((uint16_t*)&seq, 1, 2, fi);
}
fclose(fi);
sprintf(fname, "%s\0", argv[2]);
if (!(fo = fopen(fname, "wb"))) { // if file
printf("Error creating %s\n", fname);
return(-1);
}
fwrite((uint8_t*)&dest, 1, dptr, fo);
fclose(fo);
printf("Done.\n");
return(0);
}
Be careful here.
for arguments sake mask is 2 bytes 10 04, or 1040(decimal)
That assumption may be nowhere close to true. You need to show how mask is defined, but generally a mask of bytes 10 (00001010) and 40 (00101000) is binary 0000101000101000, or decimal 2600, not quite 1040.
The general mask of 2600 decimal will match when bits 4, 6, 10 & 12 are set. Remember, a bit mask is nothing more than a number whose binary representation, when ANDed or ORed with a second number, produces some desired result. There is nothing magic about a bit mask; it's just a number with the right bits set for your intended purpose.
When you AND two numbers together and test, you are testing whether there are common bits set in both numbers. Using the for loop and shift, you are doing a bitwise test for which common bits are set. Using the mask of 2600 with the loop counter will test true when bits 4, 6, 10 & 12 are set, in other words when the test clause equals 8, 32, 512 or 2048.
The following is a short example of what is happening in the loop and if statements.
#include <stdio.h>
/* BUILD_64 */
#if defined(__LP64__) || defined(_LP64)
# define BUILD_64 1
#endif
/* BITS_PER_LONG */
#ifdef BUILD_64
# define BITS_PER_LONG 64
#else
# define BITS_PER_LONG 32
#endif
/* CHAR_BIT */
#ifndef CHAR_BIT
# define CHAR_BIT 8
#endif
char *binpad (unsigned long n, size_t sz);
int main (void) {
unsigned short t, mask;
mask = (10 << 8) | 40;
printf ("\n mask : %s (%hu)\n\n",
binpad (mask, sizeof mask * CHAR_BIT), mask);
for (t = 0; t<16; t++)
if (mask & (1 << (15 - t)))
printf (" t %2hu : %s (%hu)\n", t,
binpad (mask & (1 << (15 - t)), sizeof mask * CHAR_BIT),
mask & (1 << (15 - t)));
return 0;
}
/** returns pointer to binary representation of 'n' zero padded to 'sz'.
* returns pointer to string containing binary representation of
* unsigned 64-bit (or less ) value zero padded to 'sz' digits.
*/
char *binpad (unsigned long n, size_t sz)
{
static char s[BITS_PER_LONG + 1] = {0};
char *p = s + BITS_PER_LONG;
register size_t i;
for (i = 0; i < sz; i++)
*--p = (n>>i & 1) ? '1' : '0';
return p;
}
Output
$ ./bin/bitmask1040
mask : 0000101000101000 (2600)
t 4 : 0000100000000000 (2048)
t 6 : 0000001000000000 (512)
t 10 : 0000000000100000 (32)
t 12 : 0000000000001000 (8)
The if statement is what I don't understand completely? Whats triggering the if? If the bit is greater then 0? ... I just really need to know whats triggering this if statement? i think i might be over thinking it, but is it simply trigger if the current bit is greater then 0?
The C (and C++) if statement "triggers" when the conditional expression evaluates to true, which is any non-zero value; zero equates to false.
Classic C (before C99's _Bool) doesn't have a dedicated Boolean type; it just uses the convention that zero (0) is false and any other value is true.
if (mask & (1 << (15 - t))) {...}
is the same as
if ((mask & (1 << (15 - t))) != 0) {...}
The expression you gave is only true (non-zero) when the mask has a bit set in the same position that the 1 was shifted to, i.e. is the 15th bit in the mask set, and so on.
N.b.
mask & (1 << (15 - t))
will only ever have at most one bit set, so its value is either 0 or a single power of two (not necessarily 1).
As part of a larger problem, I have to take some binary value: 00000000 11011110 (8)
Then, I have to:
Derive the bit count in this function - so I've done that by finding the position of the most significant bit.
Then store the first 6 bits of this value into the value 128, such that it equals: 10011110
Then store the last 5 bits of this value into the value 192, such that it equals: 11000011 10011110
The two bytes should be stored in some array, buffer[]
I have written this function however, position does not appear to initialise properly in gdb and the values are not outputting correctly. This is my attempt:
void create_value(unsigned short init_val, unsigned char buffer[])
{
// get the count
int position = 0;
while (init_val >>= 1)
position++;
// get total
int count = position++;
int start = 128;
for (int i = 0; i < 7; i++)
if (((1 << i) & init_val) != 0) start = start | 1 << i;
buffer[0] = start;
start = 192;
for (int i = 7; i < 11; i++) {
if (((1 << i) & init_val) !=0) start = start | 1 << i;
}
buf[1] = start;
}
After
while (init_val >>= 1)
position++;
init_val will be 0. When you later use
if (((1 << i) & init_val) != 0) start = start | 1 << i;
you will never change start.
So, after reading through what you're trying to do (which is pretty confusingly described), why don't you:
void create_value(unsigned short init_value, unsigned char buffer[])
{
buffer[0] = (init_value & 63) | 128;
buffer[1] = ((init_value >> 6) & 31) | 192;
return;
}
What this does: init_value & 63 masks off all but the lowest 6 bits in init_value, as you wanted. The | 128 then sets the most significant bit of the byte (IFF CHAR_BIT == 8, mind you).
(init_value >> 6) shifts init_value down by 6 bits, so the original bits 6-10 are now bits 0-4. & 31 masks off all but the lowest 5 bits of this value, and | 192 sets the two most significant bits.
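As a quick check against the example in the question (0b11011110 is 0xDE), a small usage sketch assuming the create_value above:
#include <stdio.h>
void create_value(unsigned short init_value, unsigned char buffer[]);   // as defined above
int main(void)
{
    unsigned char buffer[2];
    create_value(0xDE, buffer);                   // 00000000 11011110
    printf("%02X %02X\n", buffer[1], buffer[0]);  // prints C3 9E, i.e. 11000011 10011110
    return 0;
}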
Given an array,
unsigned char q[32]="1100111...",
how can I generate a 4-byte bit-set, unsigned char p[4], such that each bit of this bit-set equals the corresponding value in the array, e.g. the first byte p[0] = "q[0] ... q[7]", the 2nd byte p[1] = "q[8] ... q[15]", etc.?
And also, how do I do it in the opposite direction, i.e. given the bit-set, generate the array?
My own attempt at the first part:
unsigned char p[4]={0};
for (int j=0; j<N; j++)
{
if (q[j] == '1')
{
p [j / 8] |= 1 << (7-(j % 8));
}
}
Is the above right? Are there any conditions to check? Is there a better way?
EDIT - 1
I wonder if the above is an efficient way, as the array size could be up to 4096 or even more.
First, use strtoul to get a 32-bit value. Then convert the byte order to big-endian with htonl. Finally, store the result in your array:
#include <arpa/inet.h>
#include <stdlib.h>
/* ... */
unsigned char q[32] = "1100111...";
unsigned char result[4] = {0};
*(unsigned long*)result = htonl(strtoul(q, NULL, 2));
There are other ways as well.
But I lack <arpa/inet.h>!
Then you need to know what byte order your platform is. If it's big endian, then htonl does nothing and can be omitted. If it's little-endian, then htonl is just:
unsigned long htonl(unsigned long x)
{
    x = ((x & 0xFF00FF00) >> 8)  | ((x & 0x00FF00FF) << 8);
    x = ((x & 0xFFFF0000) >> 16) | ((x & 0x0000FFFF) << 16);
return x;
}
If you're lucky, your optimizer might see what you're doing and make it into efficient code. If not, well, at least it's all implementable in registers and O(log N).
If you don't know what byte order your platform is, then you need to detect it:
typedef union {
char c[sizeof(int) / sizeof(char)];
int i;
} OrderTest;
unsigned long htonl(unsigned long x)
{
OrderTest test;
test.i = 1;
if(!test.c[0])
return x;
    x = ((x & 0xFF00FF00) >> 8)  | ((x & 0x00FF00FF) << 8);
    x = ((x & 0xFFFF0000) >> 16) | ((x & 0x0000FFFF) << 16);
return x;
}
Maybe long is 8 bytes!
Well, the OP implied 4-byte inputs with their array size, but 8-byte long is doable:
#include <limits.h> /* for ULONG_MAX; the preprocessor can't evaluate sizeof, so the width test below uses ULONG_MAX instead of kCharsPerLong */
#define kCharsPerLong (sizeof(long) / sizeof(char))
unsigned char q[8 * kCharsPerLong] = "1100111...";
unsigned char result[kCharsPerLong] = {0};
*(unsigned long*)result = htonl(strtoul(q, NULL, 2));
unsigned long htonl(unsigned long x)
{
#if ULONG_MAX == 0xFFFFFFFFUL            /* 4-byte long */
    x = ((x & 0xFF00FF00UL) >> 8)  | ((x & 0x00FF00FFUL) << 8);
    x = ((x & 0xFFFF0000UL) >> 16) | ((x & 0x0000FFFFUL) << 16);
#elif ULONG_MAX == 0xFFFFFFFFFFFFFFFFUL  /* 8-byte long */
    x = ((x & 0xFF00FF00FF00FF00UL) >> 8)  | ((x & 0x00FF00FF00FF00FFUL) << 8);
    x = ((x & 0xFFFF0000FFFF0000UL) >> 16) | ((x & 0x0000FFFF0000FFFFUL) << 16);
    x = ((x & 0xFFFFFFFF00000000UL) >> 32) | ((x & 0x00000000FFFFFFFFUL) << 32);
#else
#error Unsupported word size.
#endif
return x;
}
For char that isn't 8 bits (DSPs like to do this), you're on your own. (This is why it was a Big Deal when the SHARC series of DSPs had 8-bit bytes; it made it a LOT easier to port existing code because, face it, C does a horrible job of portability support.)
What about arbitrary length buffers? No funny pointer typecasts, please.
The main thing that can be improved in the OP's version is to rethink the loop's internals. Instead of thinking of the output bytes as a fixed data register, think of them as a shift register, where each successive bit is shifted into the right (LSB) end. This saves you from all those divisions and mods (which, hopefully, are optimized away to bit shifts).
For sanity, I'm ditching unsigned char for uint8_t.
#include <stdint.h>
unsigned StringToBits(const char* inChars, uint8_t* outBytes, size_t numBytes,
size_t* bytesRead)
/* Converts the string of '1' and '0' characters in `inChars` to a buffer of
* bytes in `outBytes`. `numBytes` is the number of available bytes in the
* `outBytes` buffer. On exit, if `bytesRead` is not NULL, the value it points
* to is set to the number of bytes read (rounding up to the nearest full
* byte). If a multiple of 8 bits is not read, the last byte written will be
* padded with 0 bits to reach a multiple of 8 bits. This function returns the
* number of padding bits that were added. For example, an input of 11 bits
* will result `bytesRead` being set to 2 and the function will return 5. This
* means that if a nonzero value is returned, then a partial byte was read,
* which may be an error.
*/
{ size_t bytes = 0;
unsigned bits = 0;
uint8_t x = 0;
while(bytes < numBytes)
{ /* Parse a character. */
switch(*inChars++)
{ case '0': x <<= 1; ++bits; break;
  case '1': x = (x << 1) | 1; ++bits; break;
  default: numBytes = 0;
}
/* See if we filled a byte. */
if(bits == 8)
{ outBytes[bytes++] = x;
x = 0;
bits = 0;
}
}
/* Padding, if needed. */
if(bits)
{ bits = 8 - bits;
outBytes[bytes++] = x << bits;
}
/* Finish up. */
if(bytesRead)
*bytesRead = bytes;
return bits;
}
It's your responsibility to make sure inChars is null-terminated. The function will return on the first non-'0' or '1' character it sees or if it runs out of output buffer. Some example usage:
unsigned char q[32] = "1100111...";
uint8_t buf[4];
size_t bytesRead = 5;
if(StringToBits(q, buf, 4, &bytesRead) || bytesRead != 4)
{
/* Partial read; handle error here. */
}
This just reads 4 bytes, and traps the error if it can't.
unsigned char q[4096] = "1100111...";
uint8_t buf[512];
StringToBits(q, buf, 512, NULL);
This just converts what it can and sets the rest to 0 bits.
This function could be done better if C had the ability to break out of more than one level of loop or switch; as it stands, I'd have to add a flag value to get the same effect, which is clutter, or I'd have to add a goto, which I simply refuse.
I don't think that will quite work. You are comparing each "bit" to 1 when it should really be '1'. You can also make it a bit more efficient by getting rid of the if:
unsigned char p[4]={0};
for (int j=0; j<32; j++)
{
p[j / 8] |= (q[j] == '1') << (7-(j % 8));
}
Going in reverse is pretty simple too. Just mask for each "bit" that you set earlier.
unsigned char q[32]={0};
for (int j=0; j<32; j++) {
q[j] = ((p[j / 8] & ( 1 << (7-(j % 8)) )) != 0) + '0';
}
You'll notice the creative use of (boolean) + '0' to convert between 1/0 and '1'/'0'.
According to your example it does not look like you are going for readability, and after a (late) refresh my solution looks very similar to Chriszuma's, except for the lack of parentheses due to order of operations and the addition of !! to enforce a 0 or 1.
const size_t N = 32; //N must be a multiple of 8
unsigned char q[N+1] = "11011101001001101001111110000111";
unsigned char p[N/8] = {0};
unsigned char r[N+1] = {0}; //reversed
for(size_t i = 0; i < N; ++i)
p[i / 8] |= (q[i] == '1') << 7 - i % 8;
for(size_t i = 0; i < N; ++i)
r[i] = '0' + !!(p[i / 8] & 1 << 7 - i % 8);
printf("%x %x %x %x\n", p[0], p[1], p[2], p[3]);
printf("%s\n%s\n", q,r);
If you are looking for extreme efficiency, try to use the following techniques:
Replace the if by a subtraction of '0' (it seems you can assume your input symbols are only '0' or '1').
Also, process the input from lower indices to higher ones.
for (int c = 0; c < N; c += 8)
{
int y = 0;
for (int b = 0; b < 8; ++b)
y = y * 2 + q[c + b] - '0';
p[c / 8] = y;
}
Replace array indices by auto-incrementing pointers:
const char* qptr = q;
unsigned char* pptr = p;
for (int c = 0; c < N; c += 8)
{
int y = 0;
for (int b = 0; b < 8; ++b)
y = y * 2 + *qptr++ - '0';
*pptr++ = y;
}
Unroll the inner loop:
const char* qptr = q;
unsigned char* pptr = p;
for (int c = 0; c < N; c += 8)
{
*pptr++ =
qptr[0] - '0' << 7 |
qptr[1] - '0' << 6 |
qptr[2] - '0' << 5 |
qptr[3] - '0' << 4 |
qptr[4] - '0' << 3 |
qptr[5] - '0' << 2 |
qptr[6] - '0' << 1 |
qptr[7] - '0' << 0;
qptr += 8;
}
Process several input characters simultaneously (using bit twiddling hacks or MMX instructions) - this has great speedup potential!
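For example, on a little-endian host with CHAR_BIT == 8, eight '0'/'1' characters can be packed with a single multiply using a well-known bit-gathering trick (a sketch only; pack8_swar is a made-up name):
#include <stdint.h>
#include <string.h>
// Packs 8 ASCII '0'/'1' characters into one byte, src[0] becoming the MSB.
// Assumes a little-endian host and CHAR_BIT == 8.
static uint8_t pack8_swar(const char *src)
{
    uint64_t v;
    memcpy(&v, src, 8);                   // load 8 digit characters at once
    v &= 0x0101010101010101ULL;           // keep the low bit of each byte: '1' -> 1, '0' -> 0
    return (uint8_t)((v * 0x8040201008040201ULL) >> 56);   // gather the 8 bits into the top byte
}
Calling it on q in steps of 8 (p[k] = pack8_swar((const char*)q + 8*k)) fills p with the same values as the loops above.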
I'm trying to simply convert a byte received from fget into binary.
I know the value of the first byte was 49 based on printing the value. I now need to convert this into its binary value.
unsigned char byte = 49;// Read from file
unsigned char mask = 1; // Bit mask
unsigned char bits[8];
// Extract the bits
for (int i = 0; i < 8; i++) {
// Mask each bit in the byte and store it
bits[i] = byte & (mask << i);
}
// For debug purposes, lets print the received data
for (int i = 0; i < 8; i++) {
printf("Bit: %d\n",bits[i]);
}
This will print:
Bit: 1
Bit: 0
Bit: 0
Bit: 0
Bit: 16
Bit: 32
Bit: 0
Bit: 0
Press any key to continue . . .
Clearly, this is not a binary value. Any help?
The problem you're having is that your assignment isn't resulting in a true or false value.
bits[i] = byte & (mask << i);
This gets the value of the bit. You need to see if the bit is on or off, like this:
bits[i] = (byte & (mask << i)) != 0;
Change
bits[i] = byte & (mask << i);
to
bits[i] = (byte >> i) & mask;
or
bits[i] = (byte >> i) & 1;
or
bits[i] = byte & 1;
byte >>= 1;
One way, among many:
#include <stdio.h>
#include <limits.h>
int main(void) {
int i;
char bits[CHAR_BIT + 1];
unsigned char value = 47;
for (i = CHAR_BIT - 1; i >= 0; i -= 1) {
bits[i] = '0' + (value & 0x01);
value >>= 1;
}
bits[CHAR_BIT] = 0;
puts(bits);
return 0;
}
You may notice that your output has a couple of 1s and 0s, but also powers of 2, such as 32. This is because after you isolate the bit you want using the mask, you still have to shift it down into the least significant position so that it shows up as a 1. Or you could use what other posts suggested: instead of bit-shifting the result (something like 00001000, for example), simply use (result != 0) to get either a 1 or 0, since in C false is 0 and comparisons such as != return 1 for true.
#include <stdio.h>
#include <limits.h>
int main(void) {
    unsigned char byte = 49; // Read from file
    unsigned char mask = 1;  // Bit mask
    unsigned char bits[8];
    int i, j = CHAR_BIT - 1;
    // Extract the bits, most significant first
    for (i = 0; i < 8; i++, j--, mask = 1) {
        // Mask each bit in the byte and store it as 0 or 1
        bits[i] = (byte & (mask <<= j)) != 0;
    }
    // For debug purposes, let's print the received data
    for (int i = 0; i < 8; i++) {
        printf("%d", bits[i]);
    }
    puts("");
    return 0;
}
This, used in place of that single assignment, will also work:
bits[i]= byte & (mask << i);
bits[i] >>=i;