I am reading some code that involves bitwise operations, as shown below:
unsigned char data = 0;
unsigned char status = 0;
//DAQmx functions for reading data
DAQmxReadDigitalLines(taskHandleIn,1,10.0,DAQmx_Val_GroupByChannel,dataIn,8,&read,&bytesPerSamp,NULL);
DAQmxReadDigitalLines(taskHandleOut,1,10.0,DAQmx_Val_GroupByChannel,dataOutRead,8,&read,&bytesPerSamp,NULL);
for (int i = 0; i < 8; i++)
{
if (dataOutRead[i] == 1)
data = data | (0x01 << i);
else
data = data & ~(0x01 << i);
}
for (int i = 0; i < 4; i++)
{
if (dataIn[i] == 1)
status = status | (0x01 << (7 - i));
else
status = status & ~(0x01 << (7 - i));
}
ctrl = 0;
In the above code, dataOutRead and dataIn are both uInt8 arrays of 8 elements, originally initialized to zero.
I don't quite understand what the code is actually doing. Can anyone walk me through it?
The key to understanding this code is the conditional with a bitwise operation inside:
if(dataOutRead[i]==1) {
data = data | (0x01 << i);
} else {
data = data & ~(0x01 << i);
}
It uses the bytes of dataOutRead as a sequence of ones and non-ones (presumably, but not necessarily, zeros). This sequence is "masked" into the bits of data, starting with the least significant one:
When dataOutRead[i] is 1, the corresponding bit is set
When dataOutRead[i] is not 1, the corresponding bit is cleared. This step is unnecessary, because data is zeroed out before entering the loop.
This could be thought of as converting a "byte-encoded-binary" (one byte per bit) into its corresponding binary number.
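For example, here is a minimal sketch, with made-up input values (not from the question), of what the first loop computes:
unsigned char example[8] = {1, 0, 1, 1, 0, 0, 0, 0}; /* hypothetical dataOutRead contents */
unsigned char packed = 0;
for (int i = 0; i < 8; i++)
    if (example[i] == 1)
        packed |= (unsigned char)(0x01 << i);
/* packed is now 0x0D (binary 00001101): element 0 -> bit 0, element 2 -> bit 2, element 3 -> bit 3 */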
The second loop does the same thing with reversed bits, processing only the lower four bits, and sticking them into the upper nibble of the data byte in reverse order.
It's hard to speculate on the purpose of this approach, but it could be useful in applications that use arrays of full-byte Booleans to control the state of some hardware register, e.g. in a microcontroller.
Well, the first loop is building an unsigned char that mirrors dataOutRead: it replicates whatever is in dataOutRead into data. For each i it checks whether dataOutRead[i] is 1 and, based on that, sets or clears bit i of data.
The second loop does the same, but it reads only the first 4 elements of dataIn and copies them into the most significant bits of status (bits 7 to 4), in reverse order. To clarify further:
status bit: 7 6 5 4
dataIn[i] : 0 1 2 3
So if dataIn[2] is set/clear, then bit 5 of status is set/cleared (bit 7 - i in general).
I'm trying to write a simple encoding program in C, and I definitely have something wrong with my bitwise operations, so I tried to write a simplified version to track down the mistake - so far it's still not working. I have an encode and a decode method where, given a "key", I encode one number by hiding its bits in a large array of unsigned ints.
I hide it by calling srand(key) (so that I can generate the same sequence of numbers afterward with the same key) to choose array elements, then taking one bit of the number at a time (iterating through all of them) and swapping the least significant bit of the chosen array element for that bit of the number.
In the decode method I try to reverse the steps: get all the bits back from the array elements and glue them together to recover the original number.
That's the code I have so far:
unsigned int * encode(unsigned int * original_array, char * message, unsigned int mSize, unsigned int secret) {//disregard message, that's the later part, for now just encoding mSize - size of message
int size = MAX; //amount of elementas in array, max defined at top
int i, j, tmp;
unsigned int *array;
srand(secret); //seed rand with the given key
array = (unsigned int *)malloc(MAX*sizeof(unsigned int));
//copy to array from im
for (i=0; i<MAX; i++){
array[i] = original_array[i];
}
//encode the message length first; it's an unsigned int, so its size is 4 bytes = 32 bits
for (i=0; i<32; i++){
tmp = rand() % size;
if (((mSize >> i) & 1)==1) //check if the bit is 1
array[tmp] = (1 << 0) | array[tmp]; // then write 1 as last bit
else //else bit is 0
array[tmp] = array[tmp] & (~(1 << 0)); //write 0 as last bit
}
return array;
}
unsigned int decode(unsigned int * im, unsigned int secret) {
char * message;
int i, tmp;
unsigned int result = 2;
int size = MAX;
srand(secret);
for (i=0; i<32; i++){
tmp = rand() % size;
if (((im[tmp] << 0) & 1)==1)
result = (1 >> i) | result;
else
result = result & (~(1 >> i));
}//last
return result;
}
However, running it and trying to print the decoded result gives me 2, which is the dummy value I assigned to result in decode() - therefore I know that at least my method of recovering the changed bits is clearly not working. Unfortunately, since decoding is not working, I have no idea whether encoding actually works, and I can't seem to pinpoint the mistake.
I'm trying to understand how the hiding of such bits works since, ultimately, I want to hide entire message in a slightly more complicated structure then array, but first I wanted to get it working on a simpler level since I have troubles working with bitwise operators.
Edit: Through some debugging I think the encoding function works correctly - or at least it does seem to change the array elements by one sometimes, which would indicate flipping one bit when the conditions are met.
Decoding doesn't seem to affect the result variable at all - it doesn't change throughout all the bitwise operations and I don't know why.
The main bit of the encode function is the following, which is the same as your original, just tidied up a little by removing the unnecessary 0 shifts and brackets:
//encode the message length first; it's an unsigned int, so its size is 4 bytes = 32 bits
for (i=0; i<32; i++){
tmp = rand() % size;
if (((mSize >> i) & 1)==1) //check if the bit is 1
array[tmp] |= 1; // then write 1 as last bit
else //else bit is 0
array[tmp] &= ~1; //write 0 as last bit
}
The problem you have is that when you set the last bit to either 1 or 0 you effectively lose information: there is no way of telling what the original last bit was, and so you will not be able to reverse it.
In short, the decode function will never recover the original array, because the encode function is not invertible.
EDIT
Following on from your comment, I would say the following about the decode function (again tidied up; this should be the same as the original):
unsigned int decode(unsigned int * im, unsigned int secret) {
char * message;
int i, tmp;
unsigned int result = 2;
int size = MAX;
srand(secret);
for (i=0; i<32; i++){
tmp = rand() % size;
if ((im[tmp] & 1)==1)
result |= 1 >> i;
else
result &= ~(1 >> i);
}//last
return result;
}
The thing to note here is that for all values of i > 0 the following will apply:
1 >> i
is the same as
0
This means that for the majority of your loop the code will be doing the following:
if ((im[tmp] & 1)==1)
result |= 0;
else
result &= ~0;
And since 2 | 0 = 2 and 2 & ~0 = 2, regardless of which branch of the if is executed the result will always be 2. The same would be true for any even starting value.
When i = 0 then the following is the case:
if ((im[tmp] & 1)==1)
result |= 1;
else
result &= ~1;
And so since 2 | 1 = 3 and 2 & ~1 = 2 your decode function will only ever return 2 or occasionally 3.
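For illustration only, here is a minimal sketch of that loop with the shift direction flipped (1 << i instead of 1 >> i) and result starting at 0; it simply reads back whatever bits encode last wrote into the LSBs, and note that rand() can pick the same index twice, in which case an earlier bit gets overwritten:
unsigned int decode(unsigned int * im, unsigned int secret) {
    int i, tmp;
    unsigned int result = 0; /* start from 0 rather than a dummy value */
    int size = MAX;
    srand(secret); /* the same seed gives the same index sequence as encode */
    for (i = 0; i < 32; i++){
        tmp = rand() % size;
        if ((im[tmp] & 1) == 1)
            result |= 1u << i; /* shift LEFT to place the recovered bit at position i */
        else
            result &= ~(1u << i); /* redundant when result starts at 0, kept for symmetry */
    }
    return result;
}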
My data is 12-bit and is stored in an array of 16-bit values. They're just values below 4096.
I need to output the 12-bit data in 8-bit chunks; top row are the 12 bits of the input values, bottom row is the 8 bits of the output values.
11|10|09|08|07|06|05|04|03|02|01|00|11|10|09|08|07|06|05|04|03|02|01|00
07|06|05|04|03|02|01|00|07|06|05|04|03|02|01|00|07|06|05|04|03|02|01|00
So for the output array:
The first byte contains the first 8 bits of the first 12-bit value.
The second byte contains the last 4 bits of the first 12-bit value and the first 4 bits of the second 12-bit value.
The third byte contains the last 8 bits of the second value.
And so on...
So ideally I want to turn an array of 12-bit numbers stored in a 16-bit array into an 8-bit array where the values are contiguous.
Technically it doesn't have to be out as an 8-bit array, I can output the 8-bit values through a function SPI.Transfer(byte) as I step through the 16-bit array.
So this works; maybe there is a more elegant way?
byteCount = 0;
for (int i = 0; i < numberOf16bitItems; i++) {
if (i % 2) {
//if odd then just grab the last 8 bits
toArray[byteCount] = fromArray[i] & 0b11111111;
byteCount++;
} else {
// if even get the first 8 bits and output, plus get the last 4 bits and the next bits of the next item
toArray[byteCount] = (fromArray[i] >> 4) & 0b11111111;
byteCount++;
toArray[byteCount] = ((fromArray[i] & 0b1111) << 4) | ((fromArray[i+1] >> 8) & 0b1111); // low 4 bits of this value go in the high nibble, top 4 bits of the next value in the low nibble
byteCount++;
}
}
You need to specify the endianness and what you mean by first and last: first counting from the MSBit or from the LSBit? One way or another there will be a lot of shifting, ORing and ANDing involved, and your resulting data will follow the source format's endianness and first/last point of view.
Assuming the first 8 bits start from the most significant bit:
unsigned int i, j = 0;
for(i = 0; i + 1 < Length_of_16_Bit_Array; i += 2)
{
    a8[j++] = a16[i] >> 4;                             // top 8 bits of the first 12-bit value
    a8[j++] = ((a16[i] & 0xF) << 4) | (a16[i+1] >> 8); // low 4 bits of the first value, top 4 bits of the second
    a8[j++] = a16[i+1] & 0xFF;                         // low 8 bits of the second value
}
// last control: if the initial length is an odd number, the final value only fills a byte and a half
if(Length_of_16_Bit_Array % 2 == 1)
{
    a8[j++] = a16[i] >> 4;       // top 8 bits of the last value
    a8[j] = (a16[i] & 0xF) << 4; // its last 4 bits, padded with zeros
}
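As a quick sanity check with made-up values: two 12-bit inputs 0xABC and 0xDEF should pack into the three bytes 0xAB, 0xCD, 0xEF, which matches the layout described in the question.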
I have a big char *str where the first 8 chars (which equal 64 bits, if I'm not wrong) represent a bitmap. Is there any way to iterate through these 8 chars and see which bits are 0? I'm having a lot of trouble understanding the concept of bits, as you can't "see" them in the code, so I can't think of any way to do this.
Imagine you have only one byte, a single char my_char. You can test for individual bits using bitwise operators and bit shifts.
unsigned char my_char = 0xAA;
int what_bit_i_am_testing = 0;
while (what_bit_i_am_testing < 8) {
if (my_char & 0x01) {
printf("bit %d is 1\n", what_bit_i_am_testing);
}
else {
printf("bit %d is 0\n", what_bit_i_am_testing);
}
what_bit_i_am_testing++;
my_char = my_char >> 1;
}
The part that may be new to you is the >> operator. This operator "inserts a zero on the left, pushes every bit one place to the right, and throws the rightmost bit away".
That is not a very technical description of a right bit shift by 1.
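For example, 0xAA is 10101010 in binary; after my_char = my_char >> 1 it becomes 01010101 (0x55), so the loop above ends up testing each original bit in turn, from least significant to most significant.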
Here is a way to iterate over each of the set bits of an unsigned integer (use unsigned rather than signed integers for well-defined behaviour; unsigned of any width should be fine), one bit at a time.
Define the following macros:
#define LSBIT(X) ((X) & (-(X)))
#define CLEARLSBIT(X) ((X) & ((X) - 1))
Then you can use the following idiom to iterate over the set bits, LSbit first:
unsigned temp_bits;
unsigned one_bit;
temp_bits = some_value;
for ( ; temp_bits; temp_bits = CLEARLSBIT(temp_bits) ) {
one_bit = LSBIT(temp_bits);
/* Do something with one_bit */
}
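For example, if some_value is 22 (binary 10110), the loop sees one_bit take the values 2, 4 and 16, in that order.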
I'm not sure whether this suits your needs. You said you want to check for 0 bits, rather than 1 bits — maybe you could bitwise-invert the initial value. Also for multi-byte values, you could put it in another for loop to process one byte/word at a time.
This assumes the bitmap is stored with bit 0 in the lowest bit of the first byte (a little-endian bit layout):
enum { cBitmapSize = 8 }; // a compile-time constant, so the array size below is not a VLA
const int cBitsCount = cBitmapSize * 8;
const unsigned char cBitmap[cBitmapSize] = { 0 /* some data */ };
for(int n = 0; n < cBitsCount; n++)
{
unsigned char Mask = 1 << (n % 8);
if(cBitmap[n / 8] & Mask)
{
// if n'th bit is 1...
}
}
In the C language, chars are (almost always) 8-bit bytes, and in general in computer science, data is organized around bytes as the fundamental unit.
In some cases, such as your problem, data is stored as boolean values in individual bits, so we need a way to determine whether a particular bit in a particular byte is on or off. There is already an SO solution for this explaining how to do bit manipulations in C.
To check a bit, the usual method is to AND it with the bit you want to check:
int isBitSet = bitmap & (1 << bit_position);
If the variable isBitSet is 0 after this operation, then the bit is not set. Any other value indicates that the bit is on.
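For example, with bitmap = 0xAA (binary 10101010), bitmap & (1 << 3) is nonzero, so bit 3 is set, while bitmap & (1 << 2) is 0, so bit 2 is clear.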
For a single char b you can simply iterate like this:
for (int i=0; i<8; i++) {
printf("This is the %d-th bit : %d\n",i,(b>>i)&1);
}
You can then iterate through the chars as needed.
What you should understand is that you cannot manipulate the bits directly; you can only use the arithmetic properties of numbers in base 2 to compute values that represent the bits you want to inspect.
How does it work, for example? A char holds 8 bits, so it can be seen as a number written with 8 digits in base 2. If the number in b is b7b6b5b4b3b2b1b0 (each being a digit), then b>>i is b shifted to the right by i positions (zeros are pushed in on the left). So 10110111 >> 2 is 00101101, and the operation &1 then isolates the last bit (the bitwise AND operator).
If you want to iterate through every char:
char *str = "MNO"; // M=01001101, N=01001110, O=01001111
int bit = 0;
for (int x = strlen(str)-1; x > -1; x--){ // Start from O, N, M
printf("Char %c \n", str[x]);
for(int y=0; y<8; y++){ // Iterate though every bit
// Shift the bits right by y steps and mask the last position
if( str[x]>>y & 0b00000001 ){
printf("bit %d = 1\n", bit);
}else{
printf("bit %d = 0\n", bit);
}
bit++;
}
}
output
Char O
bit 0 = 1
bit 1 = 1
bit 2 = 1
bit 3 = 1
bit 4 = 0
bit 5 = 0
bit 6 = 1
bit 7 = 0
Char N
bit 8 = 0
bit 9 = 1
bit 10 = 1
...
I need to decompress a binary file. Since the file is encoded in 14-bit units, I have to read 14 bits at a time instead of 8 to decode it. But as far as I know, getc() only gives me 8 bits at a time. Is there an efficient way to achieve this? Below is a block of code which can do the job, but it doesn't seem that efficient; how can I improve it?
unsigned int input_code(FILE *input)
{
unsigned int return_value;
static int input_bit_count=0;
static unsigned long input_bit_buffer=0L;
while (input_bit_count <= 24)
{
input_bit_buffer |=
(unsigned long) getc(input) << (24-input_bit_count);
input_bit_count += 8;
}
return_value=input_bit_buffer >> (32-BITS);
input_bit_buffer <<= BITS;
input_bit_count -= BITS;
return(return_value);
}
Generally speaking, you should avoid reading data in such small quantities because it's inefficient, although the buffering code inside the standard library and the OS will make up for that.
A better reason is that it can result in weird and unnatural code. Why not read 112 bits = 14 bytes at a time? That's a multiple of both 8 and 14, so you can treat the resulting buffer as 8 pieces of 14-bit data and things work out nicely.
But, if you absolutely must read as few bytes as possible at a time, read 16 bits, then eat (i.e. process) 14 of those, read another 16, combine them with the 2 you already read, eat 14, and repeat this process. For a hint on how you can do this sort of thing, check out base64 encoders/decoders.
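As a rough sketch of that idea (assuming MSB-first packing, a hypothetical next14() helper, and ignoring EOF handling for brevity):
#include <stdio.h>
unsigned int next14(FILE *in)
{
    static unsigned long buffer = 0; /* bits carried over from previous reads */
    static int count = 0;            /* how many unread bits are currently buffered */
    while (count < 14) {             /* top the buffer up one byte at a time */
        buffer = (buffer << 8) | (unsigned long) getc(in);
        count += 8;
    }
    count -= 14;
    return (unsigned int) ((buffer >> count) & 0x3FFF); /* hand back the oldest 14 buffered bits */
}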
An overhead of a couple of instructions per input/output char or int is most likely going to be negligible. Don't try optimizing this piece of code until and unless you identify a bottleneck here.
Further, if I were you, I'd check the value returned by getc(). It can return EOF instead of data.
Also, strictly speaking, char (or C's byte) has CHAR_BIT bits in it, which can be greater than 8.
You cannot read less than one byte at a time. However, you can use bitmasks and shift operations to set the last two bits to 0 (if you are storing 16), and carry the two unused bits you removed over to the next value. This will probably make the decoding operation a lot more complicated and expensive, though.
How about decoding the values 8 by 8 (you can read 14 chars = 112 bits = 8 * 14 bits)? I have NOT tested this code, and there are probably some typos in there. It does compile, but I don't have your file to test it:
#include <stdio.h>
int main(){
    FILE *file = fopen ("...", "rb"); // binary data, so open the file in binary mode
    // loop variable
    unsigned int i;
    // temporary buffer (unsigned so the shifts below don't sign-extend)
    unsigned char buffer[14];
    // your decoded ints
    int decoded[8];
    // use fread, not fgets: fgets stops at newlines and NUL-terminates, which mangles binary data
    while(fread(buffer, 1, 14, file) == 14) {
        int cursor = 0;
        // we do this loop only twice since the bit offset resets after 4 * 14 bits = 7 bytes
        for(i = 0; i <= 4; i += 4){
            // first decoded int is 16 bits
            decoded[i+0] = buffer[cursor] | (buffer[cursor+1] << 8);
            // second is 2 + 8 + 8 = 18 bits (offset = 2)
            decoded[i+1] = (decoded[i+0] >> 14) | (buffer[cursor+2] << 2) | (buffer[cursor+3] << 10);
            // third is 4 + 8 + 8 = 20 bits (offset = 4)
            decoded[i+2] = (decoded[i+1] >> 14) | (buffer[cursor+4] << 4) | (buffer[cursor+5] << 12);
            // next is 6 + 8 = 14 bits (offset = 6)
            decoded[i+3] = (decoded[i+2] >> 14) | (buffer[cursor+6] << 6);
            // explicit offsets instead of cursor++ inside one expression (that was undefined behaviour)
            cursor += 7;
        }
        // trim the numbers to 14 bits
        for(i = 0; i < 8; ++i)
            decoded[i] &= (1 << 14) - 1;
    }
    fclose(file);
    return 0;
}
Note that I don't do anything with the decoded ints and I write to the same array over and over again; this is just an illustration. You could factor the code further, but I unrolled the loops and commented the operations so that you can see how it works.
Current direction:
Start with an unsigned char, which is 1 byte on my system according to sizeof. Range is 0-255.
If length is the number of bits I need, then elements is the number of elements (bytes) I need in my array.
const unsigned int elements = length/8 + (length % 8 > 0 ? 1 : 0);
unsigned char bit_arr[elements];
Now I add basic functionality such as set, unset, and test, where j is the bit index within a byte, i is the byte index, and h is the overall bit index. We have i = h / 8 and j = h % 8.
Pseudo-code:
bit_arr[i] |= (1 << j); // Set
bit_arr[i] &= ~(1 << j); // Unset
if( bit_arr[i] & (1 << j) ) // Test
Looks like you have a very good idea of what needs to be done. Though instead of pow(2, j), use 1 << j. You also need to change your test code. You don't want the test to do an assignment to the array.
pow() will give you floating-point values, which you don't want. At all. It might work for you, as you use powers of two, but it can get weird as j gets bigger.
You'd do a bit better to use 1 << j instead. Removes any chance of float weirdness, and it probably performs better, too.
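A minimal sketch of those three operations wrapped in helper functions (the function names are just illustrative, not from your code):
#include <stddef.h>
/* h is the overall bit index; the byte is h / 8 and the bit within that byte is h % 8 */
void bit_set(unsigned char *arr, size_t h)   { arr[h / 8] |= (unsigned char)(1u << (h % 8)); }
void bit_unset(unsigned char *arr, size_t h) { arr[h / 8] &= (unsigned char)~(1u << (h % 8)); }
int  bit_test(const unsigned char *arr, size_t h) { return (arr[h / 8] >> (h % 8)) & 1; }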