C not calculating bitwise operations properly

I have a list of 16-bit blocks (unsigned shorts) which I receive from another part of the program and want to write to a binary file. I am trying to split these into 8-bit characters, so I devised this method:
int blocks = 1;
unsigned short stuffToWrite[blocks];
stuffToWrite[0] = 0b0100101011110111;
int charsPerBlock = sizeof(unsigned short) / sizeof(char);
char charsToWrite[blocks * charsPerBlock];
for (int i = 0; i < blocks; i++) {
    for (int j = 0; j < charsPerBlock; j++) {
        charsToWrite[i*charsPerBlock+j] = (stuffToWrite[i] & ((((int) pow(2, sizeof(char)*8)-1) << ((charsPerBlock-1)*sizeof(char)*8)) >> (j*sizeof(char)*8))) >> ((charsPerBlock-j-1)*sizeof(char)*8);
        // Testing line //
        printf("%u = (%u & %u) >> %lu\n", charsToWrite[i*charsPerBlock+j], stuffToWrite[i], (((int) pow(2, sizeof(char)*8)-1) << ((charsPerBlock-1)*sizeof(char)*8)) >> j*sizeof(char)*8, (charsPerBlock-j-1)*sizeof(char)*8);
    }
}
This looks quite complicated but it's not really. I am just creating an 8-bit mask ((int) pow(2, sizeof(char)*8) - 1), shifting it to the highest bits (<< ((charsPerBlock-1)*sizeof(char)*8)), then shifting it back into the right position (>> (j*sizeof(char)*8)). I then AND this with the original block and shift the result down to the lowest bits to become a character (>> ((charsPerBlock-j-1)*sizeof(char)*8)).
This works fine for the first character, giving me 74, or 01001010. However, the second character gives me a very large incorrect number. I am not sure what is going wrong here, as I am pretty sure my method is sound, so I wrote up a test line to break up the steps, which yielded the following:
74 = (19191 & 65280) >> 8
4294967287 = (19191 & 255) >> 0
Which is obviously wrong.
I would appreciate any help identifying the error and how to fix it (also if there is a better way to do this).

"This looks quite complicated but it's not really."
It is. If you are just looking to serialize data into bytes, then all you need is this:
#include <stdint.h>

enum { blocks = 1 };  /* a constant expression, so the arrays below can be initialized */
uint16_t stuffToWrite[blocks] = { 0b0100101011110111 };
unsigned char charsToWrite[blocks * sizeof(uint16_t)] =
{
    (stuffToWrite[0] >> 8) & 0xFFu,
    (stuffToWrite[0]     ) & 0xFFu,
};
This works regardless of endianness, but it assumes that the output data has the most significant byte first.
Similarly, for a 32-bit number you'd shift by >> 24, >> 16, >> 8 and >> 0.
Also please note that when doing bitwise arithmetic, never use the following types: signed types, char, or floating point.
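If the number of blocks isn't known at compile time, the same idea generalizes to a simple loop. Here is a minimal sketch (the function name serialize_u16 and the big-endian output order are my choices, not something fixed by the question):
#include <stddef.h>
#include <stdint.h>

/* Serialize an array of 16-bit blocks into bytes, most significant byte first. */
void serialize_u16(const uint16_t *blocks, size_t count, unsigned char *out)
{
    for (size_t i = 0; i < count; i++)
    {
        out[2*i]     = (blocks[i] >> 8) & 0xFFu;  /* high byte */
        out[2*i + 1] =  blocks[i]       & 0xFFu;  /* low byte  */
    }
}
The resulting unsigned char buffer can then be written out in one call, e.g. fwrite(out, 1, 2 * count, file).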

Related

How to take two nibbles from a byte of data that are chars into two bytes stored in another variable in order to unmask

I have two uint16_ts that I've gotten from the nibbles of my data, as seen in the code below. I need to put them into mask so that each of the nibbles is now in its own byte. The code I have so far is below; I can't for the life of me figure it out. I need to do this because it will create the mask I will use to unmask mildly encrypted data.
uint16_t length = *(offset+start) - 3;
uint16_t mask = length;
if (Fmt == ENCRYPTED) {
    char frstDig = (length & 0x000F);
    char scndDig = (length & 0x00F0) >> 4;
    mask =
Shift one of the digits by 8 bits, and OR them together.
mask = (scndDig << 8) | frstDig;
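Putting it together as a small self-contained example (the test value 0xAB and the use of uint16_t for the intermediate digits are my choices, just for illustration):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t length = 0xAB;                     /* example: high nibble 0xA, low nibble 0xB */

    uint16_t frstDig = length & 0x000F;         /* low nibble  -> 0x000B */
    uint16_t scndDig = (length & 0x00F0) >> 4;  /* high nibble -> 0x000A */

    uint16_t mask = (scndDig << 8) | frstDig;   /* each nibble now occupies its own byte */

    printf("mask = 0x%04X\n", (unsigned)mask);  /* prints: mask = 0x0A0B */
    return 0;
}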

Encoding number through editing last bit of array elements

I'm trying to write a simple encoding program in C and I definitely have something wrong with my bitwise operations, so I tried to write a simplified version to find the mistake - so far it's still not working. I have an encoding and a decoding method where, given a "key", I encode one number by hiding its bits in a large array of unsigned ints.
I hide it by using srand(key) (so that I can generate the same numbers afterward with the same key) to choose array elements, then taking one bit of the number at a time (iterating through all of them) and swapping the least significant bit of the chosen array element for that bit of the number.
In the decode method I try to reverse the steps: get all the bits from the array elements back and glue them together to recover the original number.
That's the code I have so far:
unsigned int * encode(unsigned int * original_array, char * message, unsigned int mSize, unsigned int secret) { //disregard message, that's the later part; for now just encoding mSize - size of message
    int size = MAX; //amount of elements in array, MAX defined at top
    int i, j, tmp;
    unsigned int *array;
    srand(secret); //seed rand with the given key
    array = (unsigned int *)malloc(MAX*sizeof(unsigned int));
    //copy to array from im
    for (i=0; i<MAX; i++){
        array[i] = original_array[i];
    }
    //encode message length first. it's an unsigned int, therefore its size is 4 bytes - 32 bits.
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if (((mSize >> i) & 1)==1)                 //check if the bit is 1
            array[tmp] = (1 << 0) | array[tmp];    // then write 1 as last bit
        else                                       //else bit is 0
            array[tmp] = array[tmp] & (~(1 << 0)); //write 0 as last bit
    }
    return array;
}
unsigned int decode(unsigned int * im, unsigned int secret) {
    char * message;
    int i, tmp;
    unsigned int result = 2;
    int size = MAX;
    srand(secret);
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if (((im[tmp] << 0) & 1)==1)
            result = (1 >> i) | result;
        else
            result = result & (~(1 >> i));
    } //last
    return result;
}
However, running it and trying to print the decoded result gives me 2, which is the dummy value I gave to result in decode() - therefore I know that at least my method of recovering the changed bits is clearly not working. Unfortunately, since decoding is not working, I have no idea if encoding actually works, and I can't seem to pinpoint the mistake.
I'm trying to understand how the hiding of such bits works since, ultimately, I want to hide an entire message in a slightly more complicated structure than an array, but first I wanted to get it working on a simpler level since I have trouble working with bitwise operators.
Edit: Through some debugging I think the encoding function works correctly - or at least it does seem to change array elements by one sometimes, which would indicate flipping one bit when the conditions are met.
Decoding doesn't seem to affect the result variable at all - it doesn't change throughout all the bitwise operations and I don't know why.
The main part of the encode function is the following, which is the same as your original, just tidied up a little by removing the unnecessary 0 shifts and brackets:
//encode message length first. it's an unsigned int, therefore its size is 4 bytes - 32 bits.
for (i=0; i<32; i++){
    tmp = rand() % size;
    if (((mSize >> i) & 1)==1) //check if the bit is 1
        array[tmp] |= 1;       // then write 1 as last bit
    else                       //else bit is 0
        array[tmp] &= ~1;      //write 0 as last bit
}
The problem you have is that when you set the last bit to either 1 or 0, you effectively lose information: there is no way of telling what the original last bit was, so the change cannot be undone. In short, the encode function is not invertible - the original array can never be recovered from the encoded one.
EDIT
Following on from your comment, I would say the following about the decode function (again tidied up; this should behave the same as the original):
unsigned int decode(unsigned int * im, unsigned int secret) {
    char * message;
    int i, tmp;
    unsigned int result = 2;
    int size = MAX;
    srand(secret);
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if ((im[tmp] & 1)==1)
            result |= 1 >> i;
        else
            result &= ~(1 >> i);
    } //last
    return result;
}
The thing to note here is that for all values of i > 0 the following will apply:
1 >> i
is the same as
0
This means that for the majority of your loop the code will be doing the following:
if ((im[tmp] & 1)==1)
    result |= 0;
else
    result &= ~0;
And since 2 = 2 | 0 and 2 = 2 & ~0, regardless of which branch of the if is executed the result will always be 2. The same would be true for any even number.
When i = 0, the following is the case:
if ((im[tmp] & 1)==1)
    result |= 1;
else
    result &= ~1;
And so, since 2 | 1 = 3 and 2 & ~1 = 2, your decode function will only ever return 2 or occasionally 3.
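The fix this analysis points to is to shift left rather than right when rebuilding the number, and to start from result = 0 so the dummy value doesn't leak into the output. A minimal sketch of a corrected decode, assuming the same MAX and seeding scheme as above (note that if rand() happens to pick the same array element for two different bits, the earlier bit is overwritten during encoding and cannot be recovered - that is a limitation of the scheme itself, not of this loop):
unsigned int decode(unsigned int * im, unsigned int secret) {
    int i, tmp;
    unsigned int result = 0;    /* start from 0, not a dummy value */
    int size = MAX;

    srand(secret);              /* same seed as encode, so rand() picks the same elements */

    for (i = 0; i < 32; i++) {
        tmp = rand() % size;
        if ((im[tmp] & 1) == 1)
            result |= 1u << i;  /* put the recovered bit back at position i */
    }
    return result;
}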

CRC-15 giving wrong values

I am trying to create a CRC-15 check in C and the output is never correct for each line of the file. I am trying to output the CRC for each line cumulatively next to each line. I use #define POLYNOMIAL 0xA053 for the divisor and the text for the dividend. I need to represent numbers as 32-bit unsigned integers. I have tried printing out the hex values to keep track and flipping different shifts around, but I just can't seem to figure it out! I have a feeling it has something to do with the way I am padding things. Is there a flaw in my logic?
The CRC is to be represented as four hexadecimal digits, and that sequence will have four leading 0's. For example, it will look like 0000xxxx where the x's are the hexadecimal digits. The polynomial I use is 0xA053.
I thought about using a temp variable and doing 4 16-bit chunks per line for every XOR; however, I'm not quite sure how I could use shifts to accomplish this, so I settled for a checksum of the letters on the line and then XORing that to try to calculate the CRC code.
I am testing my code using the following input, padding with '.' until the string is of length 504, because that is the pad character required by the given requirements:
"This is the lesson: never give in, never give in, never, never, never, never - in nothing, great or small, large or petty - never give in except to convictions of honor and good sense. Never yield to force; never yield to the apparently overwhelming might of the enemy."
The CRC of the first 64-character line ("This is the lesson: never give in, never give in, never, never,") is supposed to be 000015fa, but I am getting bfe6ec00.
My logic:
In crcCalculation I add each character to a 32-bit unsigned integer, and after 64 characters (the length of one line) I send it into the XOR function.
If the top bit is not 1, I shift the number to the left by one, causing 0s to pad the right, and loop around again.
If the top bit is 1, I XOR the dividend with the divisor and then shift the dividend to the left by one.
After all calculations are done, I return the dividend shifted to the left by four (to add four zeros to the front) to the calculation function.
Add the result to the running total of the result.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <ctype.h>

#define POLYNOMIAL 0xA053

void crcCalculation(char *text, int length)
{
    int i;
    uint32_t dividend = atoi(text);
    uint32_t result;
    uint32_t sumText = 0;

    // Calculate CRC
    printf("\nCRC 15 calculation progress:\n");
    i = length;

    // padding
    if (i < 504)
    {
        for (; i != 504; i++)
        {
            // printf("i is %d\n", i);
            text[i] = '.';
        }
    }

    // Try calculating the CRC of the first line by summing the values, then calculating, then adding in the next line
    for (i = 0; i < 504; i++)
    {
        if (i % 64 == 0 && i != 0)
        {
            result = XOR(POLYNOMIAL, sumText);
            printf(" - %x\n", result);
        }
        sumText += (uint32_t)text[i];
        printf("%c", text[i]);
    }
    printf("\n\nCRC15 result : %x\n", result);
}

uint32_t XOR(uint32_t divisor, uint32_t dividend)
{
    uint32_t divRemainder = dividend;
    uint32_t currentBit;

    // Note: 4 16 bit chunks
    for (currentBit = 32; currentBit > 0; --currentBit)
    {
        // if topbit is 1
        if (divRemainder & 0x80)
        {
            //divRemainder = (divRemainder << 1) ^ divisor;
            divRemainder ^= divisor;
            printf("%x %x\n", divRemainder, divisor);
        }
        // else
        //     divisor = divisor >> 1;
        divRemainder = (divRemainder << 1);
    }
    //return divRemainder; , have tried shifting to right and left, want to add 4 zeros to front so >>
    //return divRemainder >> 4;
    return divRemainder >> 4;
}
The first issue I see is the top bit check; it should be:
if(divRemainder & 0x8000)
The question doesn't state if the CRC is bit reflected (xor data into low order bits of CRC, right shift for cycle) or not (xor data into high order bits of CRC, left shift for cycle), so I can't offer help for the rest of the code.
The question doesn't state the initial value of CRC (0x0000 or 0x7fff), or if the CRC is post complemented.
The logic for a conventional CRC is:
xor a byte of data into the CRC (upper or lower bits)
cycle the CRC 8 times (or do a table lookup)
After generating the CRC for an entire message, the CRC can be appended to the message. If a CRC is generated for a message with the appended CRC and there are no errors, the CRC will be zero (or a constant value if the CRC is post complemented).
Here is a typical CRC-16, extracted from: <www8.cs.umu.se/~isak/snippets/crc-16.c>
#define POLY 0x8408
/*
// this is the CCITT CRC 16 polynomial X^16 + X^12 + X^5 + 1.
// This works out to be 0x1021, but the way the algorithm works
// lets us use 0x8408 (the reverse of the bit pattern). The high
// bit is always assumed to be set, thus we only use 16 bits to
// represent the 17 bit value.
*/
unsigned short crc16(char *data_p, unsigned short length)
{
    unsigned char i;
    unsigned int data;
    unsigned int crc = 0xffff;

    if (length == 0)
        return (~crc);

    do
    {
        for (i = 0, data = (unsigned int)0xff & *data_p++;
             i < 8;
             i++, data >>= 1)
        {
            if ((crc & 0x0001) ^ (data & 0x0001))
                crc = (crc >> 1) ^ POLY;
            else
                crc >>= 1;
        }
    } while (--length);

    crc = ~crc;
    data = crc;
    crc = (crc << 8) | (data >> 8 & 0xff);

    return (crc);
}
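For contrast with the reflected code above, here is a minimal sketch of the conventional (non-reflected, MSB-first) form described earlier. The parameters here - polynomial 0x1021, initial value 0xFFFF, no final complement - are the common CRC-16/CCITT-FALSE choices, used purely as an example; they are not the parameters of the OP's assignment:
#include <stddef.h>
#include <stdint.h>

/* Non-reflected CRC-16: each byte is xored into the high bits and the register shifts left. */
uint16_t crc16_msb_first(const unsigned char *data, size_t length)
{
    uint16_t crc = 0xFFFF;                      /* example initial value */

    for (size_t i = 0; i < length; i++)
    {
        crc ^= (uint16_t)(data[i] << 8);        /* xor the byte into the upper 8 bits */
        for (int bit = 0; bit < 8; bit++)       /* cycle the CRC 8 times */
        {
            if (crc & 0x8000)                   /* top bit set? shift and xor the polynomial */
                crc = (uint16_t)((crc << 1) ^ 0x1021);
            else
                crc = (uint16_t)(crc << 1);
        }
    }
    return crc;                                 /* no final complement in this variant */
}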
Since you want to calculate a CRC-15 rather than a CRC-16, the logic will be more complex, as you cannot work with whole bytes, so there will be a lot of bit shifting and ANDing to extract the desired 15 bits.
Note: the OP did not mention if the initial value of the CRC is 0x0000 or 0x7FFF, nor if the result is to be complemented, nor certain other criteria, so this posted code can only be a guide.

Turn a byte into an array of bits in C?

I want to read a binary file one byte at a time and then store the bits of that byte into an integer array. Similarly, I want to write an integer array of 1s and 0s (8 of them) back into a binary file as bytes.
If you have an array of bytes:
unsigned char bytes[10];
And want to change it into an array of bits:
unsigned char bits[80];
And assuming you have 8 bits per byte, try this:
int i;
for (i = 0; i < sizeof(bytes) * 8; i++) {
    bits[i] = ((1 << (i % 8)) & (bytes[i/8])) >> (i % 8);
}
In this loop, i loops through the total number of bits. The byte that a given bit lives in is i/8, which, being integer division, rounds down. The position of the bit within that byte is i%8.
First we create a mask for the desired bit:
1 << (i % 8)
Then the desired byte:
bytes[i/8]
Then we perform a bitwise AND to clear all bits except the one we want.
(1 << (i % 8)) & (bytes[i/8])
Then we shift the result right by the bit position to put the desired bit at the least significant bit. This gives us a value of 1 or 0.
Note also that the arrays in question are unsigned. That is required for the bit shifting to work properly.
To switch back:
int i;
memset(bytes, 0, sizeof(bytes));
for (i = 0; i < sizeof(bytes) * 8; i++) {
    bytes[i/8] |= bits[i] << (i % 8);
}
We start by clearing out the byte array, since we'll be setting each byte one bit at a time.
Then we take the bit in question:
bits[i]
Shift it into its position:
bits[i] << (i % 8)
Then we use a bitwise OR to set the appropriate bit in that byte.
A simple C program to do the job on a byte array 'input' of size 'sz' would be:
int i=0,j=0;
unsigned char mask = 0x01u;
for (i = 0; i < sz; i++)
    for (j = 0; j < 8; j++)
        output[8*i + j] = ((unsigned char)input[i] >> j) & (unsigned char)(mask);
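Since the question also asks about reading the bytes from a binary file, here is a minimal self-contained sketch; the file name "data.bin" and the MSB-first bit order are my own choices for illustration:
#include <stdio.h>

int main(void)
{
    FILE *in = fopen("data.bin", "rb");   /* hypothetical input file */
    if (in == NULL)
        return 1;

    int c;
    while ((c = fgetc(in)) != EOF)        /* read one byte at a time */
    {
        int bits[8];
        for (int i = 0; i < 8; i++)
            bits[i] = (c >> (7 - i)) & 1; /* bits[0] = most significant bit */

        for (int i = 0; i < 8; i++)       /* print the bits of this byte */
            printf("%d", bits[i]);
        printf("\n");
    }

    fclose(in);
    return 0;
}
Writing the array back out is the reverse: pack the eight ints into one unsigned char (as in the loops above) and write it with fputc or fwrite.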

Iterate through bits in C

I have a big char *str where the first 8 chars (which equal 64 bits if I'm not wrong) represent a bitmap. Is there any way to iterate through these 8 chars and see which bits are 0? I'm having a lot of trouble understanding the concept of bits, as you can't "see" them in the code, so I can't think of any way to do this.
Imagine you have only one byte, a single char my_char. You can test for individual bits using bitwise operators and bit shifts.
unsigned char my_char = 0xAA;
int what_bit_i_am_testing = 0;
while (what_bit_i_am_testing < 8) {
    if (my_char & 0x01) {
        printf("bit %d is 1\n", what_bit_i_am_testing);
    }
    else {
        printf("bit %d is 0\n", what_bit_i_am_testing);
    }
    what_bit_i_am_testing++;
    my_char = my_char >> 1;
}
The part that must be new to you is the >> operator. This operator will "insert a zero on the left and push every bit to the right; the rightmost bit is thrown away".
That was not a very technical description for a right bit shift of 1.
Here is a way to iterate over each of the set bits of an unsigned integer (use unsigned rather than signed integers for well-defined behaviour; unsigned of any width should be fine), one bit at a time.
Define the following macros:
#define LSBIT(X) ((X) & (-(X)))
#define CLEARLSBIT(X) ((X) & ((X) - 1))
Then you can use the following idiom to iterate over the set bits, LSbit first:
unsigned temp_bits;
unsigned one_bit;
temp_bits = some_value;
for ( ; temp_bits; temp_bits = CLEARLSBIT(temp_bits) ) {
    one_bit = LSBIT(temp_bits);
    /* Do something with one_bit */
}
I'm not sure whether this suits your needs. You said you want to check for 0 bits, rather than 1 bits — maybe you could bitwise-invert the initial value. Also for multi-byte values, you could put it in another for loop to process one byte/word at a time.
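A small self-contained usage example of that idiom (the value 0xB4 and the index-recovery loop are mine, just for illustration):
#include <stdio.h>

#define LSBIT(X)      ((X) & (-(X)))
#define CLEARLSBIT(X) ((X) & ((X) - 1))

int main(void)
{
    unsigned some_value = 0xB4;   /* 1011 0100: bits 2, 4, 5 and 7 are set */
    unsigned temp_bits;
    unsigned one_bit;

    for (temp_bits = some_value; temp_bits; temp_bits = CLEARLSBIT(temp_bits))
    {
        one_bit = LSBIT(temp_bits);

        /* recover the bit index from the isolated bit */
        int index = 0;
        while (!((one_bit >> index) & 1u))
            index++;

        printf("set bit at index %d (mask 0x%X)\n", index, one_bit);
    }
    return 0;
}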
The following assumes the bitmap stores bit n as the (n % 8)-th least significant bit of byte n / 8:
const int cBitmapSize = 8;
const int cBitsCount = cBitmapSize * 8;
const unsigned char cBitmap[cBitmapSize] = /* some data */;
for(int n = 0; n < cBitsCount; n++)
{
unsigned char Mask = 1 << (n % 8);
if(cBitmap[n / 8] & Mask)
{
// if n'th bit is 1...
}
}
In the C language, chars are bytes (typically 8 bits wide), and in general in computer science, data is organized around bytes as the fundamental unit.
In some cases, such as your problem, data is stored as boolean values in individual bits, so we need a way to determine whether a particular bit in a particular byte is on or off. There is already an SO answer explaining how to do bit manipulation in C.
To check a bit, the usual method is to AND it with the bit you want to check:
int isBitSet = bitmap & (1 << bit_position);
If the variable isBitSet is 0 after this operation, then the bit is not set. Any other value indicates that the bit is on.
For one char b you can simply iterate like this:
for (int i = 0; i < 8; i++) {
    printf("This is the %d-th bit : %d\n", i, (b >> i) & 1);
}
You can then iterate through the chars as needed.
What you should understand is that you cannot manipulate the bits directly; you can only use arithmetic properties of numbers in base 2 to compute values that represent the bits you want to know about.
How does it work, for example? In a char there are 8 bits, so a char can be seen as a number written with 8 digits in base 2. If the number in b is b7b6b5b4b3b2b1b0 (each being a digit), then b>>i is b shifted to the right by i positions (0's are pushed in on the left). So 10110111 >> 2 is 00101101, and the operation & 1 then isolates the last bit (bitwise AND operator).
If you want to iterate through every char:
char *str = "MNO"; // M=01001101, N=01001110, O=01001111
int bit = 0;
for (int x = strlen(str)-1; x > -1; x--){ // Start from O, N, M
printf("Char %c \n", str[x]);
for(int y=0; y<8; y++){ // Iterate though every bit
// Shift bit the the right with y step and mask last position
if( str[x]>>y & 0b00000001 ){
printf("bit %d = 1\n", bit);
}else{
printf("bit %d = 0\n", bit);
}
bit++;
}
}
output
Char O
bit 0 = 1
bit 1 = 1
bit 2 = 1
bit 3 = 1
bit 4 = 0
bit 5 = 0
bit 6 = 1
bit 7 = 0
Char N
bit 8 = 0
bit 9 = 1
bit 10 = 1
...
