unknown parse error when testing function - c

I have the function below, which seems to behave as it should; however, when running the program through a testing program, I am given the errors
parse error: [int xSwapped = ((255 << nShift) | (255 << mShift));] and
undeclared variable `xSwapped': [return (~xSwapped & x) | nMask | mMask;]
int dl15(int x, int n, int m){
    // calculates shifts, create mask to shift, combine result
    // get number of bytes needed to shift, multiplying by 8
    // get Masks by shifting 0xff and shift amount
    // shift bits to required position
    // combine results
    int nShift = n << 3;
    int mShift = m << 3;
    int nMask = x & (255 << nShift);
    int mMask = x & (255 << mShift);
    nMask = 255 & (nMask >> nShift);
    mMask = 255 & (mMask >> mShift);
    nMask = nMask << mShift;
    mMask = mMask << nShift;
    int xSwapped = ((255 << nShift) | (255 << mShift));
    return (~xSwapped & x) | nMask | mMask;
}
Not certain what I'm missing, thank you.

It looks like you are using a C compiler set to an old C standard. Prior to C99 you could not put executable statements before declarations.
You can fix this by moving the declaration of xSwapped to the top:
int nShift = n << 3;
int mShift = m << 3;
int nMask = x & (255 << nShift);
int mMask = x & (255 << mShift);
int xSwapped; // Declaration
nMask = 255 & (nMask >> nShift);
mMask = 255 & (mMask >> mShift);
nMask = nMask << mShift;
mMask = mMask << nShift;
xSwapped = ((255 << nShift) | (255 << mShift)); // Assignment
return (~xSwapped & x) | nMask | mMask;
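Alternatively, if you control how the code is compiled, telling the compiler to use the C99 standard (or later) also makes mixed declarations and statements legal. With gcc or clang, for example (the file name here is just illustrative):
gcc -std=c99 -c dl15.c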

Related

A variable value is different when it is printed after structural assignment and when printed without the structural assignment

I do not know how to give this an appropriate title that explains the problem, so feel free to edit if you have a more informative one.
To understand the problem, let me explain what I am doing.
I have created a structure as follows:
typedef union __attribute__ ((__packed__)) adcs_measurements_t
{
unsigned char raw[72];
struct __attribute__ ((__packed__)) //191
{
int magneticFieldX : 16;
int magneticFieldY : 16;
int magneticFieldZ : 16;
int coarseSunX : 16;
int coarseSunY : 16;
int coarseSunZ : 16;
int sunX : 16;
int sunY : 16;
int sunZ : 16;
int nadirX : 16;
int nadirY : 16;
int nadirZ : 16;
int XAngularRate : 16;
int YAngularRate : 16;
int ZAngularRate : 16;
int XWheelSpeed : 16;
int YWheelSpeed : 16;
int ZWheelSpeed : 16;
int star1BX : 16;
int star1BY : 16;
int star1BZ : 16;
int star1OX : 16;
int star1OY : 16;
int star1OZ : 16;
int star2BX : 16;
int star2BY : 16;
int star2BZ : 16;
int star2OX : 16;
int star2OY : 16;
int star2OZ : 16;
int star3BX : 16;
int star3BY : 16;
int star3BZ : 16;
int star3OX : 16;
int star3OY : 16;
int star3OZ : 16;
} fields;
} adcs_measurements_t;
I populate the structure by calling the following function:
void adcsTM191_measurements(adcs_measurements_t* dataOut)
{
int pass;
unsigned char TMID = 191;
unsigned char readBuff[72] = {0};
pass = I2C_write(ADCS_ADDR, &TMID, 1);
if(pass != 0)
{
printf("write error %d\n", pass);
}
pass = I2C_read(ADCS_ADDR, readBuff, 72);
if(pass != 0)
{
printf("read error %d\n", pass);
}
dataOut->fields.magneticFieldX = (readBuff[1] & 0x00FF) << 8 | (readBuff[0] & 0x00FF);
dataOut->fields.magneticFieldY = (readBuff[3] & 0x00FF) << 8 | (readBuff[2] & 0x00FF);
dataOut->fields.magneticFieldZ = (readBuff[5] & 0x00FF) << 8 | (readBuff[4] & 0x00FF);
dataOut->fields.coarseSunX = (readBuff[7] & 0x00FF) << 8 | (readBuff[6] & 0x00FF);
dataOut->fields.coarseSunY = (readBuff[9] & 0x00FF) << 8 | (readBuff[8] & 0x00FF);
dataOut->fields.coarseSunZ = (readBuff[11] & 0x00FF) << 8 | (readBuff[10] & 0x00FF);
dataOut->fields.sunX = (readBuff[13] & 0x00FF) << 8 | (readBuff[12] & 0x00FF);
dataOut->fields.sunY = (readBuff[15] & 0x00FF) << 8 | (readBuff[14] & 0x00FF);
dataOut->fields.sunZ = (readBuff[17] & 0x00FF) << 8 | (readBuff[16] & 0x00FF);
dataOut->fields.nadirX = (readBuff[19] & 0x00FF) << 8 | (readBuff[18] & 0x00FF);
dataOut->fields.nadirY = (readBuff[21] & 0x00FF) << 8 | (readBuff[20] & 0x00FF);
dataOut->fields.nadirZ = (readBuff[23] & 0x00FF) << 8 | (readBuff[22] & 0x00FF);
dataOut->fields.XAngularRate = (readBuff[25] & 0x00FF) << 8 | (readBuff[24] & 0x00FF);
dataOut->fields.YAngularRate = (readBuff[27] & 0x00FF) << 8 | (readBuff[26] & 0x00FF);
dataOut->fields.ZAngularRate = (readBuff[29] & 0x00FF) << 8 | (readBuff[28] & 0x00FF);
dataOut->fields.XWheelSpeed = (readBuff[31] & 0x00FF) << 8 | (readBuff[30] & 0x00FF);
dataOut->fields.YWheelSpeed = (readBuff[33] & 0x00FF) << 8 | (readBuff[32] & 0x00FF);
dataOut->fields.ZWheelSpeed = (readBuff[35] & 0x00FF) << 8 | (readBuff[34] & 0x00FF);
dataOut->fields.star1BX = (readBuff[37] & 0x00FF) << 8 | (readBuff[36] & 0x00FF);
dataOut->fields.star1BY = (readBuff[39] & 0x00FF) << 8 | (readBuff[38] & 0x00FF);
dataOut->fields.star1BZ = (readBuff[41] & 0x00FF) << 8 | (readBuff[40] & 0x00FF);
dataOut->fields.star1OX = (readBuff[43] & 0x00FF) << 8 | (readBuff[42] & 0x00FF);
dataOut->fields.star1OY = (readBuff[45] & 0x00FF) << 8 | (readBuff[44] & 0x00FF);
dataOut->fields.star1OZ = (readBuff[47] & 0x00FF) << 8 | (readBuff[46] & 0x00FF);
dataOut->fields.star2BX = (readBuff[49] & 0x00FF) << 8 | (readBuff[48] & 0x00FF);
dataOut->fields.star2BY = (readBuff[51] & 0x00FF) << 8 | (readBuff[50] & 0x00FF);
dataOut->fields.star2BZ = (readBuff[53] & 0x00FF) << 8 | (readBuff[52] & 0x00FF);
dataOut->fields.star2OX = (readBuff[55] & 0x00FF) << 8 | (readBuff[54] & 0x00FF);
dataOut->fields.star2OY = (readBuff[57] & 0x00FF) << 8 | (readBuff[56] & 0x00FF);
dataOut->fields.star2OZ = (readBuff[59] & 0x00FF) << 8 | (readBuff[58] & 0x00FF);
dataOut->fields.star3BX = (readBuff[61] & 0x00FF) << 8 | (readBuff[60] & 0x00FF);
dataOut->fields.star3BY = (readBuff[63] & 0x00FF) << 8 | (readBuff[62] & 0x00FF);
dataOut->fields.star3BZ = (readBuff[65] & 0x00FF) << 8 | (readBuff[64] & 0x00FF);
dataOut->fields.star3OX = (readBuff[67] & 0x00FF) << 8 | (readBuff[66] & 0x00FF);
dataOut->fields.star3OY = (readBuff[69] & 0x00FF) << 8 | (readBuff[68] & 0x00FF);
dataOut->fields.star3OZ = (readBuff[71] & 0x00FF) << 8 | (readBuff[70] & 0x00FF);
}
Finally, I print, for instance, YWheelSpeed:
adcsTM191_measurements(&temp);
printf("structure y wheel speed is: %d \n", temp.fields.YWheelSpeed);
This should print a negative value, and it does:
structure y wheel speed is: -97
Now here is the thing, if I print (readBuff[27] & 0x00FF) << 8 | (readBuff[26] & 0x00FF), which corresponds to what was populated inside the Y wheel speed variable, anywhere inside adcsTM191_measurements(adcs_measurements_t* dataOut) it does not print this negative value. Rather it prints the maximum value of an unsigned char (65,535).
int y = (int) (readBuff[33] & 0x00FF) << 8 | (readBuff[32] & 0x00FF);
printf("inside struct y is: %d", y);
I am expecting that storing inside the structure does a kind of implicit cast and so it prints the negative value as expected. How is it doing it? How can I print the correct value without the use of the structure?
According to C 2018 footnote 128, it is implementation-defined whether a bit-field defined with int, as in int YWheelSpeed, is signed or unsigned. Since your implementation is showing a negative value for it, presumably it is signed, and therefore, as a 16-bit signed integer, it can represent values from −32,768 to 32,767.
We can also deduce that int in your implementation is more than 16 bits, likely 32 bits (from the fact that “65535” is printed in one case when int y is printed with “%d”).
Consider this assignment:
dataOut->fields.YWheelSpeed = (readBuff[33] & 0x00FF) << 8 | (readBuff[32] & 0x00FF);
In this expression, readBuff[33] and readBuff[32] are converted to int by the usual promotions. 0x00FF is also an int.
If we suppose readBuff[33] is 255 and readBuff[32] is 159 (which is 2^8 − 97), then the value of the expression on the right side of the = is 65,439 (which is 2^16 − 97). In an assignment, the right operand is converted to the type of the left operand, which is a 16-bit signed integer. In this case, the value, 65,439, cannot be represented in a 16-bit signed integer. C 2018 6.3.1.3 3 tells us “either the result is implementation-defined or an implementation-defined signal is raised.”
A common implementation of this conversion is to produce the result modulo 2^16 or, equivalently, to reinterpret the 16 low bits of the int as a two’s complement 16-bit integer. This produces −97. Since your implementation subsequently showed −97 for the value, presumably this is what your implementation did.
Thus, dataOut->fields.YWheelSpeed is assigned the value −97. When it is later printed with:
printf("structure y wheel speed is: %d \n", temp.fields.YWheelSpeed);
then the default argument promotions, which include the usual integer promotions, convert temp.fields.YWheelSpeed from a signed 16-bit integer with value −97 to an int with value −97, and “-97” is printed.
In contrast, suppose (readBuff[33] & 0x00FF) << 8 | (readBuff[32] & 0x00FF) is printed with %d. As we saw above, the value of this expression is 65,439, so “65439” should be printed.
The question states:
Now here is the thing, if I print (readBuff[27] & 0x00FF) << 8 | (readBuff[26] & 0x00FF), which corresponds to what was populated inside the Y wheel speed variable,… it prints the maximum value of an unsigned char (65,535).
However, (readBuff[27] & 0x00FF) << 8 | (readBuff[26] & 0x00FF) is not the value that was assigned to YWheelSpeed, which is presumably the “Y wheel speed variable”. YWheelSpeed was assigned from readBuff elements 32 and 33, not 26 and 27. Thus we should not be surprised that some different value is printed rather than 65,439.
You probably have 32-bit int, so the initialization never sets the sign bit. But the structure field is only 16 bits, and will be sign-extended when it's converted to int for the printf() call.
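To answer the last part of the question (how to print the correct value without going through the structure), one option is to narrow the assembled value to a signed 16-bit type yourself before printing. A minimal sketch using int16_t from <stdint.h>, relying on the same implementation-defined wrap-around described above:
int16_t y = (int16_t)(((readBuff[33] & 0xFF) << 8) | (readBuff[32] & 0xFF));
printf("y wheel speed is: %d\n", y); /* prints -97 for the example bytes 255 and 159 */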

Morton Reverse Encoding for a 3D grid

I have a 3D grid/array, say u[nx+2][ny+2][nz+2]. The trailing +2 corresponds to two layers of halo cells in each of the three dimensions x, y, z. I have another grid which allows for refinement (using a quadtree), hence I have the Morton index (or Z-order) of each of its cells.
Let's say that without refinement the two grids are alike in physical reality (except the second one does not have halo cells). What I want to find is, for a cell q with Morton id mid, the corresponding i, j and k indices in the 3D grid: basically a decoding of mid (the Z-order) to get the corresponding i, j, k for the u array.
Looking for a C solution, but general comments in any other programming language are also OK.
For forward encoding I am following the magic bits method as shown in
Morton Encoding using different methods
Morton encoding is just interleaving the bits of two or more components.
If we number binary digits in increasing order of significance, so that the least significant binary digit in an unsigned integer is digit 0 (and binary digit i has value 2^i), then binary digit i in component k (of N components) corresponds to binary digit (i*N + k) in the Morton code. For example, with the three components x, y, z here (N = 3), bit i of x lands at Morton bit 3*i, bit i of y at 3*i + 1, and bit i of z at 3*i + 2.
Here are two simple functions to encode and decode three-component Morton codes:
#include <stdlib.h>
#include <inttypes.h>
/* This source is in the public domain. */
/* Morton encoding in binary (components 21-bit: 0..2097151)
0zyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyxzyx */
#define BITMASK_0000000001000001000001000001000001000001000001000001000001000001 UINT64_C(18300341342965825)
#define BITMASK_0000001000001000001000001000001000001000001000001000001000001000 UINT64_C(146402730743726600)
#define BITMASK_0001000000000000000000000000000000000000000000000000000000000000 UINT64_C(1152921504606846976)
/* 0000000ccc0000cc0000cc0000cc0000cc0000cc0000cc0000cc0000cc0000cc */
#define BITMASK_0000000000000011000000000011000000000011000000000011000000000011 UINT64_C(844631138906115)
#define BITMASK_0000000111000000000011000000000011000000000011000000000011000000 UINT64_C(126113986927919296)
/* 00000000000ccccc00000000cccc00000000cccc00000000cccc00000000cccc */
#define BITMASK_0000000000000000000000000000000000001111000000000000000000001111 UINT64_C(251658255)
#define BITMASK_0000000000000000000000001111000000000000000000001111000000000000 UINT64_C(1030792212480)
#define BITMASK_0000000000011111000000000000000000000000000000000000000000000000 UINT64_C(8725724278030336)
/* 000000000000000000000000000ccccccccccccc0000000000000000cccccccc */
#define BITMASK_0000000000000000000000000000000000000000000000000000000011111111 UINT64_C(255)
#define BITMASK_0000000000000000000000000001111111111111000000000000000000000000 UINT64_C(137422176256)
/* ccccccccccccccccccccc */
#define BITMASK_21BITS UINT64_C(2097151)
static inline void morton_decode(uint64_t m, uint32_t *xto, uint32_t *yto, uint32_t *zto)
{
const uint64_t mask0 = BITMASK_0000000001000001000001000001000001000001000001000001000001000001,
mask1 = BITMASK_0000001000001000001000001000001000001000001000001000001000001000,
mask2 = BITMASK_0001000000000000000000000000000000000000000000000000000000000000,
mask3 = BITMASK_0000000000000011000000000011000000000011000000000011000000000011,
mask4 = BITMASK_0000000111000000000011000000000011000000000011000000000011000000,
mask5 = BITMASK_0000000000000000000000000000000000001111000000000000000000001111,
mask6 = BITMASK_0000000000000000000000001111000000000000000000001111000000000000,
mask7 = BITMASK_0000000000011111000000000000000000000000000000000000000000000000,
mask8 = BITMASK_0000000000000000000000000000000000000000000000000000000011111111,
mask9 = BITMASK_0000000000000000000000000001111111111111000000000000000000000000;
uint64_t x = m,
y = m >> 1,
z = m >> 2;
/* 000c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c */
x = (x & mask0) | ((x & mask1) >> 2) | ((x & mask2) >> 4);
y = (y & mask0) | ((y & mask1) >> 2) | ((y & mask2) >> 4);
z = (z & mask0) | ((z & mask1) >> 2) | ((z & mask2) >> 4);
/* 0000000ccc0000cc0000cc0000cc0000cc0000cc0000cc0000cc0000cc0000cc */
x = (x & mask3) | ((x & mask4) >> 4);
y = (y & mask3) | ((y & mask4) >> 4);
z = (z & mask3) | ((z & mask4) >> 4);
/* 00000000000ccccc00000000cccc00000000cccc00000000cccc00000000cccc */
x = (x & mask5) | ((x & mask6) >> 8) | ((x & mask7) >> 16);
y = (y & mask5) | ((y & mask6) >> 8) | ((y & mask7) >> 16);
z = (z & mask5) | ((z & mask6) >> 8) | ((z & mask7) >> 16);
/* 000000000000000000000000000ccccccccccccc0000000000000000cccccccc */
x = (x & mask8) | ((x & mask9) >> 16);
y = (y & mask8) | ((y & mask9) >> 16);
z = (z & mask8) | ((z & mask9) >> 16);
/* 0000000000000000000000000000000000000000000ccccccccccccccccccccc */
if (xto) *xto = x;
if (yto) *yto = y;
if (zto) *zto = z;
}
static inline uint64_t morton_encode(uint32_t xsrc, uint32_t ysrc, uint32_t zsrc)
{
const uint64_t mask0 = BITMASK_0000000001000001000001000001000001000001000001000001000001000001,
mask1 = BITMASK_0000001000001000001000001000001000001000001000001000001000001000,
mask2 = BITMASK_0001000000000000000000000000000000000000000000000000000000000000,
mask3 = BITMASK_0000000000000011000000000011000000000011000000000011000000000011,
mask4 = BITMASK_0000000111000000000011000000000011000000000011000000000011000000,
mask5 = BITMASK_0000000000000000000000000000000000001111000000000000000000001111,
mask6 = BITMASK_0000000000000000000000001111000000000000000000001111000000000000,
mask7 = BITMASK_0000000000011111000000000000000000000000000000000000000000000000,
mask8 = BITMASK_0000000000000000000000000000000000000000000000000000000011111111,
mask9 = BITMASK_0000000000000000000000000001111111111111000000000000000000000000;
uint64_t x = xsrc,
y = ysrc,
z = zsrc;
/* 0000000000000000000000000000000000000000000ccccccccccccccccccccc */
x = (x & mask8) | ((x << 16) & mask9);
y = (y & mask8) | ((y << 16) & mask9);
z = (z & mask8) | ((z << 16) & mask9);
/* 000000000000000000000000000ccccccccccccc0000000000000000cccccccc */
x = (x & mask5) | ((x << 8) & mask6) | ((x << 16) & mask7);
y = (y & mask5) | ((y << 8) & mask6) | ((y << 16) & mask7);
z = (z & mask5) | ((z << 8) & mask6) | ((z << 16) & mask7);
/* 00000000000ccccc00000000cccc00000000cccc00000000cccc00000000cccc */
x = (x & mask3) | ((x << 4) & mask4);
y = (y & mask3) | ((y << 4) & mask4);
z = (z & mask3) | ((z << 4) & mask4);
/* 0000000ccc0000cc0000cc0000cc0000cc0000cc0000cc0000cc0000cc0000cc */
x = (x & mask0) | ((x << 2) & mask1) | ((x << 4) & mask2);
y = (y & mask0) | ((y << 2) & mask1) | ((y << 4) & mask2);
z = (z & mask0) | ((z << 2) & mask1) | ((z << 4) & mask2);
/* 000c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c00c */
return x | (y << 1) | (z << 2);
}
The functions work symmetrically. To decode, binary digits and digit groups are shifted to larger consecutive units; to encode, binary digit groups are split and spread by shifting. Examine the masks (the BITMASK_ constants are named after their binary digit pattern), and the shift operations, to understand in detail how the encoding and decoding happens.
While the two functions are quite efficient, they are not fully optimized.
The above functions have been verified to work using a few billion round-trips with random 21-bit unsigned integer components: decoding a Morton-encoded value yields the original three components.
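As a usage sketch for the original question, mapping a cell's Morton id mid back to indices into the halo-padded array u[nx+2][ny+2][nz+2] (this assumes one layer of halo cells on each side, so interior cells start at index 1; adjust the offset if your halo layout differs):
uint32_t i, j, k;
morton_decode(mid, &i, &j, &k);
/* interior cell of u corresponding to Morton cell (i, j, k) */
double value = u[i + 1][j + 1][k + 1];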

right shift count >= width of type or left shift count >= width of type

I am new to bit manipulation. I am trying to get a 64-bit value that is sent using UDP.
int plugin(unsigned char *Buffer) {
static const uint8_t max_byte = 0xFF;
uint8_t id[8];
id[0] = (uint8_t)((Buffer[0]) & max_byte);
id[1] = (uint8_t)((Buffer[1] >> 8) & max_byte);
id[2] = (uint8_t)((Buffer[2] >> 16) & max_byte);
id[3] = (uint8_t)((Buffer[3] >> 24) & max_byte);
id[4] = (uint8_t)((Buffer[4] >> 32) & max_byte);
id[5] = (uint8_t)((Buffer[5] >> 40) & max_byte);
id[6] = (uint8_t)((Buffer[6] >> 48) & max_byte);
id[7] = (uint8_t)((Buffer[7] >> 56) & max_byte);
}
I am getting the error right shift count >= width of type. I tried another way as well:
int plugin(unsigned char *Buffer) {
uint64_t id = (Buffer[0] | Buffer[1] << 8 | Buffer[2] << 16 | Buffer[3] << 24 | Buffer[4] < 32 | Buffer[5] << 40 | Buffer[6] << 48 | Buffer[7] << 56);
printf("ID %" PRIu64 "\n", id);
}
It gives the error left shift count >= width of type.
I checked the system; it is x86_64. Could someone please tell me why this is happening and suggest a way forward?
This happens because of default integer promotion, basically.
When you do this:
uint64_t id = Buffer[7] << 56;
That Buffer[7] is an unsigned char, but it gets promoted to int in the arithmetic expression, and your int is not 64 bits. The type of the left hand side does not automatically "infect" the right hand side, that's just not how C works.
You need to cast:
const uint64_t id = ((uint64_t) Buffer[7]) << 56;
and so on.
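Spelling out the "and so on" for all eight bytes, a minimal sketch (the function name read_id is just illustrative; it assumes the least significant byte comes first in Buffer, as in the question's second attempt):
#include <stdint.h>

uint64_t read_id(const unsigned char *Buffer)
{
    uint64_t id = 0;
    for (int i = 0; i < 8; i++)
        id |= (uint64_t)Buffer[i] << (8 * i); /* widen each byte before shifting */
    return id;
}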

How to interleave 2 booleans using bitwise operators?

Suppose I have two 4-bit values, ABCD and abcd. How do I interleave them, so the result becomes AaBbCcDd, using bitwise operators? Example in pseudo-C:
nibble a = 0b1001;
nibble b = 0b1100;
char c = foo(a,b);
print_bits(c);
// output: 0b11010010
Note: 4 bits is just for illustration; I want to do this with two 32-bit ints.
This is called the perfect shuffle operation, and it's discussed at length in the Bible Of Bit Bashing, Hacker's Delight by Henry Warren, section 7-2 "Shuffling Bits."
Assuming x is a 32-bit integer with a in its high-order 16 bits and b in its low-order 16 bits:
unsigned int x = (a << 16) | b; /* put a and b in place */
the following straightforward C-like code accomplishes the perfect shuffle:
x = (x & 0x0000FF00) << 8 | (x >> 8) & 0x0000FF00 | x & 0xFF0000FF;
x = (x & 0x00F000F0) << 4 | (x >> 4) & 0x00F000F0 | x & 0xF00FF00F;
x = (x & 0x0C0C0C0C) << 2 | (x >> 2) & 0x0C0C0C0C | x & 0xC3C3C3C3;
x = (x & 0x22222222) << 1 | (x >> 1) & 0x22222222 | x & 0x99999999;
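As a usage sketch with the 4-bit example from the question (assuming a 32-bit unsigned int):
unsigned int a = 0x9, b = 0xC; /* 0b1001 and 0b1100 */
unsigned int x = (a << 16) | b;
x = (x & 0x0000FF00) << 8 | (x >> 8) & 0x0000FF00 | x & 0xFF0000FF;
x = (x & 0x00F000F0) << 4 | (x >> 4) & 0x00F000F0 | x & 0xF00FF00F;
x = (x & 0x0C0C0C0C) << 2 | (x >> 2) & 0x0C0C0C0C | x & 0xC3C3C3C3;
x = (x & 0x22222222) << 1 | (x >> 1) & 0x22222222 | x & 0x99999999;
/* x is now 0xD2, i.e. 0b11010010, the expected output */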
He also gives an alternative form which is faster on some CPUs, and (I think) a little more clear and extensible:
unsigned int t; /* an intermediate, temporary variable */
t = (x ^ (x >> 8)) & 0x0000FF00; x = x ^ t ^ (t << 8);
t = (x ^ (x >> 4)) & 0x00F000F0; x = x ^ t ^ (t << 4);
t = (x ^ (x >> 2)) & 0x0C0C0C0C; x = x ^ t ^ (t << 2);
t = (x ^ (x >> 1)) & 0x22222222; x = x ^ t ^ (t << 1);
I see you have edited your question to ask for a 64-bit result from two 32-bit inputs. I'd have to think about how to extend Warren's technique. I think it wouldn't be too hard, but I'd have to give it some thought. If someone else wanted to start here and give a 64-bit version, I'd be happy to upvote them.
EDITED FOR 64 BITS
I extended the second solution to 64 bits in a straightforward way. First I doubled the length of each of the constants. Then I added a line at the beginning to swap adjacent double-bytes and intermix them. In the following 4 lines, which are pretty much the same as the 32-bit version, the first line swaps adjacent bytes and intermixes, the second line drops down to nibbles, the third line to double-bits, and the last line to single bits.
unsigned long long int t; /* an intermediate, temporary variable */
t = (x ^ (x >> 16)) & 0x00000000FFFF0000ull; x = x ^ t ^ (t << 16);
t = (x ^ (x >> 8)) & 0x0000FF000000FF00ull; x = x ^ t ^ (t << 8);
t = (x ^ (x >> 4)) & 0x00F000F000F000F0ull; x = x ^ t ^ (t << 4);
t = (x ^ (x >> 2)) & 0x0C0C0C0C0C0C0C0Cull; x = x ^ t ^ (t << 2);
t = (x ^ (x >> 1)) & 0x2222222222222222ull; x = x ^ t ^ (t << 1);
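For completeness, the two 32-bit inputs first need to be packed into the single 64-bit x before those five lines run, by analogy with the 32-bit setup above; a minimal sketch:
unsigned long long x = ((unsigned long long)a << 32) | b; /* a in the high 32 bits, b in the low 32 bits */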
From Stanford "Bit Twiddling Hacks" page:
https://graphics.stanford.edu/~seander/bithacks.html#InterleaveTableObvious
uint32_t x = /*...*/, y = /*...*/;
uint64_t z = 0;
for (int i = 0; i < sizeof(x) * CHAR_BIT; i++) // unroll for more speed...
{
z |= (uint64_t)(x & 1U << i) << i | (uint64_t)(y & 1U << i) << (i + 1); // widen to 64 bits so high bits are not shifted out
}
Look at the page; they propose different and faster algorithms to achieve the same result.
Like so:
#include <limits.h>
typedef unsigned int half;
typedef unsigned long long full;
full mix_bits(half a,half b)
{
full result = 0;
for (int i=0; i<sizeof(half)*CHAR_BIT; i++)
result |= ((full)((a>>i)&1)<<(2*i+1))|((full)((b>>i)&1)<<(2*i+0)); /* widen to 64 bits before shifting */
return result;
}
Here is a loop-based solution that is hopefully more readable than some of the others already here.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
uint64_t interleave(uint32_t a, uint32_t b) {
uint64_t result = 0;
int i;
for (i = 0; i < 31; i++) {
result |= (a >> (31 - i)) & 1;
result <<= 1;
result |= (b >> (31 - i)) & 1;
result <<= 1;
}
// Skip the last left shift.
result |= (a >> (31 - i)) & 1;
result <<= 1;
result |= (b >> (31 - i)) & 1;
return result;
}
void printBits(uint64_t a) {
int i;
for (i = 0; i < 64; i++)
printf("%lu", (a >> (63 - i)) & 1);
puts("");
}
int main(){
uint32_t a = 0x9;
uint32_t b = 0x6;
uint64_t c = interleave(a,b);
printBits(a);
printBits(b);
printBits(c);
}
I have used the two tricks/operations from the post How do you set, clear, and toggle a single bit?: setting a bit at a particular index and checking the bit at a particular index.
The following code is implemented using only these two operations.
int a = 0b1001;
int b = 0b1100;
unsigned long long c = 0;
int index; //To specify index of c
unsigned int bit;
int i;
//Set bits in c from right to left.
for (i = 31; i >= 0; i--)
{
    index = 2*i + 1; //We have to add the bit in c at this index
    //Check a
    bit = a & (1U << i); //Checking whether the i-th bit is set in a
    if (bit)
        c |= 1ULL << index; //Setting bit in c at index (64-bit shift)
    index--;
    //Check b
    bit = b & (1U << i); //Checking whether the i-th bit is set in b
    if (bit)
        c |= 1ULL << index; //Setting bit in c at index
}
printf("%llu", c);
Output: 210 which is 0b11010010

reverse the bits using bit field in c language?

How do I reverse the bits using bitwise operators in C?
Eg:
i/p: 10010101
o/p: 10101001
If it's just 8 bits:
u_char in = 0x95;
u_char out = 0;
for (int i = 0; i < 8; ++i) {
out <<= 1;
out |= (in & 0x01);
in >>= 1;
}
Or for bonus points:
u_char in = 0x95;
u_char out = in;
out = (out & 0xaa) >> 1 | (out & 0x55) << 1;
out = (out & 0xcc) >> 2 | (out & 0x33) << 2;
out = (out & 0xf0) >> 4 | (out & 0x0f) << 4;
figuring out how the last one works is an exercise for the reader ;-)
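As a quick sanity check against the example in the question, here is a minimal sketch wrapping the masked-swap version in a small program (working out why it gives this result is still left to the reader):
#include <stdio.h>

int main(void) {
    unsigned char out = 0x95;                    /* 10010101 */
    out = (out & 0xaa) >> 1 | (out & 0x55) << 1;
    out = (out & 0xcc) >> 2 | (out & 0x33) << 2;
    out = (out & 0xf0) >> 4 | (out & 0x0f) << 4;
    printf("0x%02X\n", out);                     /* prints 0xA9, i.e. 10101001 */
    return 0;
}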
Knuth has a section on Bit reversal in The Art of Computer Programming Vol 4A, bitwise tricks and techniques.
To reverse the bits of a 32-bit number in a divide-and-conquer fashion, he uses the magic constants
u0 = 0101010101010101 (from -1/(2+1)),
u1 = 0011001100110011 (from -1/(4+1)),
u2 = 0000111100001111 (from -1/(16+1)),
u3 = 0000000011111111 (from -1/(256+1)),
where each 16-bit pattern repeats to fill 32 bits.
Method credited to Henry Warren Jr., Hacker's Delight.
unsigned int u0 = 0x55555555;
x = (((x >> 1) & u0) | ((x & u0) << 1));
unsigned int u1 = 0x33333333;
x = (((x >> 2) & u1) | ((x & u1) << 2));
unsigned int u2 = 0x0f0f0f0f;
x = (((x >> 4) & u2) | ((x & u2) << 4));
unsigned int u3 = 0x00ff00ff;
x = (((x >> 8) & u3) | ((x & u3) << 8));
x = (x >> 16) | (x << 16); // reversed (for a 32-bit unsigned x, the reduction mod 0x100000000 happens automatically)
The 16 and 8 bit cases are left as an exercise to the reader.
Well, this might not be the most elegant solution but it is a solution:
int reverseBits(int x) {
    unsigned int res = 0;
    int len = sizeof(x) * 8; // no of bits to reverse
    int i, shift;
    unsigned int mask;
    for (i = 0; i < len; i++) {
        mask = 1U << i; // which bit we are at
        shift = len - 2*i - 1;
        mask &= x;
        mask = (shift > 0) ? mask << shift : mask >> -shift;
        res |= mask; // take the bit we work on and shift it to its mirrored position
    }
    return (int)res;
}
Tested it on a sheet of paper and it seemed to work :D
Edit: Yeah, this is indeed very complicated. I dunno why, but I wanted to find a solution without touching the input, so this came to my head.

Resources