How to perform rotate shift in C [duplicate]

This question already has answers here:
Best practices for circular shift (rotate) operations in C++
(16 answers)
Closed 5 years ago.
My question is as described: how do I perform a rotate shift in C without inline assembly? To be more concrete, how do I rotate a 32-bit int?
I'm currently solving this with the help of the type long long int, but I find it a little ugly and want to know whether there is a more elegant method.
Kind regards.

(Warning to future readers): Wikipedia's code produces sub-optimal asm (gcc emits a branch or cmov). See Best practices for circular shift (rotate) operations in C++ for efficient, UB-free rotates.
From Wikipedia:
unsigned int _rotl(unsigned int value, int shift) {
    if ((shift &= 31) == 0)
        return value;
    return (value << shift) | (value >> (32 - shift));
}

unsigned int _rotr(unsigned int value, int shift) {
    if ((shift &= 31) == 0)
        return value;
    return (value >> shift) | (value << (32 - shift));
}

This answer duplicates what I posted on Best practices for compiler-friendly rotates; see my answer on that question for the full details.
The most compiler-friendly way to express a rotate in C that avoids any Undefined Behaviour seems to be John Regehr's implementation:
uint32_t rotl32 (uint32_t x, unsigned int n)
{
    const unsigned int mask = (CHAR_BIT*sizeof(x)-1);
    assert ( (n<=mask) && "rotate by type width or more");
    n &= mask;  // avoid undef behaviour with NDEBUG. 0 overhead for most types / compilers
    return (x<<n) | (x>>( (-n)&mask ));
}
Works for any integer type, not just uint32_t, so you could make multiple versions. This version inlines to a single rol %cl, reg (or rol $imm8, reg) on x86, because the compiler knows that the instruction already has the mask operation built-in.
I would recommend against templating this on the operand type, because you don't want to accidentally do a rotate of the wrong width when you had a 16-bit value stored in an int temporary, especially since the integer-promotion rules can turn the result of an expression involving a narrow unsigned type into an int.
Make sure you use unsigned types for x and the return value, or else it won't be a rotate. (For signed types, gcc uses arithmetic right shifts, shifting in copies of the sign bit rather than zeroes, which causes a problem when you OR the two shifted values together.)
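For completeness, a right-rotate can be written with the same pattern; this mirrored version is my sketch rather than a quote from that answer:
#include <assert.h>
#include <limits.h>
#include <stdint.h>

uint32_t rotr32 (uint32_t x, unsigned int n)
{
    const unsigned int mask = (CHAR_BIT*sizeof(x)-1);
    assert ( (n<=mask) && "rotate by type width or more");
    n &= mask;                          // same masking trick as rotl32
    return (x>>n) | (x<<( (-n)&mask )); // right shift ORed with the wrapped-around left part
}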

Though this thread is old, I wanted to add my two cents to the discussion and propose my solution to the problem. I hope it's worth a look, but if I'm wrong, please correct me.
When I was looking for an efficient and safe way to rotate, I was actually surprised that there is no single accepted solution. I found a few relevant threads:
https://blog.regehr.org/archives/1063 (Safe, Efficient, and Portable Rotate in C/C++),
Best practices for circular shift (rotate) operations in C++
and the Wikipedia-style version (which involves branching but is safe):
uint32_t wikipedia_rotl(uint32_t value, int shift) {
    if ((shift &= 31) == 0)
        return value;
    return (value << shift) | (value >> (32 - shift));
}
After a little contemplation I realised that the modulo operation fits the criteria, since the resulting remainder is always lower than the divisor, which satisfies the condition shift < 32 without branching.
From mathematical point of view:
∀ x ≥ 0, y > 0: (x mod y) < y
In our case, every (x % 32) < 32, which is exactly what we want to achieve. (And yes, I have checked that empirically and it is always < 32.)
#include <stdint.h>
uint32_t rotl32b_i1m (uint32_t x, uint32_t shift)
{
    shift %= 32;
    // caveat: if shift %= 32 leaves 0, (32 - shift) is 32 and x >> 32 is undefined
    return (x<<shift) | (x>>(32-shift));
}
Additionally, the modulo will simplify the process, because rotating by, let's say, 100 bits is the same as rotating by the full 32 bits 3 times (which changes nothing) and then by 4 bits. So isn't it better to calculate 100 % 32 == 4 and rotate by 4 bits? The modulo takes a single processor operation anyway, and it brings the whole thing down to a rotation by a constant value plus one instruction (OK, two, as the argument has to be taken from the stack), but it's still better than branching with if() as in the "wikipedia" way.
So, what do you guys think of that?

Which is better, double negation or bitshift?

I need a boolean-type function which determines if a bit, in a variable's bit representation, is set or not.
So if the fourth bit of foo is what I want to inspect, I could make the function return
!!(foo & 0x8) //0x8 = 1000b
or
(foo & 0x8) >> 3
to get either 0 or 1.
Which one is more desirable in terms of performance or portability? I'm working on a small embedded system, so a small but detectable cost difference still matters.
This solution
return (foo & 0x8) >> 3;
is the worst. If, for example, the magic constant 0x8 is changed, then you also need to change the magic constant 3. Moreover, it can turn out that applying the operator >> is impossible, for example when you need to check more than one bit.
If you want to return either 1 (logical true) or 0 (logical false), I think it will look clearer if you write
return (foo & 0x8) != 0;
or
return (foo & 0x8) == 0x8;
For example, if instead of the magic constant 0x8 you use a named constant (or variable), for example MASK, then this return statement
return ( foo & MASK ) == MASK;
will not depend on the value of MASK.
Pay attention to the fact that these two return statements
return (foo & MASK) != 0;
and
return ( foo & MASK ) == MASK;
are not equivalent. The first return statement means that at least one of the bits in MASK is set in the variable foo, while the second means that all of the bits set in MASK are also set in foo.
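A small illustration of that difference, using a hypothetical two-bit mask:
#include <stdio.h>

#define MASK 0x0C   /* hypothetical mask: bits 2 and 3 */

int main(void)
{
    unsigned foo = 0x04;                      /* only bit 2 is set */
    printf("%d\n", (foo & MASK) != 0);        /* prints 1: at least one bit of MASK is set */
    printf("%d\n", (foo & MASK) == MASK);     /* prints 0: not all bits of MASK are set    */
    return 0;
}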
If the return type of the function is _Bool (or bool defined in <stdbool.h>) and you need to check whether at least one bit is set according to the bit mask then you can just write
return foo & MASK;
Which one is more desirable in terms of performance or portability?
For performance, it all depends on how your compiler optimizes the functions. For example, consider these two functions:
#include <stdint.h>
uint32_t not_not(uint32_t foo)
{
    return !!(foo & 0x8);
}

uint32_t bit_shift(uint32_t foo)
{
    return (foo & 0x8) >> 3;
}
They compile to the exact same assembly with x64 GCC 11.1 at -O3:
not_not:
        mov     eax, edi
        shr     eax, 3
        and     eax, 1
        ret
bit_shift:
        mov     eax, edi
        shr     eax, 3
        and     eax, 1
        ret
So you should check the assembly generated by whatever compiler you're using at your preferred optimization level to see if any one of them is faster than the other.
As for portability, considering that in some cases you may have to change the bitmask to some other value, !! might be the safer option, since changing the bitmask won't force you to change the shift amount as well. You could also use the alternatives Vlad from Moscow suggested.
Both will generate very similar code, so performance-wise it is exactly (or almost exactly) the same.
Double negation has been used by programmers for many years and (IMO) it clearly indicates the programmer's intention. With the shift version you need to spend a bit more time reading the code. It is also more error-prone, as humans make stupid mistakes (typos, for example). Can you easily tell me whether (x & 0x8000000000000) >> 52 is correct or not without thinking it over?
((x & 0x1000000000000) >> 48) is not as clear as !!(x & 0x1000000000000).
Performance-wise there won't be much difference if any. Readability-wise, you should use neither form.
... if the fourth bit of foo is what I want to inspect...
...then you should be using the fourth bit as input. Bit 3 is the fourth bit, since bits are counted from 0 upwards:
uint32_t n = 3;
if(foo & (1u << n))
This is by far the most common way to mask bits in C. If you need the value 1 or 0, then you could use !! or just _Bool b = foo & (1u << n);.
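If you want this wrapped up, a minimal helper around that idiom might look like the following (the function name is just for illustration):
#include <stdbool.h>
#include <stdint.h>

static inline bool is_bit_set(uint32_t value, unsigned n)
{
    return (value & (UINT32_C(1) << n)) != 0;   /* true if bit n (counting from 0) is set */
}

/* usage: if (is_bit_set(foo, 3)) { ... }  -- tests the fourth bit */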
I have no idea about the performance (benchmark maybe?), but here are two more ideas:
(x >> BITNUM) & 1
This uses one bit shift and one binary AND. As a bonus, you specify the NUMBER of the bit you want to see, and no other magical constants. Pretty easy to use.
(x & MASK) != 0
This uses one binary AND and a comparison with zero. AFAIK comparing with zero is a special operation on most processors, so it should be cheap. It's even possible that this result is automatically calculated by the processor as a byproduct of the binary AND and stored in a CPU flag. If so, then the comparison with zero might get optimized out entirely leaving you with just one bitwise AND (depends on the CPU, compiler and the rest of your code though).
Last but not least, if you're just using this for IF statements then maybe you don't really need to coerce it to 1 and 0? I mean, in C anything non-0 is truthy, so this:
if (x & MASK)
would work just fine and produce optimal code.

Is there a better way to define a preprocessor macro for doing bit manipulation?

Take macro:
GPIOxMODE(gpio,mode,port) ( GPIO##gpio->MODER = ((GPIO##gpio->MODER & ~((uint32_t)GPIO2BITMASK << (port*2))) | (mode << (port * 2))) )
Assuming that the reset value of the register is 0xFFFFFFFF, I want to set a 2-bit-wide field to an arbitrary value. This was written for an STM32 MCU, which has 16 pins per port. GPIO2BITMASK is defined as 0x3. Is there a better way of clearing and setting an arbitrary 2 bits anywhere in the 32-bit-wide register?
Valid range for port 0 - 15
Valid range for mode 0 - 3
The method I came up with is to shift the mask, invert it, bitwise-AND it with the existing register value, and bitwise-OR the result with the shifted new value.
I am looking to combine the mask and new value to reduce the number of logical and bit-shift operations. The goal is also to keep the process generic enough that I can use it for bit fields of 1, 2, 3 or 4 bits.
Is there a better way?
The long and short of it is that "is there a better way" is really an open question. I am looking specifically for a method that reduces the number of logical and bit-shift operations while being a simple one-line statement.
The answer is NO.
You MUST do reset/set to ensure that the bit field you are writing to has the desired value.
The answers received can be better (as a matter of opinion/preference/philosophy/practice) in that they aren't necessarily macros and have parameter checking. The pitfalls of this style have also been pointed out in both the comments and the responses.
This kind of macro should be avoided like the plague for many reasons:
They are not debuggable
They are error-prone, and the mistakes are hard to find
and many other reasons
You can achieve the same result using inline functions. The resulting code will be just as efficient.
static inline __attribute__((always_inline)) void GPIOMODE(GPIO_TypeDef *gpio, unsigned mode, unsigned pin)
{
    gpio->MODER &= ~(GPIO_MODER_MODE0_Msk << (pin * 2));
    gpio->MODER |= mode << (pin * 2);
}
but if you love macros
#define GPIOxMODE(gpio,mode,port) {volatile uint32_t *mdr = &GPIO##gpio->MODER; *mdr &= ~(GPIO_MODER_MODE0_Msk << (port*2)); *mdr |= mode << (port * 2);}
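Either form is used the same way. A usage sketch, assuming the vendor (CMSIS) device header provides GPIOA and GPIO_MODER_MODE0_Msk, and that mode value 1 selects general-purpose output on this family:
void configure_pa5_as_output(void)
{
    GPIOMODE(GPIOA, 1u, 5);   /* inline-function version                     */
    GPIOxMODE(A, 1u, 5);      /* macro version: GPIO##gpio pastes to GPIOA   */
}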
I am looking to combine the mask and new value to reduce the number of logical and bit-shift operations.
You can't. You need to reset and then set the bits.
The method I came up with is to shift the mask, invert it, bitwise-AND it with the existing register value, and bitwise-OR the result with the shifted new value.
That, or an equivalent, is the way to do it.
I am looking to combine the mask and new value to reduce the number of logical and bit-shift operations. The goal is also to keep the process generic enough that I can use it for bit operations of 1, 2, 3 or 4 bit widths.
Is there a better way?
You must accomplish two basic objectives:
ensure that the bits that should be off in the affected range are in fact off, and
ensure that the bits that should be on in the affected range are in fact on.
In the general case, those require two separate operations: a bitwise AND to force bits off, and a bitwise OR (or XOR, if the bits are first cleared) to turn the wanted bits on. There may be ways to shortcut for specific cases of original and target values, but if you want something general-purpose, as you say, then your options are limited.
Personally, though, I think I would be inclined to build it from multiple pieces, separating the GPIO selection from the actual computation. At minimum, you can separate out a generic macro for setting a range of bits:
#define SETBITS32(x,bits,offset,mask) ((((uint32_t)(x)) & ~(((uint32_t)(mask)) << (offset))) | (((uint32_t)(bits)) << (offset)))
#define GPIOxMODE(gpio,mode,port) (GPIO##gpio->MODER = SETBITS32(GPIO##gpio->MODER, mode, (port) * 2, GPIO2BITMASK))
But do note that there appears to be no good way to avoid such a macro evaluating some of its arguments more than once. It might therefore be safer to write SETBITS32 as a function instead. The compiler will probably inline such a function in any case, but you can maximize the likelihood of that by declaring it static and inline:
static inline uint32_t SETBITS32(uint32_t x, uint32_t bits, unsigned offset, uint32_t mask) {
    return (x & ~(mask << offset)) | (bits << offset);
}
That's easier to read, too, though it, like the macro, does assume that bits has no set bits outside the mask region.
Of course there are other, similar formulations. For instance, if you do not need to support discontinuous bit ranges, you might specify a bit count instead of a bit mask. This alternative does that, protects against the user providing bits outside the specified range, and also has some parameter validation:
static inline uint32_t set_bitrange_32(uint32_t x, uint32_t bits, unsigned width, unsigned offset) {
    if (width + offset > 32) {
        // error: invalid parameters
        return x;
    } else if (width == 0) {
        return x;
    }
    uint32_t mask = ~(uint32_t)0 >> (32 - width);
    return (x & ~(mask << offset)) | ((bits & mask) << offset);
}
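As a quick check of the behaviour (a sketch, assuming set_bitrange_32 from above is in the same file; the starting value is just the reset value mentioned in the question):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t moder = 0xFFFFFFFFu;                 /* reset value from the question */
    moder = set_bitrange_32(moder, 2, 2, 5 * 2);  /* write mode 2 into bits 10..11 */
    printf("0x%08X\n", (unsigned)moder);          /* prints 0xFFFFFBFF             */
    return 0;
}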

Is using the most significant bit to tag a union considered a bad practice?

Suppose I have the following tagged union:
// f32 is a float of 32 bits
// uint32 is an unsigned int of 32 bits
struct f32_or_uint32 {
    char tag;
    union {
        f32 f;
        uint32 u;
    };
};
If tag == 0, then it is an f32. If tag == 1, then it is a uint32. There is only one problem with that representation: it uses 64 bits, when only 33 should be necessary. That is almost a 1/2 waste, which can be considerable when you are dealing with huge buffers. I never use the full 32 bits, so I thought of using one bit as the flag and doing this instead:
#define IS_UINT32(x)   (!((x) & 0x80000000))
#define IS_F32(x)      ((x) & 0x80000000)
#define MAKE_F32(x)    ((x) | 0x80000000)
#define EXTRACT_F32(x) ((x) & 0x7FFFFFFF)

union f32_or_uint32 {
    f32 f;
    uint32 u;
};
This way, I am using 31 bits for the value and only 1 for the tag. My question is: could this practice be detrimental to performance, maintainability and portability?
No, you can't do that. At least, not in the general sense.
An unsigned integer takes on 2^32 different values. It uses all 32 bits. Likewise, a float takes on (nearly) 2^32 different values. It uses all 32 bits.
With some care it might well be possible to isolate a bit that will always be 1 in one type and 0 for the other, across the range of values that you actually want to use. The high bit of unsigned int would be available if you decided to use values only up to 2^31. The low bit of float could be available if you didn't mind a small rounding error.
There is a better strategy available if the range of unsigned ints is smaller (say only 23 bits). You could select a high order bit pattern of 1+8 bits that was illegal for your usage of float. Perhaps you can manage without +/- infinity? Try 0x1ff.
To answer your other questions, it's relatively easy to create a new type like this in C++, using a class and some inline functions, and get good performance. Doing it with macros in C would tend to be more invasive of the code and more prone to bugs, but with similar performance. The instruction overhead required to do these tests and perhaps do some mask operations is unlikely to be detectable in most normal usages. Obviously that would have to be reconsidered in the case of a computationally intensive usage, but you can just see this as a typical space/speed trade-off.
Let's talk first about whether this works conceptually. This trick more or less works if you're storing unsigned 32-bit numbers but you know they will never be greater than 2^31. It works because all numbers smaller than 2^31 will always have a "0" in the high bit. If you know it will always be 0, you don't actually have to store it.
The trick also more or less works if you are storing floating point numbers that are never negative. For single-precision floating point numbers, the high bit indicates sign, and is always 0 if the number is positive. (This property of floating-point numbers is not nearly as well-known among programmers, so you'd want to document this).
So assuming your use case fits in these parameters, the approach works conceptually. Now let's investigate whether it is possible to express in C.
You can't perform bitwise operations on floating-point values; for more info see [Why you can't] perform a bitwise operation on floating point numbers. So to get at the floating-point number's bit pattern, you need to copy its bytes into an integer (or access them through a char*):
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t tagged_t;

tagged_t float_to_tagged(float f) {
    uint32_t ret;
    memcpy(&ret, &f, sizeof(f));
    // Make sure the user didn't pass us a negative number.
    assert((ret & 0x80000000) == 0);
    return ret | 0x80000000;
}
Don't worry about that memcpy() call -- any compiler worth its salt will optimize it away. This is the best and fastest way to get at the float's underlying bit pattern.
And you'd likewise need to use memcpy to get the original float back.
float tagged_to_float(tagged_t val) {
    float ret;
    val &= 0x7FFFFFFF;   // clear the tag bit
    memcpy(&ret, &val, sizeof(val));
    return ret;
}
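A quick round-trip sketch of how these two helpers might be used together (assuming the functions above are in the same translation unit; 1.5f is just an arbitrary positive value):
#include <assert.h>

int main(void)
{
    tagged_t t = float_to_tagged(1.5f);

    if (t & 0x80000000) {                    /* high bit set: it holds a float  */
        assert(tagged_to_float(t) == 1.5f);  /* the original value round-trips  */
    }
    return 0;
}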
I have answered your question directly because I believe in giving people the facts. That said, I agree with other posters who say this is unlikely to be your best design choice. Reflect on your use case: if you have very large buffers of these values, is it really the case that every single one can be either a uint32 or a float, and there is no pattern to it? If you can move this type information to a higher level, where the type info applies to all values in some part of the buffer, it will most definitely be more efficient than making your loops test the type of every value individually.
Using the high bit is going to be annoying on the most widespread platform, x86, because it's the sign bit and the most significant bit of unsigned ints.
A scheme that's IMO slightly better is to use the lowest bit instead, but that requires decoding (i.e. storing a shifted integer):
#include <stdio.h>

typedef union tag_uifp {
    unsigned int ui32;
    float fp32;
} uifp;

#define FLOAT_VALUE 0x00
#define UINT_VALUE  0x01

int get_type(uifp x) {
    return x.ui32 & 1;
}

unsigned get_uiv(uifp x) {
    return x.ui32 >> 1;
}

float get_fpv(uifp x) {
    return x.fp32;
}

uifp make_uiv(unsigned x) {
    uifp result;
    result.ui32 = 1 + (x << 1);
    return result;
}

uifp make_fpv(float x) {
    uifp result;
    result.fp32 = x;
    result.ui32 &= ~1;
    return result;
}

uifp data[10];

void setNumbers() {
    int i;
    for (i = 0; i < 10; i++) {
        data[i] = (i & 1) ? make_fpv(i/10.0) : make_uiv(i);
    }
}

void printNumbers() {
    int i;
    for (i = 0; i < 10; i++) {
        if (get_type(data[i]) == FLOAT_VALUE) {
            printf("%0.3f\n", get_fpv(data[i]));
        } else {
            printf("%u\n", get_uiv(data[i]));
        }
        data[i] = (i & 1) ? make_fpv(i) : make_uiv(i);
    }
}

int main(int argc, const char *argv[]) {
    setNumbers();
    printNumbers();
    return 0;
}
With this approach what you are losing is the least significant bit of precision from the float number (i.e. storing a float value and re-reading it is going to lose some accuracy) and only 31 bits are available for the integer.
You could instead try to use only NaN floating-point values, but this means that only 22 bits are easily available for the integers because of the float format (23 if you're also willing to give up infinity).
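To make the NaN option concrete, here is a sketch (mine, not from the answer) of how a NaN bit pattern can be recognised in IEEE-754 binary32:
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* binary32 layout: sign(1) | exponent(8) | mantissa(23).
   A NaN has exponent == 0xFF and a non-zero mantissa, so integer payloads
   stored in a NaN's low mantissa bits can never collide with ordinary floats. */
static bool is_nan_boxed(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (bits & 0x7F800000u) == 0x7F800000u   /* exponent all ones */
        && (bits & 0x007FFFFFu) != 0;            /* mantissa non-zero */
}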
The idea of using lowest bits for tagging is used often (e.g. Lisp implementations).

Applications of bitwise operators in C and their efficiency? [duplicate]

This question already has answers here:
Real world use cases of bitwise operators [closed]
(41 answers)
Closed 6 years ago.
I am new to bitwise operators.
I understand how the logic functions work to get the final result. For example, when you bitwise AND two numbers, the final result is going to be the AND of those two numbers (1 & 0 = 0; 1 & 1 = 1; 0 & 0 = 0). Same with OR, XOR, and NOT.
What I don't understand is their application. I tried looking everywhere and most of them just explain how bitwise operations work. Of all the bitwise operators I only understand the application of shift operators (multiplication and division). I also came across masking. I understand that masking is done using bitwise AND but what exactly is its purpose and where and how can I use it?
Can you elaborate on how I can use masking? Are there similar uses for OR and XOR?
The low-level use case for the bitwise operators is to perform base 2 math. There is the well known trick to test if a number is a power of 2:
if (x > 0 && (x & (x - 1)) == 0) {
    printf("%d is a power of 2\n", x);
}
But, it can also serve a higher-level function: set manipulation. You can think of a collection of bits as a set. To explain, let each bit in a byte represent one of 8 distinct items, say the planets in our solar system (Pluto is no longer considered a planet, so 8 bits are enough!):
#define Mercury (1 << 0)
#define Venus (1 << 1)
#define Earth (1 << 2)
#define Mars (1 << 3)
#define Jupiter (1 << 4)
#define Saturn (1 << 5)
#define Uranus (1 << 6)
#define Neptune (1 << 7)
Then, we can form a collection of planets (a subset) using |:
unsigned char Giants = (Jupiter|Saturn|Uranus|Neptune);
unsigned char Visited = (Venus|Earth|Mars);
unsigned char BeyondTheBelt = (Jupiter|Saturn|Uranus|Neptune);
unsigned char All = (Mercury|Venus|Earth|Mars|Jupiter|Saturn|Uranus|Neptune);
Now, you can use a & to test if two sets have an intersection:
if (Visited & Giants) {
    puts("we might be giants");
}
The ^ operation is often used to see what is different between two sets (the union of the sets minus their intersection):
if (Giants ^ BeyondTheBelt) {
    puts("there are non-giants out there");
}
So, think of | as union, & as intersection, and ^ as union minus the intersection.
Once you buy into the idea of bits representing a set, then the bitwise operations are naturally there to help manipulate those sets.
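Membership can be updated with the same operators. A small sketch building on the planet masks above:
#include <stdio.h>

/* assumes the planet #defines above */
void update_example(void)
{
    unsigned char visited = (Venus|Earth|Mars);

    visited |= Jupiter;       /* add Jupiter to the set     */
    visited &= ~Venus;        /* remove Venus from the set  */
    visited ^= Mars;          /* toggle Mars' membership    */

    if (visited & Jupiter)    /* test membership            */
        puts("we made it to Jupiter");
}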
One application of bitwise ANDs is checking if a single bit is set in a byte. This is useful in networked communication, where protocol headers attempt to pack as much information into the smallest area as is possible in an effort to reduce overhead.
For example, the IPv4 header uses the first 3 bits of its 6th byte to tell whether the given IP packet can be fragmented and, if so, whether to expect more fragments of the given packet to follow. If these fields were a full byte each instead, each IP packet would be 21 bits larger than necessary. That translates to a huge amount of unnecessary data through the internet every day.
To retrieve these 3 bits, a bitwise AND can be used alongside a bit mask to determine whether they are set.
unsigned char mymask = 0x80;
if ((ipheader[6] & mymask) == mymask) {
    // the top bit of the 6th byte of the IP header is set
}
Small sets, as has been mentioned. You can do a surprisingly large number of operations quickly, intersection and union and (symmetric) difference are obviously trivial, but for example you can also efficiently:
get the lowest item in the set with x & -x
remove the lowest item from the set with x & (x - 1)
add all items smaller than the smallest present item
add all items higher than the smallest present item
calculate their cardinality (though the algorithm is nontrivial)
permute the set in some ways, that is, change the indexes of the items (not all permutations are equally efficient)
calculate the lexicographically next set that contains as many items (Gosper's Hack)
1 and 2 and their variations can be used to build efficient graph algorithms on small graphs, for example see algorithm R in The Art of Computer Programming 4A.
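For instance, tricks 1 and 2 combine into the standard loop for visiting every item of such a set, lowest first (a sketch):
#include <stdint.h>
#include <stdio.h>

void print_items(uint32_t set)
{
    while (set != 0) {
        uint32_t lowest = set & -set;                    /* trick 1: isolate the lowest item */
        printf("item mask: 0x%08X\n", (unsigned)lowest);
        set &= set - 1;                                  /* trick 2: remove the lowest item  */
    }
}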
Other applications of bitwise operations include, but are not limited to,
Bitboards, important in many board games. Chess without bitboards is like Christmas without Santa. Not only is it a space-efficient representation, you can do non-trivial computations directly with the bitboard (see Hyperbola Quintessence)
sideways heaps, and their application in finding the Nearest Common Ancestor and computing Range Minimum Queries.
efficient cycle-detection (Gosper's Loop Detection, found in HAKMEM)
adding offsets to Z-curve addresses without deconstructing and reconstructing them (see Tesseral Arithmetic)
These uses are more powerful, but also advanced, rare, and very specific. They show, however, that bitwise operations are not just a cute toy left over from the old low-level days.
Example 1
If you have 10 booleans that "work together", you can simplify your code a lot.
int B1 = 0x001;
int B2 = 0x002;
/* ... */
int B10 = 0x200;
int someValue = get_a_value_from_somewhere();
if ((someValue & (B1 | B10)) == (B1 | B10)) {
    // both B1 and B10 are set
}
Example 2
Interfacing with hardware. A hardware register may need bit-level access to control the interface, e.g. an overflow bit on a buffer, or a status byte that can tell you the status of 8 different things. Using bit masking you can get down to the actual bit of info you need.
if (status_reg & 0x80) {
    // top bit in the byte is set, which may have special meaning.
}
This is really just a specialized case of example 1.
Bitwise operators are particularly useful in systems with limited resources as each bit can encode a boolean. Using many chars for flags is wasteful as each takes one byte of space (when they could be storing 8 flags each).
Commonly microcontrollers have C interfaces for their IO ports in which each bit controls 1 of 8 ports. Without bitwise operators these would be quite difficult to control.
Regarding masking, it is common to use both & and |:
x & 0x0F //ensures the 4 high bits are 0
x | 0x0F //ensures the 4 low bits are 1
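Combining the two gives the usual pattern for overwriting a group of bits; a small illustration:
unsigned char set_low_nibble(unsigned char x, unsigned char value)
{
    /* clear the low 4 bits of x, then copy in the low 4 bits of value */
    return (unsigned char)((x & 0xF0) | (value & 0x0F));
}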
In microcontroller applications, you can use bitwise operations to switch between port pins. If we would like to turn on a single pin while turning off the rest, the following code can be used (it walks a single set bit across PORTB):
void main()
{
    unsigned char ON = 1;
    TRISB = 0;
    PORTB = 0;
    while(1) {
        PORTB = ON;
        delay_ms(200);
        ON = ON << 1;
        if (ON == 0) ON = 1;
    }
}

C: how to build up a binary integer

I have some logic that I would like to store as an integer. I have 30 "positions" that can be either yes or no and I would like to represent this as an integer. As I am looping through these positions what would be the easiest way to store this information as an integer?
You can use a 32 bit uint:
uint32_t flags = 0;
flags |= UINT32_C(1) << x; // set x'th bit from right
flags &= ~(UINT32_C(1) << x); // unset x'th bit from right
if (flags & UINT32_C(1) << x) // test x'th bit from right
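Since you mention looping through the positions, here is a sketch of building the value in a loop (yes_at() is a hypothetical predicate standing in for however you determine each answer):
#include <stdbool.h>
#include <stdint.h>

extern bool yes_at(int position);   /* hypothetical: decides yes/no for a position */

uint32_t build_flags(void)
{
    uint32_t flags = 0;
    for (int i = 0; i < 30; i++) {
        if (yes_at(i))
            flags |= UINT32_C(1) << i;   /* record a "yes" in bit i */
    }
    return flags;
}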
struct {
    unsigned int flag0 : 1;
    unsigned int flag1 : 1;
    ...
    unsigned int flag31 : 1;
} myFlags;
Using :x in the definition of an integer struct member means a bit-field with x bits assigned.
You can access each struct member as usual, but the values can only be what fits in that many bits (in my example either 1 or 0, because only 1 bit is available), and the compiler will enforce it. The struct will (probably, depending on the compiler settings) be packed into the smallest number of integers needed to hold the total number of bits.
Another option would be using an int and the bitwise operators & and | to access specific bits. In this case you have to make sure yourself that setting one bit won't affect another, and that there are no overflows etc.
#define POSITION_A 1
#define POSITION_B 2

unsigned int position = 0;

// set a position
position |= POSITION_A;

// clear a position
position &= ~(POSITION_A);
Yes, as WTP commented, you could store all your data in one unsigned int (uint32_t) and access it with AND (&), OR (|), and NOT (~).
If saving storage is not a primary concern, however, I recommend not using this compact technique.
You may need to expand your code to support more than 2 kinds of answer (yes/no), such as yes/no/maybe.
You may end up with more than 30 questions, which would not fit into one unsigned int.
If I were you, I'd use an array of small integers (short or char) to store the values. It wastes some storage, but it is much easier to read, and much easier to add more features.
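For instance, a sketch of that alternative (the array size and encoding are just illustrative):
enum { NUM_POSITIONS = 30 };

/* one byte per position: 0 = no, 1 = yes, with room for more states later */
static unsigned char answers[NUM_POSITIONS];

void record_answer(int position, unsigned char value)
{
    answers[position] = value;
}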
