C : How do I change bits in specific positions to specific values? - c

I tried finding this question, but all the other questions don't relate to my problem.
My issue: I have something like 0xFreeFoodU where I have to get specific positions and either flip them or make them 1s or 0s.
So for example, bits in positions 2, 6, 10, 14, 18, 22, 26, and 30 should be unchanged, whereas the bits in positions 3, 7, 11, 15, 19, 23, 27, and 31 should be changed to 1. I don't want to post my entire prompt because I don't want to cheat and get someone else to do my homework for me. But an answer to at least one of these would help a lot.
This is bit manipulation. But I have no clue how to manipulate specific bits in specific positions. :(
EDIT: I can't upload a full program; it's too long. But I have a main function where I call the function I need. The function should ideally just have a return and so on; so far I have
return val_num ^ 0x22222222U;
But I should be adding to it. I only want help with how to set certain bits to 1 and 0. Is masking required?

If you managed to do the XOR part, the part setting bits to 1 is nearly the same:
Create a mask for each operation:
unsigned int mask_0 = 0x....; // bits that shall be set to 0
unsigned int mask_1 = 0x88888888; // bits that shall be set to 1
// Note: this is 1<<3 | 1<<7 | 1<<11 ... All bits set that shall be changed
unsigned int mask_toggle = 0x22222222; // bits that shall be toggled
unsigned int value = 0xdeadbeef;
Perform required operation:
// toggle bits
value = value ^ mask_toggle;
// set bits to 1
value = value | mask_1;
// set bits to 0
value = value & ~mask_0;
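Putting those three operations together, a minimal sketch (mask_0 is left out, since its value is unspecified above):

```c
/* Apply the toggle and set-to-1 masks from the snippet above. */
static unsigned int apply_masks(unsigned int value)
{
    const unsigned int mask_1      = 0x88888888u; /* bits to force to 1 */
    const unsigned int mask_toggle = 0x22222222u; /* bits to flip       */

    value ^= mask_toggle; /* toggle bits   */
    value |= mask_1;      /* set bits to 1 */
    /* value &= ~mask_0 would clear bits, once mask_0 is chosen */
    return value;
}
```

For example, apply_masks(0xdeadbeefu) yields 0xfc8f9ccdu: bit 1 of every nibble flipped, bit 3 of every nibble forced on.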

Use the bitwise OR operator (|) to set a bit.
number |= (1 << bitposition);
Use the bitwise AND operator (&) to clear a bit.
number &= ~(1 << bitposition);
The XOR operator (^) can be used to toggle a bit.
number ^= (1 << bitposition);
This is the program:
#include <stdio.h>

int main(void)
{
    unsigned int a[] = { 3, 7, 11, 15, 19, 23, 27, 31 }, b = 1200, i;
    for (i = 0; i < 8; i++)
        b ^= (1u << a[i]);   /* unsigned shift: 1 << 31 on int is undefined */
    printf("%u\n", b);
    return 0;
}


How to efficiently count leading zeros in a 24 bit unsigned integer?

Most of the clz() (SW impl.) are optimized for 32 bit unsigned integer.
How to efficiently count leading zeros in a 24 bit unsigned integer?
UPD. Target's characteristics:
CHAR_BIT 24
sizeof(int) 1
sizeof(long int) 2
sizeof(long long int) 3
TL;DR: See point 4 below for the C program.
Assuming your hypothetical target machine is capable of correctly implementing unsigned 24-bit multiplication (which must return the low-order 24 bits of the product), you can use the same trick as is shown in the answer you link. (But you might not want to. See [Note 1].) It's worth trying to understand what's going on in the linked answer.
The input is reduced to a small set of values, where all integers with the same number of leading zeros map to the same value. The simple way of doing that is to flood every bit to cover all the bit positions to the right of it:
x |= x>>1;
x |= x>>2;
x |= x>>4;
x |= x>>8;
x |= x>>16;
That will work for 17 up to 32 bits; if your target datatype had 9 to 16 bits, you could leave off the last shift-and-or, because there is no bit position 16 bits to the right of any bit. And so on. But with 24 bits, you'll want all five shift-and-ors.
With that, you've turned x into one of 25 values (for 24-bit ints):
x clz x clz x clz x clz x clz
-------- --- -------- --- -------- --- -------- --- -------- ---
0x000000 24 0x00001f 19 0x0003ff 14 0x007fff 9 0x0fffff 4
0x000001 23 0x00003f 18 0x0007ff 13 0x00ffff 8 0x1fffff 3
0x000003 22 0x00007f 17 0x000fff 12 0x01ffff 7 0x3fffff 2
0x000007 21 0x0000ff 16 0x001fff 11 0x03ffff 6 0x7fffff 1
0x00000f 20 0x0001ff 15 0x003fff 10 0x07ffff 5 0xffffff 0
Now, to turn x into clz, we need a good hash function. We don't necessarily expect that hash(x) == clz(x), but we want the 25 possible x values to hash to different numbers, ideally in a small range. As with the link you provide, the hash function we'll choose is to multiply by a carefully-chosen multiplicand and then mask off a few bits. Using a mask means that we need to choose five bits; in theory, we could use a 5-bit mask anywhere in the 24-bit word, but in order to not have to think too much, I just chose the five high-order bits, the same as the 32-bit solution. Unlike the 32-bit solution, I didn't bother adding 1, and I expect distinct values for all 25 possible inputs. The equivalent isn't possible with a five-bit mask and 33 possible clz values (as in the 32-bit case), so they have to jump through an additional hoop if the original input was 0.
Since the hash function doesn't directly produce the clz value, but rather a number between 0 and 31, we need to translate the result to a clz value, which uses a 32-byte lookup table, called debruijn in the 32-bit algorithm for reasons I'm not going to get into.
An interesting question is how to select a multiplier with the desired characteristics. One possibility would be to do a bunch of number theory to elegantly discover a solution. That's how it was done decades ago, but these days I can just write a quick-and-dirty Python program to do a brute force search over all the possible multipliers. After all, in the 24-bit case there are only about 16 million possibilities and lots of them work. The actual Python code I used is:
# Compute the 25 target values
targ=[2**i - 1 for i in range(25)]
# For each possible multiplier, compute all 25 hashes, and see if they
# are all different (that is, the set of results has size 25):
next(i for i in range(2**19, 2**24)
     if len(targ) == len(set(((i * t) >> 19) & 0x1f
                             for t in targ)))
Calling next on a generator expression returns the first generated value, which in this case is 0x8CB4F, or 576335. Since the search starts at 0x80000 (which is the smallest multiplier for which hash(1) is not 0), the result printed instantly. I then spent a few more milliseconds to generate all the possible multipliers between 2**19 and 2**20, of which there are 90, and selected 0xCAE8F (831119) for purely personal aesthetic reasons.
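For readers who prefer C, here is a rough port of that brute-force search (the function name is mine). Only bits 19..23 of the product are kept, so ordinary 32-bit wrap-around multiplication produces the same hashes as Python's arbitrary-precision integers:

```c
#include <stdint.h>

/* Find the smallest multiplier that hashes all 25 flooded values
 * (2^k - 1 for k = 0..24) to distinct 5-bit indices. */
static uint32_t find_multiplier(void)
{
    for (uint32_t m = 1u << 19; m < 1u << 24; ++m) {
        uint32_t seen = 0;                  /* bitset of hashes already used */
        int distinct = 1;
        for (int k = 0; k <= 24; ++k) {
            uint32_t t = (1u << k) - 1;     /* 0, 1, 3, 7, ..., 0xffffff */
            uint32_t h = ((m * t) >> 19) & 0x1f;
            if (seen & (1u << h)) { distinct = 0; break; }
            seen |= 1u << h;
        }
        if (distinct)
            return m;                       /* smallest working multiplier */
    }
    return 0;                               /* not reached */
}
```

The answer above reports the first hit as 0x8CB4F.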
The last step is to create the lookup table from the computed hash function. (Not saying this is good Python. I just took it from my command history; I might come back and clean it up later. But I included it for completeness.):
lut = dict((i, -1) for i in range(32))
lut.update((((v * 0xcae8f) >> 19) & 0x1f, 24 - i)
           for i, v in enumerate(targ))
print("    static const char lut[] = {\n        " +
      ",\n        ".join(', '.join(f"{lut[i]:2}" for i in range(j, j + 8))
                         for j in range(0, 32, 8)) +
      "\n    };\n")
# The result is pasted into the C code below.
So then it's just a question of assembling the C code:
// Assumes that `unsigned int` has 24 value bits.
int clz(unsigned x) {
    static const char lut[] = {
        24, 23,  7, 18, 22,  6, -1,  9,
        -1, 17, 15, 21, 13,  5,  1, -1,
         8, 19, 10, -1, 16, 14,  2, 20,
        11, -1,  3, 12,  4, -1,  0, -1
    };
    x |= x>>1;
    x |= x>>2;
    x |= x>>4;
    x |= x>>8;
    x |= x>>16;
    return lut[((x * 0xcae8f) >> 19) & 0x1f];
}
The test code calls clz on every 24-bit integer in turn. Since I don't have a 24-bit machine handy, I just assume that the arithmetic will work the same on the hypothetical 24-bit machine in the OP.
#include <stdio.h>

/* For each 24-bit integer in turn (from 0 to 2**24 - 1), if
 * clz(i) is different from clz(i-1), print clz(i) and i.
 *
 * Expected output is 0 and the powers of 2 up to 2**23, with
 * descending clz values from 24 to 0.
 */
int main(void) {
    int prev = -1;
    for (unsigned i = 0; i < 1<<24; ++i) {
        int pfxlen = clz(i);
        if (pfxlen != prev) {
            printf("%2d 0x%06X\n", pfxlen, i);
            prev = pfxlen;
        }
    }
    return 0;
}
Notes:
If the target machine does not implement 24-bit unsigned multiply in hardware --i.e., it depends on a software emulation-- then it's almost certainly faster to do the clz by just looping over initial bits, particularly if you fold the loop by scanning several bits at a time with a lookup table. That might be faster even if the machine does do efficient hardware multiplies. For example, you can scan six bits at a time with a 32-entry table:
// Assumes that `unsigned int` has 24 value bits.
int clz(unsigned int x) {
    static const char lut[] = {
        5, 4, 3, 3, 2, 2, 2, 2,
        1, 1, 1, 1, 1, 1, 1, 1,
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0
    };
    /* Six bits at a time makes octal easier */
    if (x & 077000000u) return lut[x >> 19];
    if (x & 0770000u)   return lut[x >> 13] + 6;
    if (x & 07700u)     return lut[x >> 7] + 12;
    if (x)              return lut[x >> 1] + 18;
    return 24;
}
That table could be reduced to 48 bits but the extra code would likely eat up the savings.
A couple of clarifications seem to be in order here. First, although we're scanning six bits at a time, we only use five of them to index the table. That's because we've previously verified that the six bits in question are not all zero; in that case, the low-order bit is either irrelevant (if some other bit is set) or it's 1. Also, we get the table index by shifting without masking; the masking is unnecessary because we know from the masked tests that all the higher order bits are 0. (This will, however, fail miserably if x has more than 24 bits.)
Convert the 24 bit integer into a 32 bit one (either by type punning or by explicitly shuffling the bits around), then do the 32 bit clz, and subtract 8.
Why do it that way? Because in this day and age you'll be hard pressed to find a machine that deals with 24 bit types, natively, in the first place.
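A minimal sketch of that approach, assuming a gcc/clang-style __builtin_clz on a 32-bit unsigned int (the function name is mine; v must be nonzero):

```c
#include <stdint.h>

/* clz over the low 24 bits: widen to 32 bits, count, drop the 8 extra zeros. */
static int clz24(uint32_t v)
{
    return __builtin_clz(v & 0xFFFFFFu) - 8;  /* the top 8 bits are always 0 */
}
```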
I would look for the builtin function or intrinsic available for your platform and compiler. Those functions usually implement the most efficient way of finding the most significant bit number. For example, gcc has __builtin_clz function.
If the 24 bit integer is stored in a byte array (for example received from a sensor):
#include <limits.h>
#include <string.h>

#define BITS(x) (CHAR_BIT * sizeof(x) - 24)

int unaligned24clz(const void * restrict val)
{
    unsigned u = 0;
    memcpy(&u, val, 3);
#if defined(__GNUC__)
    return __builtin_clz(u) - BITS(u);
#elif defined(__ICCARM__)
    return __CLZ(u) - BITS(u);
#elif defined(__arm__)
    return __clz(u) - BITS(u);
#else
    return clz(u) - BITS(u); // portable version using standard C features
#endif
}
If it is stored in a valid integer:
int clz24(const unsigned u)
{
#if defined(__GNUC__)
    return __builtin_clz(u) - BITS(u);
#elif defined(__ICCARM__)
    return __CLZ(u) - BITS(u);
#elif defined(__arm__)
    return __clz(u) - BITS(u);
#else
    return clz(u) - BITS(u); // portable version using standard C features
#endif
}
https://godbolt.org/z/z6n1rKjba
You can add support for more compilers if you need it.
Remember, if the value is 0, the result of __builtin_clz is undefined, so you will need to add another check.
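For example, one hedged way to add that check (the wrapper name is mine), defining the all-zero result as 24:

```c
#include <limits.h>

/* Zero-safe wrapper (a sketch): mask to 24 bits first, then guard the
 * undefined __builtin_clz(0) case by defining the result as 24. */
static int clz24_safe(unsigned u)
{
    u &= 0xFFFFFFu;               /* keep only the 24 value bits */
    if (u == 0)
        return 24;                /* all bits are leading zeros */
    return __builtin_clz(u) - (int)(CHAR_BIT * sizeof u - 24);
}
```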

Fast bit concatenation reversing one of arguments

I need to merge two variables into one like in example below:
x = 0b110101
y = 0b10011
merged(x,y) -> 0b11010111001
merged(y,x) -> 0b10011101011
So, y is reversed and concatenated to x without unnecessary zeros (the last bit of the result is always 1).
In other words: merged(abcdefg, hijklm) results in abcdefgmlkjih where a and h are 1
What would be the most efficient way to do this in C?
EDIT: most efficient = fastest; sizeof(x) == sizeof(y) == 1, CHAR_BIT == 8
EDIT2: Since question has been put on hold I will post summary right here:
Further limitations and more detail:
Target language: C with 64-bit operations support
For this exact problem the fastest method would be a 256 element lookup table, as was suggested in the comments, which returns the index of the high bit N and the reversed value of the second argument, so we can shift the first argument left by N and bitwise OR it with the reversed value of the second argument.
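As a sketch of that table-driven idea (the table layout and names are my assumptions, not code from the question):

```c
#include <stdint.h>

/* For every byte value, precompute its bit length and its reversal with
 * leading zeros dropped; merging is then one table lookup, a shift, an OR. */
static uint8_t bitlen[256];   /* number of significant bits; bitlen[0] == 0 */
static uint8_t revbits[256];  /* bits reversed, e.g. 10011 -> 11001 */

static void init_tables(void)
{
    for (int v = 1; v < 256; ++v) {
        int n = 0;
        uint8_t r = 0;
        for (int t = v; t; t >>= 1, ++n)
            r = (uint8_t)((r << 1) | (t & 1));
        bitlen[v] = (uint8_t)n;
        revbits[v] = r;
    }
}

static uint32_t merged(uint8_t x, uint8_t y)
{
    if (bitlen[255] == 0)
        init_tables();        /* lazy one-time initialization */
    return ((uint32_t)x << bitlen[y]) | revbits[y];
}
```

With the question's example, merged(0x35, 0x13) (that is, 0b110101 and 0b10011) produces 0b11010111001.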
In case someone needs good performance for arguments larger than 8 bits (where a lookup table is not an option), they could:
Find index of high bit using de Bruijn algorithm. Example for 32 bits:
uint32_t msbDeBruijn32( uint32_t v )
{
    static const int MultiplyDeBruijnBitPosition[32] =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    v |= v >> 1; // first round down to one less than a power of 2
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return MultiplyDeBruijnBitPosition[( uint32_t )( v * 0x07C4ACDDU ) >> 27];
}
taken from this question:
Find most significant bit (left-most) that is set in a bit array
Reverse bits in bytes (one by one) of the second argument using this bit twiddling hack
unsigned char b; // reverse this (8-bit) byte
b = (b * 0x0202020202ULL & 0x010884422010ULL) % 1023;
taken from here: http://graphics.stanford.edu/~seander/bithacks.html#BitReverseObvious
So, this question now can be closed/deleted
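Putting those two pieces together for the 8-bit case (a hedged sketch: the names are mine, the byte reversal is the multiply-and-modulus hack quoted above, and a plain loop stands in for the de Bruijn high-bit search at this small width):

```c
#include <stdint.h>

/* Index (0..7) of the highest set bit; v must be nonzero. */
static unsigned msb8(uint8_t v)
{
    unsigned n = 0;
    while (v >>= 1)
        ++n;
    return n;
}

/* Reverse an 8-bit byte (the bit-twiddling hack quoted above). */
static uint8_t reverse8(uint8_t b)
{
    return (uint8_t)((b * 0x0202020202ULL & 0x010884422010ULL) % 1023);
}

/* merged(abcdefg, hijklm) -> abcdefgmlkjih: reverse y, drop its leading
 * zeros, and append it to x. Both arguments must be nonzero. */
static uint32_t merged(uint8_t x, uint8_t y)
{
    unsigned n = msb8(y);                            /* y occupies n+1 bits */
    uint8_t rev = (uint8_t)(reverse8(y) >> (7 - n)); /* mlkjih, no padding  */
    return ((uint32_t)x << (n + 1)) | rev;
}
```

This reproduces the example above: merged(0b110101, 0b10011) gives 0b11010111001.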
Solution
This should do the trick in C++ and C:
unsigned int merged(unsigned char a, unsigned char b) {
    unsigned int r = a;
    for (; b; b >>= 1)
        r = (r << 1) | (b & 1);
    return r;
}
I don't think there's a more efficient way to do it. Here's an online demo.
How does it work ?
This works by building the result by starting with the first argument:
r is 0b110101;
Then it left-shifts r, which is, using your terminology, like concatenating a 0 at the lowest end of r:
0b110101 << 1 is 0b1101010;
and simultaneously we get the lowest bit of b with a bitwise and, then set the lowest bit of r to the same value using a bitwise or:
0b10011 & 1 is 0b00001
so (0b110101 << 1)|(0b10011 & 1) is 0b1101011;
We then right-shift b to process the next lowest bit in the same way:
0b10011 >> 1 is 0b1001;
as long as there is some bit set left in b.
Important remarks
You have to be careful about the type used in order to avoid the risk of overflow. So for 2 unsigned char as input you could use unsigned int as output.
Note that using signed integers would be risky, if the left-shift might cause the sign bit to change. This is why unsigned is important here.

Changing more than one bit in a register

I have an 8 bit register and I want to change bits 4,5 and 6 without altering the other bits.
Those bits can take values from 000 to 111 (regardless of their previous state).
Is there a method to change them in one step or must I change them individually?
You need a mask to put the requested bits in a known state, 0 being the most convenient; then set the bits that you want with an OR operation and write back:
#define mask 0x70 // 01110000b bits 4, 5 & 6 set
reg = (reg & ~mask) | (newVal & mask);
We use the inverted mask to clear the bits to change, and the plain mask to clear the bits of the new value that we don't want to interfere.
If you are sure that the unwanted bits of the new value are always 0 you can simplify:
#define mask 0x8f // 10001111b bit 4, 5 & 6 reset
reg = (reg & mask) | newVal; //newVal must have always bits 7, 3, 2, 1 & 0 reset.
You can do it with bitwise operations, i.e. first clear the 3 bits and then set them:
unsigned char value = 0x70;
unsigned char r = 0xFF;
r = (r & 0x8F) | value;
You can use bit-field inside a struct:
typedef struct {
    unsigned char b0_3 : 4;
    unsigned char b4_6 : 3;
    unsigned char b7   : 1;
} your_reg_type;

your_reg_type my_register;
// modify only the bits you need
my_register.b4_6 = 0x02;
Check how your compiler orders the bits inside the struct before trying this, and order your bit-field accordingly.
A number of solutions and variations are possible (and have been suggested already), but if the value of the three consecutive bits has meaning in itself (i.e. it is a value 0 to 7, rather than simply a collection of independent flag or control bits), it may be useful to keep the interface as a simple numeric range 0 to 7 rather than directly encoding details of bit position within the value. In which case:
assert( val <= 7 ) ; // During debug (when NDEBUG not defined) the
// assert trap will catch out-of-range inputs
reg = (reg & ~0x70) | (val << 4) ; // 0x70 selects bits 4..6: clear them, then insert val
Of course there is some small cost to pay for simplifying the interface in this way (by adding a shift operation), but the advantage is that knowledge of the details of the register field layout is restricted to one place.
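A worked sketch of that interface (the function name is mine), with the field at bits 4..6:

```c
#include <assert.h>

/* Write a 3-bit value (0..7) into bits 4..6 of an 8-bit register image,
 * leaving the other bits untouched. */
static unsigned char set_field_4_6(unsigned char reg, unsigned val)
{
    assert(val <= 7);              /* catch out-of-range inputs in debug */
    return (unsigned char)((reg & ~0x70u) | (val << 4));
}
```

For example, set_field_4_6(0xFF, 5) returns 0xDF: bits 4..6 become 101 and everything else is preserved.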

How to get position of right most set bit in C

int a = 12;
For eg: binary of 12 is 1100, so the answer should be 3, as the 3rd bit from the right is set.
I want the position of the rightmost set bit of a. Can anyone tell me how I can do so?
NOTE: I want the position only; I don't want to set or reset the bit. So it is not a duplicate of any question on stackoverflow.
This answer, Unset the rightmost set bit, tells how to both get and unset the rightmost set bit for an unsigned integer or a signed integer represented as two's complement.
get rightmost set bit,
x & -x
// or
x & (~x + 1)
unset rightmost set bit,
x &= x - 1
// or
x -= x & -x // rhs is rightmost set bit
why it works
x:            <leading bits> 1 <all 0>
~x:           <reversed leading bits> 0 <all 1>
~x + 1 (-x):  <reversed leading bits> 1 <all 0>
x & -x:       <all 0> 1 <all 0>
For example, let x = 112, and choose 8 bits for simplicity, though the idea is the same for any size of integer.
// example for get rightmost set bit
x: 01110000
~x: 10001111
-x or ~x + 1: 10010000
x & -x: 00010000
// example for unset rightmost set bit
x: 01110000
x-1: 01101111
x & (x-1): 01100000
Finding the (0-based) index of the least significant set bit is equivalent to counting how many trailing zeros a given integer has. Depending on your compiler there are builtin functions for this, for example gcc and clang support __builtin_ctz.
For MSVC you would need to implement your own version, this answer to a different question shows a solution making use of MSVC intrinsics.
Given that you are looking for the 1-based index, you simply need to add 1 to ctz's result in order to achieve what you want.
int a = 12;
int least_bit = __builtin_ctz(a) + 1; // least_bit = 3
Note that this operation is undefined if a == 0. Furthermore there exist __builtin_ctzl and __builtin_ctzll which you should use if you are working with long and long long instead of int.
One can use the property of 2s-complement here.
Fastest way to find 2s-complement of a number is to get the rightmost set bit and flip everything to the left of it.
For example: consider a 4 bit system
/* Number in binary */
4 = 0100
/* 2s complement of 4 */
complement = 1100
/* which nothing but */
complement == -4
/* Result */
4 & (-4) = 0100
Notice that there is only one set bit, and it is at the rightmost set bit position of 4.
Similarly we can generalise this for n.
n&(-n) will contain only one set bit which is actually at the rightmost set bit position of n.
Since there is only one set bit in n&(-n), it is a power of 2.
So finally we can get the bit position by:
log2(n&(-n))+1
The rightmost set bit of n can be obtained using the formula:
n & ~(n-1)
This works because when you calculate (n-1), you turn all the zeros up to the rightmost set bit into 1s, and the rightmost set bit into 0.
Then you take a NOT of it, which leaves you with the following:
x = ~(bits from the original number) + (rightmost 1 bit) + trailing zeros
Now, if you do (n & x), you get what you need, as the only bit that is 1 in both n and x is the rightmost set bit.
Phewwwww .. :sweat_smile:
http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
helped me understand this.
There is a neat trick in Knuth 7.1.3 where you multiply by a "magic" number (found by a brute-force search) that maps the first few bits of the number to a unique value for each position of the rightmost bit, and then you can use a small lookup table. Here is an implementation of that trick for 32-bit values, adapted from the nlopt library (MIT/expat licensed).
/* Return position (0, 1, ...) of rightmost (least-significant) one bit in n.
*
* This code uses a 32-bit version of algorithm to find the rightmost
* one bit in Knuth, _The Art of Computer Programming_, volume 4A
* (draft fascicle), section 7.1.3, "Bitwise tricks and
* techniques."
*
* Assumes n has a 1 bit, i.e. n != 0
*
*/
static unsigned rightone32(uint32_t n)
{
    const uint32_t a = 0x05f66a47;   /* magic number, found by brute force */
    static const unsigned decode[32] = {
        0, 1, 2, 26, 23, 3, 15, 27, 24, 21, 19, 4, 12, 16, 28, 6,
        31, 25, 22, 14, 20, 18, 11, 5, 30, 13, 17, 10, 29, 9, 8, 7
    };
    n = a * (n & (-n));
    return decode[n >> 27];
}
Try this
int set_bit = n ^ (n&(n-1));
Explanation:
As noted in this answer, n & (n-1) unsets the last set bit.
So, if we unset the last set bit and XOR the result with the number, then by the nature of the XOR operation the last set bit becomes 1 while the rest of the bits become 0. Note that this gives the bit's value (a power of two), not its index.
1- Subtract 1 from the number: (a-1)
2- Take its negation: ~(a-1)
3- Take the AND with the original number:
int last_set_bit = a & ~(a-1);
The reason behind the subtraction is that (a-1) turns all the trailing zeros into 1s and the last set bit into 0; negating that leaves a 1 only at the last set bit (and flips the higher bits), so ANDing with the original number yields exactly the last set bit.
Check if a & 1 is 0. If so, shift right by one until it's not zero. The number of times you shift is the (0-based) position of the rightmost set bit.
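That loop can be sketched as follows (it would spin forever for a == 0, so callers must guard that case):

```c
/* Count how many times we can shift right before the low bit is 1. */
static int rightmost_index(unsigned a)
{
    int pos = 0;
    while ((a & 1u) == 0) {  /* low bit clear: keep shifting */
        a >>= 1;
        ++pos;
    }
    return pos;
}
```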
You can isolate the rightmost set bit (as a power of two, not as a position) by taking the bitwise XOR of n and (n & (n-1)):
int bit = n ^ (n & (n-1));
I inherited this one, with a note that it came from HAKMEM (try it out here). It works on both signed and unsigned integers, logical or arithmetic right shift. It's also pretty efficient.
#include <stdio.h>

int rightmost1(int n) {
    int pos, temp;
    for (pos = 0, temp = ~n & (n - 1); temp > 0; temp >>= 1, ++pos)
        ;
    return pos;
}

int main()
{
    int pos = rightmost1(16);
    printf("%d", pos);
}
You must check all 32 bits, starting at index 0 and working your way to the left. If the bitwise AND of a with a 1 bit at that position is non-zero, the bit is set.
#include <limits.h>

int last_set_pos(int a) {
    for (unsigned i = 0; i < sizeof a * CHAR_BIT; ++i) {
        if (a & (1u << i)) return (int)i;  // 1u: shifting a signed 1 into bit 31 is undefined
    }
    return -1; // a == 0
}
On typical systems int will be 32 bits, but doing sizeof a * CHAR_BIT will get you the right number of bits in a, even if it's a different size.
According to dbush's solution, try this:
int rightMostSet(int a) {
    if (!a) return -1;      // means there isn't any 1-bit
    int i = 0;
    while ((a & 1) == 0) {  // note the parentheses: a & 1 == 0 parses as a & (1 == 0)
        i++;
        a >>= 1;
    }
    return i;
}
return log2(((num-1)^num)+1);
Explanation with example: 12 = 1100
num-1 = 11 = 1011
num ^ (num-1) = 12 ^ 11 = 7 (0111)
(num ^ (num-1)) + 1 = 8 (1000)
log2(8) = 3 (the answer).
x & ~(x-1) isolates the lowest bit that is one.
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int setbit;
    unsigned long d;
    unsigned long n1;
    unsigned long n = 0xFFF7;
    double nlog2 = log(2);
    while (n)
    {
        n1 = (unsigned long)n & (unsigned long)(n - 1);
        d = n - n1;
        n = n1;
        setbit = log(d) / nlog2;
        printf("Set bit: %d\n", setbit);
    }
    return 0;
}
And the result is as below.
Set bit: 0
Set bit: 1
Set bit: 2
Set bit: 4
Set bit: 5
Set bit: 6
Set bit: 7
Set bit: 8
Set bit: 9
Set bit: 10
Set bit: 11
Set bit: 12
Set bit: 13
Set bit: 14
Set bit: 15
Let x be your integer input.
Bitwise AND it with 1.
If x is even, x & 1 returns 0.
If x is odd, x & 1 returns 1.
if ((x & 1) == 0)
{
    std::cout << "The rightmost bit is 0, ie even\n";
}
else
{
    std::cout << "The rightmost bit is 1, ie odd\n";
}
Alright, so number systems are just working with logarithms and exponents, so I'll dive into an approach that really makes sense to me.
I would prefer you read this first, because I wrote there about how I interpret logarithms.
When you perform the x & -x operation, it gives you the value with only the rightmost set bit remaining (for example, it can be 0001000 or 0000010). In my interpretation of logarithms, this value is the final result after growing at a rate of 2. We are interested in the number of digits in this value, because that count minus 1 is precisely the bit position of the set bit (the bit count begins at 0 while the digit count begins at 1). The number of digits is the number of doublings plus 1. And since x & -x is always a power of two (except for 0), we don't have to worry about fractional results: taking the base-2 logarithm of x & -x directly gives the bit position. An earlier answer mentioned this formula; I just wanted to write down why it really works.
P.S: You could also count the number of digits and then subtract 1 to get the bit position.

Bitwise OR of constants

While reading some documentation here, I came across this:
unsigned unitFlags = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit;
I have no idea how this works. I read up on the bitwise operators in C, but I do not understand how you can fit three (or more!) constants inside one int and later being able to somehow extract them back from the int? Digging further down the documentation, I also found this, which is probably related:
typedef enum {
    kCFCalendarUnitEra            = (1 << 1),
    kCFCalendarUnitYear           = (1 << 2),
    kCFCalendarUnitMonth          = (1 << 3),
    kCFCalendarUnitDay            = (1 << 4),
    kCFCalendarUnitHour           = (1 << 5),
    kCFCalendarUnitMinute         = (1 << 6),
    kCFCalendarUnitSecond         = (1 << 7),
    kCFCalendarUnitWeek           = (1 << 8),
    kCFCalendarUnitWeekday        = (1 << 9),
    kCFCalendarUnitWeekdayOrdinal = (1 << 10),
} CFCalendarUnit;
How do the (1 << 3) statements / variables work? I'm sorry if this is trivial, but could someone please enlighten me by either explaining or maybe posting a link to a good explanation?
Basically, each constant is represented by just one bit, so if you have a 32 bit integer, you can fit 32 constants in it. Your constants have to be powers of two, so each takes only one set bit to represent.
For example:
#define CONSTANT_1 0x01 // 0001 in binary
#define CONSTANT_2 0x02 // 0010 in binary
#define CONSTANT_3 0x04 // 0100 in binary
then you can do
int options = CONSTANT_1 | CONSTANT_3; // will have 0101 in binary.
As you can see, each bit represents that particular constant, so you can bitwise AND in your code to test for the presence of each constant, like:
if (options & CONSTANT_3)
{
// CONSTANT_3 is set
}
I recommend you read about bitwise operations (they work like logical operators, but at the bit level); if you grok this stuff, it will make you a bit better of a programmer.
Cheers.
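A small self-contained sketch of the same pattern (the flag names are made up):

```c
/* Made-up flags, one bit each, in the spirit of the calendar-unit enum. */
enum {
    FLAG_A = 1 << 0,   /* 0001 */
    FLAG_B = 1 << 1,   /* 0010 */
    FLAG_C = 1 << 2    /* 0100 */
};

/* True when every bit of f is present in flags. */
static int has_flag(unsigned flags, unsigned f)
{
    return (flags & f) == f;
}
```

With unsigned options = FLAG_A | FLAG_C; (that is, 0101 in binary), has_flag(options, FLAG_C) is 1 and has_flag(options, FLAG_B) is 0.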
If you look at a number in binary, each digit is either on (1) or off (0).
You can use bitwise operators to set or interrogate the individual bits efficiently to see if they are set or not.
Take the 8 bit value 156. In binary this is 10011100.
The set bits correspond to positions 7, 4, 3, and 2 (values 128, 16, 8, and 4). You can compute these values with 1 << (position) rather easily. So, 1 << 7 = 128.
The number 1 is represented as 00000000000000000000000000000001
(1 << n) means shift the 1 in 1's representation n places to the left
So (1 << 3) would be 00000000000000000000000000001000
In one int you can have 32 options each of which can be turned on or off.
Option number n is on if the n'th bit is 1
1 << y is basically the same thing as 2 to the power of y
More generally, x << y is the same thing as x multiplied by 2 to the power of y.
In binary x << y means moving all the bits of x to the left by y places, adding zeroes in the place of the moved bits:
00010 << 2 = 01000
So:
1 << 1 = 2
1 << 2 = 4
1 << 3 = 8
...
<< is the left shift operator; it shifts the bits of the first operand left by the number of positions specified by the second operand (with zeros coming into the vacated positions from the right).
In your enum you end up with values that each have a different bit set to 1, so when you construct something like unitFlags, you can later find out which flags it contains by using the & operator, e.g. (unitFlags & NSMonthCalendarUnit) == NSMonthCalendarUnit will be true. (Note the parentheses: == binds tighter than &.)
