What does "#define BLABLABLA (1 << 2)" mean? [duplicate] - c

I am reviewing some C code, and found a header file full of defines of the style:
#define BLABLABLABLA (1 << 2)
#define XXXXXXXXXXXX (1 << 3)
#define YYYYYYYYYYYY (1 << 4)
What do they mean? What do they do?

<< is the left-shift operator in C.
So BLABLABLABLA is defined as the value 1 shifted 2 bits to the left.
The resulting value is then:
...00000100
You would normally do this to mask things.
So, say you have one status byte where every bit is a flag.
If the 3rd bit being set means BLABLABLABLA, you would do:
int blablaFlag = statusByte & BLABLABLABLA;
If this is greater than 0, your flag was set.

These defines can be used when storing information (flags) in bits:
#define HAS_SOMETHING (1 << 2)
#define HAS_ANOTHER (1 << 3)
int flags = 0;
if (has_something())
    flags |= HAS_SOMETHING;
if (has_another())
    flags |= HAS_ANOTHER;
// ... later:
if (flags & HAS_SOMETHING)
    do_something();
Using named preprocessor macros to set and test these flags makes the code far more readable than this would be:
if (flags & 4) // 4 == 1 << 2
    do_something();

They are a way of defining constants, using the C preprocessor. So each time you use XXXXXXXXXXXX in your code it will be replaced with (1 << 3) by the preprocessor.

#define simply means that whenever BLABLABLABLA is seen, it's replaced with (1 << 2).
So if you write int x = BLABLABLABLA;, it's as if you wrote int x = (1 << 2);.
<< is the left-shift operator.

<< and >> are shift operators; they work on the binary representation of a number.
For example, 42 is written 42 in decimal and 101010 in binary.
When you use the operators:
42 << 1 : 101010 is "shifted" to the left, becoming 1010100, thus 84.
42 >> 1 : 101010 is "shifted" to the right, becoming 10101, thus 21.
This is used for flags for readability purposes: it's easier to read 1 << 1, 1 << 2, 1 << 3 than 2, 4, 8.
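A quick way to see this in action is to print a few shifts; a minimal sketch in standard C:

#include <stdio.h>

int main(void)
{
    unsigned x = 42;                  /* binary 101010 */
    printf("%u\n", x << 1);           /* 84 (1010100): a 0 appended on the right */
    printf("%u\n", x >> 1);           /* 21 (10101): the last bit dropped */
    printf("%u %u %u\n", 1u << 1, 1u << 2, 1u << 3);  /* 2 4 8 */
    return 0;
}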

Related

What is the purpose of bitshifting by 0

I am reviewing the open source AMD GPU drivers for Linux. I noticed something I haven't seen before, and I would like to know the purpose. On line 1441 of the sid.h file, there are a series of defines that are integers being bitshifted left by 0. Wouldn't this just result in the original integer being operated on?
Here is an excerpt and a link to the header:
#define VGT_EVENT_INITIATOR 0xA2A4
#define SAMPLE_STREAMOUTSTATS1 (1 << 0)
#define SAMPLE_STREAMOUTSTATS2 (2 << 0)
#define SAMPLE_STREAMOUTSTATS3 (3 << 0)
https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/amd/amdgpu/sid.h#L1441
Also, I am learning to access the performance counter registers of AMD GPUs in order to calculate the GPU load. Any tips on that would be appreciated as well.
Things like that could be done just for the sake of consistency (not necessarily applicable to your specific case). For example, I can describe a set of single-bit flags as
#define FLAG_1 0x01
#define FLAG_2 0x02
#define FLAG_3 0x04
#define FLAG_4 0x08
or as
#define FLAG_1 (1u << 0)
#define FLAG_2 (1u << 1)
#define FLAG_3 (1u << 2)
#define FLAG_4 (1u << 3)
In the first line of the latter approach I did not have to shift by 0, but it just looks more consistent that way and emphasizes the fact that FLAG_1 has the same nature as the rest of the flags. The 0 also acts as a placeholder for a different value, should I someday decide to change it.
You can actually see exactly that in the linked code with shift by 0 in the definitions of DYN_OR_EN and DYN_RR_EN macros.
The approach can be extended to multi-bit fields within a word, like in the following (contrived) example
// Bits 0-3 - lower counter, bits 4-7 - upper counter
#define LOWER_0 (0u << 0)
#define LOWER_1 (1u << 0)
#define LOWER_2 (2u << 0)
#define LOWER_3 (3u << 0)
#define UPPER_0 (0u << 4)
#define UPPER_1 (1u << 4)
#define UPPER_2 (2u << 4)
#define UPPER_3 (3u << 4)
unsigned packed_counters = LOWER_2 + UPPER_3; /* or `LOWER_2 | UPPER_3` */
Again, shifts by 0 bits are present purely for visual consistency, as are shifts of 0 values.
You can actually see exactly that in the linked code with shift by 0 in the definitions of LC_XMIT_N_FTS and LC_XMIT_N_FTS_MASK macros.
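For completeness, reading the packed fields back uses the matching shift and mask; a minimal sketch reusing two of the contrived definitions above:

#include <stdio.h>

#define LOWER_2 (2u << 0)
#define UPPER_3 (3u << 4)

int main(void)
{
    unsigned packed_counters = LOWER_2 | UPPER_3;     /* 0x32 */
    unsigned lower = (packed_counters >> 0) & 0x0Fu;  /* bits 0-3 -> 2 */
    unsigned upper = (packed_counters >> 4) & 0x0Fu;  /* bits 4-7 -> 3 */
    printf("lower=%u upper=%u\n", lower, upper);
    return 0;
}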

Most efficient way to check if flags are set in an integer

I have 11 flags defined as:
#define F1 1
#define F2 2
#define F3 3
#define F4 4
...
#define F11 11
In some function I then create an integer which can include any of those flags, for example:
int a = (1 << F1) | (1 << F5) | (1 << F11) | (1 << F8);
This then gets passed into a function which needs to decode which flags are set in order to set specific bits in specific registers. So my question is, what is the most efficient way to check which flags are set. Right now I have 11 if's like:
void foo(int a)
{
    if ((a & (1 << F1)) >> F1) {
        // Set bit 5 in register A.
    }
    if ((a & (1 << F2)) >> F2) {
        // Set bit 3 in register V.
    }
    if ((a & (1 << F3)) >> F3) {
        // Set bit 2 in register H.
    }
    if ((a & (1 << F4)) >> F4) {
        // Set bit 1 in register V.
    }
    // And so on, for all 11 flags.
}
P.S.
This is for an 8-bit microcontroller.
Just use:
typedef enum {
    FLAG1 = 1, // or 0x01
    FLAG2 = 2,
    FLAG3 = 4,
    ...
    FLAG8 = 0x80
} flags;
Then in main just check
if (value & FLAGN)
In C there is no difference between 1 and any other non-zero number in an if statement; it just checks whether the value is zero or non-zero.
And setting is the same:
value = FLAG1 | FLAG2 | FLAG8;
You can also use defines, of course.
And for clarification: the maximum number of flags an N-bit type can hold is N, so for more than 8 flags you need a larger type (if the compiler supports bigger data types), like uint16_t.
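Putting those fragments together, a minimal self-contained sketch of the enum approach (the values set here are just illustrative):

#include <stdio.h>

typedef enum {
    FLAG1 = 0x01,
    FLAG2 = 0x02,
    FLAG3 = 0x04,
    FLAG8 = 0x80
} flags;

int main(void)
{
    unsigned value = FLAG1 | FLAG8;   /* set two flags */
    if (value & FLAG8)
        printf("FLAG8 is set\n");
    if (!(value & FLAG2))
        printf("FLAG2 is not set\n");
    return 0;
}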
C's if statement and logical operators do not distinguish between 1 and other non-zero values (although logical operators produce 1 for true). Therefore, there is no difference between (a & (1 << F3)) >> F3 and a & (1 << F3) in the context of a logical expression: if one evaluates to true, so does the other, and vice versa. Hence, this should work:
if (a & (1 << F1)) {
    // Set bit 5 in register A.
}
Note: since you use your Fs as the second operand of <<, they are bit positions rather than masks; with F1 == 1 through F11 == 11 the flags occupy bits 1 through 11, leaving bit 0 unused (positions 0 through 10 would be the more natural numbering).
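If the per-flag actions are regular enough, the chain of ifs can also be replaced by a table-driven loop. This is a hypothetical sketch, not from the answers above: REG_A, REG_V, and REG_H stand in for the real memory-mapped registers, and the flag/bit pairs mirror the question's comments:

#include <stdint.h>

/* Placeholders for the real memory-mapped registers. */
extern volatile uint8_t REG_A, REG_V, REG_H;

static const struct {
    uint8_t flag_bit;        /* bit position tested in 'a'  */
    volatile uint8_t *reg;   /* register to modify          */
    uint8_t reg_bit;         /* bit to set in that register */
} map[] = {
    { 1, &REG_A, 5 },        /* F1 -> bit 5 of register A */
    { 2, &REG_V, 3 },        /* F2 -> bit 3 of register V */
    { 3, &REG_H, 2 },        /* F3 -> bit 2 of register H */
};

void foo(int a)
{
    uint8_t i;
    for (i = 0; i < sizeof map / sizeof map[0]; ++i)
        if (a & (1 << map[i].flag_bit))
            *map[i].reg |= (uint8_t)(1 << map[i].reg_bit);
}

Whether this beats eleven ifs on an 8-bit part depends on the compiler; the table costs some flash but keeps the flag-to-register mapping in one place.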

C header to delphi

I have a piece of C code. I need help to translate it to Delphi code.
1)
/*
* Color is packed into 16-bit word as follows:
*
* 15     8  7      0
* XXggbbbb  XXrrrrgg
*
* Note that green bits 12 and 13 are the lower bits of green component
* and bits 0 and 1 are the higher ones.
*
*/
#define CLR_RED(spec) (((spec) >> 2) & 0x0F)
#define CLR_GREEN(spec) ((((spec) & 0x03) << 2) | ((spec & 0x3000) >> 12))
#define CLR_BLUE(spec) (((spec) >> 8) & 0x0F)
2)
#define CDG_GET_SCROLL_COMMAND(scroll) (((scroll) & 0x30) >> 4)
#define CDG_GET_SCROLL_HOFFSET(scroll) ((scroll) & 0x07)
#define CDG_GET_SCROLL_VOFFSET(scroll) ((scroll) & 0x0F)
These are parameterized macros. Since Delphi doesn't support these, you'll need to use functions instead, which is cleaner anyway.
>> is a right-shift, shr in Delphi
<< is a left-shift, shl in Delphi
& is "bitwise and", and in Delphi
Delphi uses bitwise operators when working on integers, and logical operators when working on booleans, so there is only one operator and to replace both && and &.
| is "bitwise or", or in Delphi
0x is the prefix for hex literals, $ in Delphi
So #define CLR_GREEN(spec) ((((spec) & 0x03) << 2) | ((spec & 0x3000) >> 12)) becomes something like:
function CLR_GREEN(spec: word): byte;
begin
    result := byte(((spec and $03) shl 2) or ((spec and $3000) shr 12));
end;
(I don't have Delphi at hand, so there might be minor bugs.)
Convert the other macros in a similar manner.

How to break a 32-bit value and assign it to three different data types?

I have a
#define PROT_EN_DATA 0x140
// (320 in decimal)
It's loaded into a 64-bit register (e.g. setup_data[39:8] = PROT_EN_DATA).
Now I want to put this value (0x140) into
uint8_t bRequest
uint16_t wValue
uint16_t wIndex
How can I load the value so that I don't have to do it manually for other values again?
I think we can do it with shift operators, but I don't know how.
EDIT: Yes, it's related to USB: bRequest(8:15), wValue(16:31), wIndex(32:47), but setup_data is a 64-bit value. I want to know how I can load the proper values into these fields.
For example, say next time I am using #define PROT_EN2_DATA 0x1D8,
and say setup_data[39:8] = PROT_EN2_DATA.
General read form:
aField = (aRegister >> kBitFieldLSBIndex) & ((1 << kBitFieldWidth) - 1)
General write form:
mask = ((1 << kBitFieldWidth) - 1) << kBitFieldLSBIndex;
aRegister = (aRegister & ~mask) | ((aField << kBitFieldLSBIndex) & mask);
where:
aRegister is the value you read from the bit-field-packed register,
kBitFieldLSBIndex is the index of the least significant bit of the bit field, and
kBitFieldWidth is the width of the bit field, and
aField is the value of the bit field
These are generalized, and some operations (such as bit-masking) may be unnecessary in your case. Replace the 1 with 1ULL if the register is larger than 32 bits.
EDIT: In your example case (setup_data[39:8] = PROT_EN_DATA):
Read:
aField = (setup_data >> 8) & ((1ULL << 32) - 1)
Write:
#define PROT_EN_MASK (((1ULL << 32) - 1) << 8) // 0x000000FFFFFFFF00
setup_data = (setup_data & ~PROT_EN_MASK) | ((PROT_EN_DATA << 8) & PROT_EN_MASK);
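Applied to the USB-style layout from the edit (bRequest in bits 8:15, wValue in bits 16:31, wIndex in bits 32:47), the unpacking might look like this minimal sketch:

#include <stdint.h>

void unpack_setup(uint64_t setup_data,
                  uint8_t *bRequest, uint16_t *wValue, uint16_t *wIndex)
{
    *bRequest = (uint8_t)((setup_data >> 8)   & 0xFF);    /* bits 8:15  */
    *wValue   = (uint16_t)((setup_data >> 16) & 0xFFFF);  /* bits 16:31 */
    *wIndex   = (uint16_t)((setup_data >> 32) & 0xFFFF);  /* bits 32:47 */
}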

A C preprocessor macro to pack bitfields into a byte?

I'm getting into micro-controller hacking and while I'm very comfortable with bitwise operators and talking right to the hardware, I'm finding the resulting code very verbose and boilerplate. The higher level programmer in me wants to find an effective but efficient way to clean it up.
For instance, there's a lot of setting flags in registers:
/* Provided by the compiler */
#define SPIE 7
#define SPE 6
#define DORD 5
#define MSTR 5
#define CPOL 4
#define CPHA 3
void init_spi() {
    SPCR = (1 << SPE) | (1 << SPIE) | (1 << MSTR) | (1 << SPI2X);
}
Thankfully there are macros that hide actual port IO operations (the left hand side), so it looks like a simple assignment. But all that syntax to me is messy.
Requirements are:
it only has to handle up to 8 bits,
the bit positions must be able to be passed in any order, and
should only require set bits to be passed.
The syntax I'd like is:
SPCR = bits(SPE, SPIE, MSTR, SPI2X);
The best I've come up with so far is a combo macro/function:
#include <stdarg.h>
#include <stdint.h>

#define bits(...) __pack_bits(__VA_ARGS__, -1)
uint8_t __pack_bits(uint8_t bit, ...) {
    uint8_t result = 0;
    va_list args;
    va_start(args, bit);
    result |= (uint8_t) (1 << bit);
    for (;;) {
        bit = (uint8_t) va_arg(args, int);
        if (bit > 7)  /* the -1 sentinel wraps to 255 and ends the loop */
            break;
        result |= (uint8_t) (1 << bit);
    }
    va_end(args);
    return result;
}
This compiles to 32 bytes on my particular architecture and takes 61-345 cycles to execute (depending on how many bits were passed).
Ideally this should be done in the preprocessor, since the result is a constant and the output machine instructions should be just an assignment of an 8-bit value to a register.
Can this be done any better?
Yea, redefine the macros ABC as 1 << ABC, and you simplify that. ORing together bit masks is a very common idiom that anyone will recognize. Getting the shift positions out of your face will help a lot.
Your code goes from
#define SPIE 7
#define SPE 6
#define DORD 5
#define MSTR 5
#define CPOL 4
#define CPHA 3
void init_spi() {
    SPCR = (1 << SPE) | (1 << SPIE) | (1 << MSTR) | (1 << SPI2X);
}
to this
#define BIT(n) (1 << (n))
#define SPIE BIT(7)
#define SPE BIT(6)
#define DORD BIT(5)
#define MSTR BIT(5)
#define CPOL BIT(4)
#define CPHA BIT(3)
void init_spi() {
    SPCR = SPE | SPIE | MSTR | SPI2X;
}
This suggestion does assume that the bit-field definitions are used many times more than there are definitions of them.
I feel like there might be some way to use variadic macros for this, but I can't come up with anything that could easily be used as an expression. Consider, however, creating an array literal inside a function generating your constant:
#define BITS(name, ...) \
char name() { \
    char bits[] = { __VA_ARGS__ }; \
    char byte = 0, i; \
    for (i = 0; i < sizeof(bits); ++i) byte |= (1 << bits[i]); \
    return byte; }
/* Define the bit-mask function for this purpose */
BITS(SPCR_BITS, SPE, SPIE, MSTR, SPI2X)
void init_spi() {
    SPCR = SPCR_BITS();
}
If your compiler is good, it will see that the entire function is constant at compile-time, and inline the resultant value.
Why not create your own definitions in addition to the pre-defined ones...
#define BIT_TO_MASK(n) (1 << (n))
#define SPIE_MASK BIT_TO_MASK(SPIE)
#define SPE_MASK BIT_TO_MASK(SPE)
#define DORD_MASK BIT_TO_MASK(DORD)
#define MSTR_MASK BIT_TO_MASK(MSTR)
#define CPOL_MASK BIT_TO_MASK(CPOL)
#define CPHA_MASK BIT_TO_MASK(CPHA)
void init_spi() {
    SPCR = SPE_MASK | SPIE_MASK | MSTR_MASK | SPI2X_MASK;
}
