Use of ampersands in C if statement criteria

I'm new to C, and am trying to make sense of some code from NREL available here so that I may program a similar function in R. Here's the part of the code I cannot seem to figure out:
long S_solpos (struct posdata *pdat)
{
  if ( pdat->function & L_DOY )
    doy2dom( pdat );
}
In particular, what is the condition checking in:
if ( pdat->function & L_DOY )
I understand that pdat is a pointer to the posdata structure, and from the header file I know that "function" is a variable in the posdata structure which contains various integer codes:
struct posdata
{
int function;
and that L_DOY can be one such function:
/*Define the function codes*/
#define L_DOY 0x0001
#define L_GEOM 0x0002
#define L_ZENETR 0x0004
I would assume that the if statement is checking whether the function variable within pdat corresponds to the code for L_DOY. However, I am still very new to C, and have been unable to find any examples or explanations that utilize the ampersand in an if statement like this.
Thanks in advance for any help.

It means bitwise AND. The value being tested is a set of bit flags, one or more of which can be set. The code is checking whether the L_DOY flag specifically is set: bitwise AND keeps only the bits that appear in both operands, so 0b0101 & 0b0011 would produce 0b0001 (the only bit set in both operands). Since L_DOY is a single bit (the low bit), the expression is non-zero exactly when that bit is set in function; it doesn't care whether other bits are set or not.
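For illustration, here is a minimal, self-contained sketch (using the flag values from the question's header) of how the bitwise AND picks out a single flag:
#include <stdio.h>

/* Function codes from the question's header */
#define L_DOY    0x0001
#define L_GEOM   0x0002
#define L_ZENETR 0x0004

int main(void)
{
    int function = L_DOY | L_ZENETR;   /* two flags set at once */

    /* Non-zero result: the L_DOY bit is present, regardless of other bits */
    if (function & L_DOY)
        printf("L_DOY is set\n");      /* printed */

    if (function & L_GEOM)
        printf("L_GEOM is set\n");     /* not printed: 0x0005 & 0x0002 == 0 */

    return 0;
}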

Related

How does the condition (type & ~(R_OK|W_OK|X_OK|F_OK)) work in C?

What kind of condition is used here and how does it work in C?
(type & ~(R_OK|W_OK|X_OK|F_OK))
Found it here.
/* Test for access to FILE. */
int
__access (const char *file, int type)
{
  if (file == NULL || (type & ~(R_OK|W_OK|X_OK|F_OK)) != 0)
    {
      __set_errno (EINVAL);
      return -1;
    }
  __set_errno (ENOSYS);
  return -1;
}
stub_warning (access)
https://code.woboq.org/userspace/glibc/io/access.c.html
The expression uses bitwise arithmetic.
a | b | c …  creates a value that has all the bits of a, b, c … set.
In your piece of code, R_OK etc. are bit flags that each have a single, distinct bit set. Their disjunction (= or-ing them together) thus has all their bits set, and none other.
~x inverts the bits of a value. Thus, the result of the operation has all bits set except those of R_OK etc.
Finally, a & b sets only those bits which are set in both a and b. All other bits will be 0.
The expression, taken together, thus tests whether the variable type has any bits set which are not defined by R_OK etc. In other words: it tests whether type's value is one of R_OK etc., or a combination of these values. If that is not the case (i.e. if it has some other value), the test fails.
The function you've posted thus tests whether it has received valid arguments (i.e. that file is not NULL, and that type is a valid combination of the supported flags). Beyond this, the function does nothing except set an error status and return -1. The reason for this odd behaviour can be seen in the last line: the function you've posted is a stub; it does not actually implement a proper POSIX access function.
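As a rough, standalone sketch of the same validity test (the flag values below are the usual <unistd.h> ones, defined here only so the snippet compiles on its own):
#include <stdio.h>

#define F_OK 0
#define X_OK 1
#define W_OK 2
#define R_OK 4

/* Returns 1 if type contains only known access-mode bits, 0 otherwise */
static int is_valid_mode(int type)
{
    return (type & ~(R_OK | W_OK | X_OK | F_OK)) == 0;
}

int main(void)
{
    printf("%d\n", is_valid_mode(R_OK | W_OK)); /* 1: only known bits set */
    printf("%d\n", is_valid_mode(8));           /* 0: bit 3 is not a known flag */
    return 0;
}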

Why I cannot store into a variable the return result of an logic expression in C code?

I'm seeing strange behavior from a small piece of C code.
I want to store the result of a boolean expression in a variable, but it doesn't seem to work.
Here is the code:
#define rtCP_Constant_Value_fklq (uint8_t) 1 //Simulink const
#define rtCP_Constant_Value (uint8_t) 0 //Simulink const
uint16_t rtb_tobit;
volatile unsigned char rtb_y;
uint8_t asr_ena_=14;
rtb_tobit = (1 << rtCP_Constant_Value_fklq);
uint8_t temp = ((uint8_t)rtb_tobit) & asr_ena_;
rtb_y = (temp !=(rtCP_Constant_Value));
I have tested this snippet of code with two compilers, Renesas SH 9_4_1 and gcc-arm-none-eabi on a Nucleo eval board.
With both of them the variable rtb_y is always zero.
The debugger shows that the expression (temp != (rtCP_Constant_Value)) is true, but I cannot understand why the variable rtb_y is always equal to zero.
Could someone explain why? Is this strange behaviour caused by the C standard I'm using?
It is a really bad idea to use macros in the way you are using them. You generally need to be very careful to put parentheses in the right places. Also, you should not include the ; in the macro. For example, this is better:
#define rtCP_Constant_Value_fklq ((uint8_t) 1) //Simulink const
However, it is not really possible to offer any more help for your question than this, because your example will not compile, due to the inclusion of the ;. If you update the question with code that compiles, it may be possible to help further.
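For reference, here is a self-contained version of the snippet with the macros parenthesized as suggested above. Working through the arithmetic: 1 << 1 is 2, 2 & 14 is 2, and 2 != 0 is true, so this version prints 1; the zero the questioner observes presumably comes from something outside the posted snippet.
#include <stdint.h>
#include <stdio.h>

/* Parenthesized versions of the question's macros, with no trailing ; */
#define rtCP_Constant_Value_fklq ((uint8_t) 1) /* Simulink const */
#define rtCP_Constant_Value      ((uint8_t) 0) /* Simulink const */

int main(void)
{
    uint16_t rtb_tobit;
    volatile unsigned char rtb_y;
    uint8_t asr_ena_ = 14;

    rtb_tobit = (1 << rtCP_Constant_Value_fklq);     /* 1 << 1 = 2 */
    uint8_t temp = ((uint8_t)rtb_tobit) & asr_ena_;  /* 2 & 14 = 2 */
    rtb_y = (temp != (rtCP_Constant_Value));         /* 2 != 0, so 1 */

    printf("%d\n", rtb_y);  /* prints 1 */
    return 0;
}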

MACRO that converts enum to type in c

I've been trying this for over a week with no success.
I'm creating a logger interface between two processors and I need help with defining automated MACROS.
What do I mean?
Let's say I have a logger message defined as LOGGER_MSG_ID_2 that takes two parameters of types uint8 and uint16.
I have an enum defined as:
typedef enum{
    PARAM_NONE,
    PARAM_SIZE_UINT8,
    PARAM_SIZE_UINT16,
    PARAM_SIZE_UINT32
} paramSize_e;
So LOGGER_MSG_ID_2 will have a bitmap defined as:
#define LOGGER_MSG_ID_2_BITMAP (PARAM_SIZE_UINT16 << 2 | PARAM_SIZE_UINT8)
This bitmap is 1 byte in size, so the maximum number of parameters is 4.
Later on I have a list that defines all parameters type according to message ID:
#define ID_2_P0_TYPE uint8 // first parameter
#define ID_2_P1_TYPE uint16 // 2nd parameter
#define ID_2_P2_TYPE 0 // 3rd parameter
#define ID_2_P3_TYPE 0 // 4th parameter
As I said, I have a limitation of 4 parameters, so I would like to define all of them and let the macro decide whether to use them or not. I defined the unused ones as 0, but it can be whatever works.
I have other macros that use the bitmap to get all kinds of attributes, such as the number of parameters and the message size.
Now comes the tricky part. I want to build a macro that creates a bitmap from the types. The reason is that I don't want redundancy between the bitmap and the parameter definitions.
My problem is that everything I tried failed to compile.
Eventually I would like to have a MACRO such as:
#define GET_ENUM_FROM_TYPE(_type)
that gives me PARAM_SIZE_UINT8, PARAM_SIZE_UINT16 or PARAM_SIZE_UINT32 according to type.
Limitations: I'm using the ARM compiler on Windows (armcl.exe) and C99. I can't use C11 _Generic().
I tried the following:
#define GET_ENUM_FROM_TYPE(_type) \
(_type == uint8) ? PARAM_SIZE_UINT8 : \
((_type == uint16) ? PARAM_SIZE_UINT16 : \
((_type == uint32) ? PARAM_SIZE_UINT32 : PARAM_NONE))
Eventually I want to use it like:
#define LOGGER_MSG_ID_2_BITMAP \
(GET_ENUM_FROM_TYPE(ID_2_P3_TYPE) << 6 | \
GET_ENUM_FROM_TYPE(ID_2_P2_TYPE) << 4 | \
GET_ENUM_FROM_TYPE(ID_2_P1_TYPE) << 2 | \
GET_ENUM_FROM_TYPE(ID_2_P0_TYPE))
But when I use it, it doesn't compile.
I have a table of bitmaps:
uint8 paramsSizeBitmap [] = {
LOGGER_MSG_ID_1_BITMAP, /* LOGGER_MSG_ID_1 */
LOGGER_MSG_ID_2_BITMAP, /* LOGGER_MSG_ID_2 */
LOGGER_MSG_ID_3_BITMAP, /* LOGGER_MSG_ID_3 */
LOGGER_MSG_ID_4_BITMAP, /* LOGGER_MSG_ID_4 */
LOGGER_MSG_ID_5_BITMAP, /* LOGGER_MSG_ID_5 */
LOGGER_MSG_ID_6_BITMAP, /* LOGGER_MSG_ID_6 */
LOGGER_MSG_ID_7_BITMAP, /* LOGGER_MSG_ID_7 */
LOGGER_MSG_ID_8_BITMAP, /* LOGGER_MSG_ID_8 */
LOGGER_MSG_ID_9_BITMAP, /* LOGGER_MSG_ID_9 */
LOGGER_MSG_ID_10_BITMAP, /* LOGGER_MSG_ID_10 */
};
And I get this error:
line 39: error #18: expected a ")"
line 39: error #29: expected an expression
(line 39 is LOGGER_MSG_ID_2_BITMAP)
Where do I go wrong?
----- Edit -----
For now I have a workaround that I don't really like.
I don't use uint64, so I made use of sizeof(), and now my macro looks like this:
#define GET_ENUM_FROM_TYPE(_type) \
(sizeof(_type) == sizeof(uint8)) ? PARAM_SIZE_UINT8 : \
((sizeof(_type) == sizeof(uint16)) ? PARAM_SIZE_UINT16 : \
((sizeof(_type) == sizeof(uint32)) ? PARAM_SIZE_UINT32 : PARAM_NONE))
and my parameters list is:
#define NO_PARAM uint64
#define ID_2_P0_TYPE uint8
#define ID_2_P1_TYPE uint16
#define ID_2_P2_TYPE NO_PARAM
#define ID_2_P3_TYPE NO_PARAM
It works fine but... you know...
I believe the solution is to use concatenation operator ##, and helper defines.
// These must match your enum
#define HELPER_0 PARAM_NONE
#define HELPER_uint8 PARAM_SIZE_UINT8
#define HELPER_uint16 PARAM_SIZE_UINT16
#define HELPER_uint32 PARAM_SIZE_UINT32
// Secondary macro to avoid expansion to HELPER__type
#define CONCAT(a, b) a ## b
// Outer parenthesis not strictly necessary here
#define GET_ENUM_FROM_TYPE(_type) (CONCAT(HELPER_, _type))
With that GET_ENUM_FROM_TYPE(ID_2_P1_TYPE) will expand to (PARAM_SIZE_UINT16) after preprocessing.
Note that the suffix in the HELPER_*** defines has to exactly match the content of the ID_*_P*_TYPE macros. For example, HELPER_UINT8 won't work (wrong case). (Thanks @cxw)
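Putting the pieces together, here is a minimal compile-and-run sketch of this approach using the names from the question (the uint8/uint16/uint32 typedefs are assumptions standing in for the project's own type definitions):
#include <stdio.h>

typedef unsigned char  uint8;   /* stand-ins for the project's typedefs */
typedef unsigned short uint16;
typedef unsigned int   uint32;

typedef enum{
    PARAM_NONE,
    PARAM_SIZE_UINT8,
    PARAM_SIZE_UINT16,
    PARAM_SIZE_UINT32
} paramSize_e;

/* These must match the enum */
#define HELPER_0      PARAM_NONE
#define HELPER_uint8  PARAM_SIZE_UINT8
#define HELPER_uint16 PARAM_SIZE_UINT16
#define HELPER_uint32 PARAM_SIZE_UINT32

/* Secondary macro so the argument is expanded before pasting */
#define CONCAT(a, b) a ## b
#define GET_ENUM_FROM_TYPE(_type) (CONCAT(HELPER_, _type))

#define ID_2_P0_TYPE uint8  /* first parameter */
#define ID_2_P1_TYPE uint16 /* 2nd parameter */
#define ID_2_P2_TYPE 0      /* 3rd parameter */
#define ID_2_P3_TYPE 0      /* 4th parameter */

#define LOGGER_MSG_ID_2_BITMAP \
    (GET_ENUM_FROM_TYPE(ID_2_P3_TYPE) << 6 | \
     GET_ENUM_FROM_TYPE(ID_2_P2_TYPE) << 4 | \
     GET_ENUM_FROM_TYPE(ID_2_P1_TYPE) << 2 | \
     GET_ENUM_FROM_TYPE(ID_2_P0_TYPE))

int main(void)
{
    /* PARAM_SIZE_UINT16 (2) in bits 2-3 and PARAM_SIZE_UINT8 (1) in bits 0-1 */
    printf("0x%02X\n", (unsigned)LOGGER_MSG_ID_2_BITMAP); /* prints 0x09 */
    return 0;
}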
The basic problem is that == is not supported for types, only for values. Given
uint8 foo;
you can say foo==42 but not foo == uint8. This is because types are not first class in C.
One hack would be to use the C preprocessor stringification operator # (gcc docs). However, this moves all your computation to runtime and may not be suitable for an embedded environment. For example:
#define GET_ENUM_FROM_TYPE(_type) ( \
(strcmp(#_type, "uint8")==0) ? PARAM_SIZE_UINT8 : \
((strcmp(#_type, "uint16")==0) ? PARAM_SIZE_UINT16 : \
((strcmp(#_type, "uint32")==0) ? PARAM_SIZE_UINT32 : PARAM_NONE)) \
)
With that definition,
GET_ENUM_FROM_TYPE(uint8)
expands to
( (strcmp("uint8", "uint8")==0) ? PARAM_SIZE_UINT8 : ((strcmp("uint8", "uint16")==0) ? PARAM_SIZE_UINT16 : ((strcmp("uint8", "uint32")==0) ? PARAM_SIZE_UINT32 : PARAM_NONE)) )
which should do what you want, although at runtime.
Sorry, this doesn't directly answer the question. But you should reconsider this whole code.
First of all, _Generic would have solved this elegantly.
The dirty alternative for untangling groups of macros like these would be to use so-called X macros, which are perfect for cases such as "I don't want redundancy between the bitmap and parameters definitions". You can likely rewrite your code with X macros and get rid of a lot of superfluous defines and macros. How readable it will end up is another story.
However, whenever you find yourself this deep inside some macro meta-programming jungle, it is almost always an indication of poor program design. All of this smells like an artificial solution to a problem that could have been solved in much better ways - it is an "XY problem" (not to be confused with X macros :) ). The best solution most likely involves rewriting this entirely in simpler ways. Your case doesn't sound unique in any way; it seems that you just want to generate a bunch of bit masks.
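To make the X macro suggestion concrete, here is a rough sketch (all names invented for illustration, not taken from the question) in which a single list describes the parameters of one message and the bitmap is generated from it:
#include <stdio.h>

typedef enum{
    PARAM_NONE,
    PARAM_SIZE_UINT8,
    PARAM_SIZE_UINT16,
    PARAM_SIZE_UINT32
} paramSize_e;

/* One list per message: slot number -> parameter size code.
   This is the only place the message layout is written down. */
#define MSG_ID_2_PARAMS(X) \
    X(0, PARAM_SIZE_UINT8)  \
    X(1, PARAM_SIZE_UINT16) \
    X(2, PARAM_NONE)        \
    X(3, PARAM_NONE)

/* Each entry contributes its size code at bits 2*slot .. 2*slot+1 */
#define PARAM_TO_BITS(slot, size) | ((size) << (2 * (slot)))
#define LOGGER_MSG_ID_2_BITMAP (0 MSG_ID_2_PARAMS(PARAM_TO_BITS))

int main(void)
{
    printf("0x%02X\n", (unsigned)LOGGER_MSG_ID_2_BITMAP); /* prints 0x09 */
    return 0;
}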
Good programmers always try to make their code simpler, rather than making it more complex.
In addition, you may have more or less severe bugs all over the code, caused by the C language type system. It can be summarized as:
Bit shifts or other bitwise arithmetic should never be used on signed types. Doing so can lead to all manner of subtle bugs and poorly-defined behavior.
Enumeration constants are always of type int which is signed. You should avoid mixing them with bitwise arithmetic. Avoid enums entirely for programs like this.
Small integer types such as uint8_t or uint16_t get implicitly promoted to int when used in an expression, meaning that the C language will sabotage most of your attempts to get the correct type and replace everything with int anyway.
The resulting type of your macros will be int, which is not what you want.
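A tiny illustration of the promotion point: even when every operand is a small unsigned type, the result of the expression has type int:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 1;
    uint16_t b = 2;

    /* a and b are promoted to int before the operators are applied,
       so both expressions below have type int (typically 4 bytes) */
    printf("%zu\n", sizeof(a << 2));
    printf("%zu\n", sizeof(a | b));
    return 0;
}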

The difference in these 2 snippets of code

Here are 2 snippets of code, one is a macro and one is a function. They seem to do the same thing but after running them it seems that they exhibit different behavior and I don't know why. Could anyone help me please? Thanks!
#define ROL(a, offset) ((((Lane)a) << ((offset) % LANE_BIT_SIZE)) ^ (((Lane)a) >> (LANE_BIT_SIZE-((offset) % LANE_BIT_SIZE))))
Lane rotateLeft(Lane lane, int rotateCount)
{
return ((Lane)lane << (rotateCount % LANE_BIT_SIZE)) ^ ((Lane)lane >> (LANE_BIT_SIZE - (rotateCount % LANE_BIT_SIZE))) ;
}
Note: the Lane type is just an unsigned int, and LANE_BIT_SIZE is a number representing the size of Lane in bits.
Think of using a macro as substituting the body of the macro into the place you're using it.
As an example, suppose you were to define a macro: #define quadruple(a) ((a) * (a) * (a) * (a))
... then you were to use that macro like so:
int main(void) {
    int x = 1;
    printf("%d\n", quadruple(x++));
}
What would you expect to happen here? Substituting the macro into the code results in:
int main(void) {
    int x = 1;
    printf("%d\n", ((x++) * (x++) * (x++) * (x++)));
}
As it turns out, this code uses undefined behaviour because it modifies x multiple times in the same expression. That's no good! Do you suppose this could account for your difference in behaviour?
One is a macro and the other is a function, so the obvious difference is in the way they are called.
In the case of the function there is call overhead: control flow transfers to the called function and eventually returns, so there will be a very small delay in execution compared to the macro.
Other than that there should not be any other difference.
Try declaring the function as an inline function; then both should behave the same.
Lane may be promoted to a type with more bits, e.g. when it's an unsigned char or unsigned short, or when it is used in a larger assignment with mixed types. The << operation will then shift the higher bits into the additional bits of the larger type.
With the function call these bits are simply cut off, because it returns a Lane, while the macro gives you the full result of the promoted type, including the additional bits - besides the other problems of macros, like multiple evaluation of the arguments.
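A small sketch of that difference, assuming for the sake of illustration that Lane is a 16-bit unsigned short (the question says unsigned int, where this particular effect does not show up) and that LANE_BIT_SIZE is 16 on a platform with 32-bit int:
#include <stdio.h>

typedef unsigned short Lane;   /* assumed 16-bit for this demo */
#define LANE_BIT_SIZE 16

#define ROL(a, offset) ((((Lane)a) << ((offset) % LANE_BIT_SIZE)) ^ (((Lane)a) >> (LANE_BIT_SIZE-((offset) % LANE_BIT_SIZE))))

Lane rotateLeft(Lane lane, int rotateCount)
{
    return ((Lane)lane << (rotateCount % LANE_BIT_SIZE)) ^ ((Lane)lane >> (LANE_BIT_SIZE - (rotateCount % LANE_BIT_SIZE)));
}

int main(void)
{
    Lane x = 0x8001;

    /* The shift operands are promoted to int, so the macro's result
       keeps the bit shifted past position 15... */
    printf("macro:    0x%lX\n", (unsigned long)ROL(x, 1));        /* 0x10003 */

    /* ...while the function truncates the result back to Lane on return */
    printf("function: 0x%lX\n", (unsigned long)rotateLeft(x, 1)); /* 0x3 */

    return 0;
}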
Here are 2 snippets of code, one is a macro and one is a function.
They seem to do the same thing but after running them it seems that
they exhibit different behavior and I don't know why.
No, they are doing the same thing.
ROL(a, offset); // rotates a left by offset bits
rotateLeft(Lane lane, int rotateCount); // rotates lane left by rotateCount bits
The only difference is that ROL is implemented through a macro, and rotateLeft() is a function.
Differences between macros and functions
Macros are expanded in the preprocessing stage of compilation, whereas a function executes when it is called at runtime.
As a result macros avoid call overhead and can execute faster, but when used in multiple places the same code is substituted redundantly, so they can end up consuming more "code" memory than an implementation using functions.
Unlike a function, there is no type enforcement in a macro.

How to neatly avoid C casts losing truth

I'm quite happy that, in C, things like this are bad code:
(var_a == var_b) ? TRUE : FALSE
However, what's the best way of dealing with this:
/* Header stuff */
#define INTERESTING_FLAG 0x80000000
typedef short int BOOL;
void func(BOOL);
/* Code */
int main(int argc, char *argv[])
{
    unsigned long int flags = 0x00000000;
    ... /* Various bits of flag processing */
    func(flags & INTERESTING_FLAG); /* func never receives a non-zero value
                                     * as the top bits are cut off when the
                                     * argument is cast down to a short
                                     * int
                                     */
}
Is it acceptable (for whatever value of acceptable you're using) to have (flags & FLAG_CONST) ? TRUE : FALSE?
I would in either case call func with (flags & INTERESTING_FLAG) != 0 as the argument, to indicate that a boolean parameter is required and not the arithmetic result of flags & INTERESTING_FLAG.
I'd prefer (flags & CONST_FLAG) != 0. Better still, use the _Bool type if you have it (though it's often disguised as bool).
Set your compiler warnings as strict as possible, so that they flag any conversion that loses bits, and treat warnings as errors.
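A minimal sketch of the truncation problem and the != 0 fix, using the BOOL typedef and flag value from the question (the exact value func receives from the bad call is implementation-defined, but on common 32-bit targets it is 0):
#include <stdio.h>

#define INTERESTING_FLAG 0x80000000
typedef short int BOOL;

static void func(BOOL b)
{
    printf("func received %d\n", b);
}

int main(void)
{
    unsigned long int flags = INTERESTING_FLAG;

    /* Bit 31 is lost when the argument is converted to short: prints 0 */
    func(flags & INTERESTING_FLAG);

    /* The comparison yields 0 or 1, which survives the conversion: prints 1 */
    func((flags & INTERESTING_FLAG) != 0);

    return 0;
}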
Some people don't like it, but I use !!.
ie
!!(flags & CONST_FLAG)
(not as a to_bool macro as someone else suggested, just straight in the code).
If more people used it, it wouldn't be seen as unusual so start using it!!
This may not be a popular solution, but sometimes macros are useful.
#define to_bool(x) (!!(x))
Now we can safely have anything we want without fear of overflowing our type:
func(to_bool(flags & INTERESTING_FLAG));
Another alternative might be to define your boolean type to be an intmax_t (from stdint.h) so that it's impossible for a value to be truncated into falseness.
While I'm here, I want to say that you should be using a typedef for defining a new type, not a #define:
typedef short Bool; // or whatever type you end up choosing
Some might argue that you should use a const variable instead of a macro for numeric constants:
const unsigned long INTERESTING_FLAG = 0x80000000UL;
Overall there are better things you can spend your time on. But using macros for typedefs is a bit silly.
You could avoid this a couple different ways:
First off
void func(unsigned long int);
would take care of it...
Or
if(flags & INTERESTING_FLAG)
{
func(true);
}
else
{
func(false);
}
would also do it.
EDIT: (flags & INTERESTING_FLAG) != 0 is also good. Probably better.
This is partially off topic:
I'd also create a helper function that makes it obvious to the reader what the purpose of the check is, so you don't fill your code with explicit flag checking all over the place. Typedefing the flag type would make it easier to change the flag type and implementation later.
Modern compilers support the inline keyword, which can get rid of the performance overhead of a function call.
typedef unsigned long int flagtype;
...
inline bool hasInterestingFlag(flagtype flags) {
return ((flags & INTERESTING_FLAG) != 0);
}
Do you have anything against
flags & INTERESTING_FLAG ? TRUE : FALSE
?
This is why you should only use values in a "boolean" way when those values have explicitly boolean semantics. Your value does not satisfy that rule, since it has pronounced integer semantics (or, more precisely, bit-array semantics). In order to convert such a value to boolean, compare it to 0:
func((flags & INTERESTING_FLAG) != 0);
