Using a bitmask and if statement - c

I am trying to allow multiple cases to run in a switch statement. I have a bitmask as follows:
#define SHOOT_ROCKET 2 << 16
#define MOVE_FORWARD 3 << 16
Later, I do
switch (int game_action)
and I have
case SHOOT_ROCKET:
result = fire_weapon(rl);
I don't want to 'break', because I want the possibility of multiple actions. But I am returning a value called 'result'. I store this in a variable and return it at the end. I can tell other case: statements are running even when they shouldn't, because result keeps getting changed, and it doesn't if I add break;
What is the best way to deal with this?
Update: I've been told to do if instead.
I changed my << bitmasks so they start at 1 now.
I am experiencing a weird bug
if (game_action->action & SHOOT_ROCKET)
{
game_action->data=5;
}
if (game_action->action & MOVE_FORWARD)
{
game_action->data=64;
}
I am not concerned about game_action being overwritten when I intend for multiple if's to evaluate to true
However, it seems MOVE_FORWARD is happening even when I only try to shoot a rocket!
game_action is a void pointer normally, so this is how it's set up in the function:
game_action = (game_action) *game_action_ptr;
I have verified the bitmask is correct with
printf("%d", game_action->action >> 16) which prints 2.
So why is '3' (the move forward) happening when I am only trying to shoot a rocket?

Please do update your question.
So why is '3' (the move forward) happening when I am only trying to shoot a rocket?
The first thing you want to look at is
#define SHOOT_ROCKET 2 << 16
#define MOVE_FORWARD 3 << 16
and think about what that evaluates to. It evaluates to the following binary numbers (Python is handy for this kind of thing; 0b is just a prefix meaning the digits that follow are binary):
>>> bin(2 << 16)
'0b100000000000000000'
>>> bin(3 << 16)
'0b110000000000000000'
So you see that you use one bit twice in your #defines (Retired Ninja already pointed this out). This means that if game_action->action is set to anything where the bit 2 << 16 is 1, both of your ifs will evaluate to true, because both #defines have that bit set to 1.
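A quick way to see the overlap with the original defines (a minimal host-side check, nothing from the actual game code):
#include <stdio.h>

#define SHOOT_ROCKET (2 << 16)   /* bit 17 */
#define MOVE_FORWARD (3 << 16)   /* bits 16 and 17 */

int main(void)
{
    /* the two masks share bit 17, so the AND is nonzero */
    printf("%#x\n", (unsigned)(SHOOT_ROCKET & MOVE_FORWARD));  /* prints 0x20000 */
    return 0;
}
So any action word with the SHOOT_ROCKET bit set also passes the MOVE_FORWARD test.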
To make them mutually exclusive, should I do 2, 4, 8 instead of 1, 2, 3, 4?
If you want to easily keep track of which bits are used, you can either use powers of two (1,2,4,8,16, etc), e.g. #define MOVE_FORWARD 4 (I'm ignoring the << 16 you have, you can add that if you want), or you can shift a 1 by a variable number of bits, both result in the same binary numbers:
#define MOVE_LEFT (1 << 0)
#define MOVE_RIGHT (1 << 1)
#define MOVE_UP (1 << 2)
#define MOVE_DOWN (1 << 3)
There are legitimate cases where bitmasks need to have more than one bit set, for example for checking if any one of several bits are set:
#define MOVE_LEFT ... (as above)
#define SHOOT_ROCKET (1 << 4)
#define SHOOT_GUN (1 << 5)
//...
#define ANY_MOVEMENT 0xF
#define ANY_WEAPON_USE (0xF << 4)
and then check:
if (action & ANY_MOVEMENT) { ... }
if (action & ANY_WEAPON_USE) { ... }
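Putting the pieces together, here is a minimal self-contained sketch (the names follow the defines above; a plain struct stands in for the asker's actual game_action setup):
#include <stdio.h>

#define MOVE_LEFT    (1 << 0)
#define MOVE_RIGHT   (1 << 1)
#define MOVE_UP      (1 << 2)
#define MOVE_DOWN    (1 << 3)
#define SHOOT_ROCKET (1 << 4)

struct game_action {
    unsigned action;  /* bitmask of the flags above */
    int data;
};

int main(void)
{
    struct game_action ga = { MOVE_UP | SHOOT_ROCKET, 0 };

    /* independent ifs: each tests exactly one bit, so both matching actions run */
    if (ga.action & MOVE_UP)      puts("moving up");
    if (ga.action & MOVE_DOWN)    puts("moving down");    /* not printed */
    if (ga.action & SHOOT_ROCKET) puts("firing rocket");
    return 0;
}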

Place parens '(' ')' around the value part of the #defines, e.g. (2 << 16), so no errors are introduced by the text replacement.
I.e. this:
'if( (game_action->action & MOVE_FORWARD) == MOVE_FORWARD)'
becomes
'if( (game_action->action & 2 << 16) == 2 << 16)'
Note the posted code is missing a left paren, which I added.
In this particular expression the expansion happens to parse as intended, because '<<' binds more tightly than both '&' and '=='. But an unparenthesized macro value breaks as soon as it lands next to a higher-precedence operator: SHOOT_ROCKET - 1, for example, expands to 2 << 16 - 1, which the compiler reads as 2 << 15, because '-' binds more tightly than '<<'. Parenthesizing the value in the #define avoids that whole class of surprise.
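A tiny sketch of that failure mode (the _BAD/_GOOD names are just for illustration):
#include <stdio.h>

#define SHOOT_ROCKET_BAD  2 << 16     /* unparenthesized value */
#define SHOOT_ROCKET_GOOD (2 << 16)   /* parenthesized value   */

int main(void)
{
    printf("%d\n", SHOOT_ROCKET_BAD - 1);   /* expands to 2 << 16 - 1, i.e. 2 << 15 = 65536 */
    printf("%d\n", SHOOT_ROCKET_GOOD - 1);  /* expands to (2 << 16) - 1 = 131071            */
    return 0;
}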

Related

Is there a difference between how integers are interpreted between the Cortex M0 and M3 platforms?

I am moving my build system over to use the CMSIS files for the STM32F1 (from the STM32F0), which is a Cortex-M3 chip, and I am running into the following error when I try to compile core_cm3.h.
This is the function which is part of the core CMSIS:
static __INLINE uint32_t NVIC_EncodePriority (uint32_t PriorityGroup, uint32_t PreemptPriority, uint32_t SubPriority)
{
    uint32_t PriorityGroupTmp = (PriorityGroup & 0x07); /* only values 0..7 are used */
    uint32_t PreemptPriorityBits;
    uint32_t SubPriorityBits;

    PreemptPriorityBits = ((7 - PriorityGroupTmp) > __NVIC_PRIO_BITS) ? __NVIC_PRIO_BITS : 7 - PriorityGroupTmp;
    SubPriorityBits = ((PriorityGroupTmp + __NVIC_PRIO_BITS) < 7) ? 0 : PriorityGroupTmp - 7 + __NVIC_PRIO_BITS;

    return (
        ((PreemptPriority & ((1 << (PreemptPriorityBits)) - 1)) << SubPriorityBits) |
        ((SubPriority & ((1 << (SubPriorityBits)) - 1)))
    );
}
Error: conversion to 'long unsigned int' from 'int' may change the sign of the result [-Werror=sign-conversion]
((PreemptPriority & ((1 << (PreemptPriorityBits)) - 1)) << SubPriorityBits)
^
I am surprised there are issues compiling, since this is a core file with no changes made, and up to date. In addition, from what I've read online the same compiler (arm-none-eabi) can be used for the whole 'M' family. Is there some quirk about how integers are interpreted here I'm missing?
The size and representation of integers and long integers is the same between ARMv6M (including CortexM0) and ARMv7M (including CortexM3).
The warning message you are getting is because the compiler thinks (1 << PreemptPriorityBits) could be negative (specifically if PreemptPriorityBits equals 31) but it then gets converted to unsigned type to be masked with PreemptPriority.
In reality PreemptPriorityBits will always be less than or equal to 8, so nothing can ever be negative here. This means that in the short term you can just ignore the warning.
If this warning were in your own code I would advise you to just change 1 << to 1u << which would produce the same binary output but tell the compiler there is nothing to worry about.
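For illustration, the return statement with unsigned constants would look something like this (a sketch of the kind of change, not a verbatim copy of any particular CMSIS release):
return (
    ((PreemptPriority & ((1u << (PreemptPriorityBits)) - 1u)) << SubPriorityBits) |
    ((SubPriority & ((1u << (SubPriorityBits)) - 1u)))
);
With 1u the shift and subtraction happen in unsigned arithmetic, so there is no signed intermediate for -Wsign-conversion to flag, and the generated code is unchanged.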
In fact I notice that this exact change has already been made in the oldest version of CMSIS I can easily find a copy of (5.0.0 from September 2016). It looks like you are running some very old code against a new version of the compiler that it was not written for.
I strongly suggest that you upgrade to a later version of CMSIS headers. Although this change is non-functional (an identical binary will be output) there have been a great many other changes made in the last 6 years which do matter and some which fix significant bugs.

Why is this line obfuscated?

In this snippet,
if(((RCC_OscInitStruct->OscillatorType) & RCC_OSCILLATORTYPE_HSI) == RCC_OSCILLATORTYPE_HSI)
{
/* statements */
}
the member OscillatorType could have any of the values, or their combination, defined below.
#define RCC_OSCILLATORTYPE_NONE ((uint32_t)0x00000000)
#define RCC_OSCILLATORTYPE_HSE ((uint32_t)0x00000001)
#define RCC_OSCILLATORTYPE_HSI ((uint32_t)0x00000002)
#define RCC_OSCILLATORTYPE_LSE ((uint32_t)0x00000004)
#define RCC_OSCILLATORTYPE_LSI ((uint32_t)0x00000008)
Why is the if written this way? Why not simply like this?
if(RCC_OscInitStruct->OscillatorType == RCC_OSCILLATORTYPE_HSI)
RCC_OscInitStruct->OscillatorType is a collection of bits packed in an integer value, each bit representing one of the values (RCC_OSCILLATORTYPE_HSE, ...). That's why they come in powers of 2. The code you showed just checks if the bit associated with RCC_OSCILLATORTYPE_HSI is set. It's very probable that bits of other values are also set.
For example, if the binary representation of OscillatorType is 0...011, the first and second bits are set, meaning that both RCC_OSCILLATORTYPE_HSE and RCC_OSCILLATORTYPE_HSI are selected.
It's a very common C idiom and not obfuscated in any way. Those are two very different tests.
if ((RCC_OscInitStruct->OscillatorType & RCC_OSCILLATORTYPE_HSI) == RCC_OSCILLATORTYPE_HSI)
says "if the RCC_OSCILLATOR_HSI bit is 1". It doesn't care whether any of the other bits are 0 or 1, whereas
if (RCC_OscInitStruct->OscillatorType == RCC_OSCILLATORTYPE_HSI)
says "if the RCC_OSCILLATOR_HSI bit is 1 AND all the other bits are 0".
Because it may have any of these values at the same time. The & (bitwise AND) operator serves the purpose of extracting only the RCC_OSCILLATORTYPE_HSI bit.
As an example, your input may look like this:
010011
While RCC_OSCILLATORTYPE_HSI looks like this:
000010
The AND of these two values will return 000010, which exactly equals RCC_OSCILLATORTYPE_HSI.
However, if your input looks like this:
110101
the bitwise AND between this and RCC_OSCILLATORTYPE_HSI will return 0, and the condition will be false.
The if condition is interested in only one bit of RCC_OscInitStruct->OscillatorType: the one corresponding to RCC_OSCILLATORTYPE_HSI (the second-lowest bit). So RCC_OSCILLATORTYPE_HSI is used as a mask and then compared to itself.
If you look at the constants, the first one is all zeros, whereas each of the others has a single bit set, at successive positions.
Doing & with any of these constants therefore tells you whether its corresponding bit is set in the parameter.
If you want to set all of the possible values, you would be doing:
RCC_OscInitStruct->OscillatorType = RCC_OSCILLATORTYPE_HSE | RCC_OSCILLATORTYPE_HSI | RCC_OSCILLATORTYPE_LSE | RCC_OSCILLATORTYPE_LSI;
Why comparison using ==?
That's not required and makes the code cluttered, but I think the programmer wanted uniformity when also testing for RCC_OSCILLATORTYPE_NONE.
You can't test for RCC_OscInitStruct->OscillatorType & RCC_OSCILLATORTYPE_NONE, because that always evaluates to zero; you would be forced to write that one check differently (as a plain equality test), so spelling every test with == keeps them all looking the same.
An example:
#include <stdio.h>
#include <stdint.h>
#define RCC_OSCILLATORTYPE_NONE ((uint32_t)0x00000000)
#define RCC_OSCILLATORTYPE_HSE ((uint32_t)0x00000001)
#define RCC_OSCILLATORTYPE_HSI ((uint32_t)0x00000002)
#define RCC_OSCILLATORTYPE_LSE ((uint32_t)0x00000004)
#define RCC_OSCILLATORTYPE_LSI ((uint32_t)0x00000008)
int main(void)
{
    /* set HSI and HSE */
    uint32_t flags = RCC_OSCILLATORTYPE_HSE | RCC_OSCILLATORTYPE_HSI;

    if (flags == RCC_OSCILLATORTYPE_HSI) {
        puts("flags = HSI");
    }
    if ((flags & RCC_OSCILLATORTYPE_HSI) == RCC_OSCILLATORTYPE_HSI) {
        puts("HSI is set in flags");
    }
    return 0;
}
Output:
HSI is set in flags
To begin with, == is not equivalent to &: == compares the whole 32-bit value, including any bits you aren't interested in, while & looks only at the relevant bits.
The & is simply bitwise AND. In my opinion, you need to understand binary numbers before even enrolling in your first programming course, but maybe that's just me.
Anyway, given that you actually understand what bitwise AND does, it would have made more sense if you had code like
#define RCC_OSCILLATORTYPE ((uint32_t)0x0000000F) // mask
#define RCC_OSCILLATORTYPE_NONE ((uint32_t)0x00000000)
#define RCC_OSCILLATORTYPE_HSE ((uint32_t)0x00000001)
#define RCC_OSCILLATORTYPE_HSI ((uint32_t)0x00000002)
#define RCC_OSCILLATORTYPE_LSE ((uint32_t)0x00000004)
#define RCC_OSCILLATORTYPE_LSI ((uint32_t)0x00000008)
...
(RCC_OscInitStruct->OscillatorType & RCC_OSCILLATORTYPE) == RCC_OSCILLATORTYPE_HSI
Maybe this is what the programmer of that code intended, but they didn't quite manage to bring the code all the way there.
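A small sketch of what that masked compare buys you (RCC_OSCILLATORTYPE here is the hypothetical mask suggested above, not something from the ST headers):
#include <stdio.h>
#include <stdint.h>

#define RCC_OSCILLATORTYPE     ((uint32_t)0x0000000F) /* hypothetical mask of all oscillator bits */
#define RCC_OSCILLATORTYPE_HSE ((uint32_t)0x00000001)
#define RCC_OSCILLATORTYPE_HSI ((uint32_t)0x00000002)

int main(void)
{
    uint32_t type = RCC_OSCILLATORTYPE_HSE | RCC_OSCILLATORTYPE_HSI;

    /* true only if HSI is the one and only oscillator selected */
    if ((type & RCC_OSCILLATORTYPE) == RCC_OSCILLATORTYPE_HSI) {
        puts("exactly HSI");
    }
    /* true whenever the HSI bit is set, no matter what else is */
    if ((type & RCC_OSCILLATORTYPE_HSI) == RCC_OSCILLATORTYPE_HSI) {
        puts("HSI is among the selected oscillators");
    }
    return 0;
}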

How to Interpret typedef enum property on MCOIMAPMessage

My question is mostly about how interpret a typedef enum, but here is the background:
I am using MailCore2, and I am trying to figure out how to read the flags off of an individual email object that I am fetching.
Each MCOIMAPMessage *email that I fetch has a property on it called 'flags.' Flags is of type MCOMessageFlag. When I look up the definition of MCOMessageFlag, I find that it is a typedef enum:
typedef enum {
    MCOMessageFlagNone          = 0,
    /** Seen/Read flag.*/
    MCOMessageFlagSeen          = 1 << 0,
    /** Replied/Answered flag.*/
    MCOMessageFlagAnswered      = 1 << 1,
    /** Flagged/Starred flag.*/
    MCOMessageFlagFlagged       = 1 << 2,
    /** Deleted flag.*/
    MCOMessageFlagDeleted       = 1 << 3,
    /** Draft flag.*/
    MCOMessageFlagDraft         = 1 << 4,
    /** $MDNSent flag.*/
    MCOMessageFlagMDNSent       = 1 << 5,
    /** $Forwarded flag.*/
    MCOMessageFlagForwarded     = 1 << 6,
    /** $SubmitPending flag.*/
    MCOMessageFlagSubmitPending = 1 << 7,
    /** $Submitted flag.*/
    MCOMessageFlagSubmitted     = 1 << 8,
} MCOMessageFlag;
Since I do not know how typedef enums really work, particularly this one with the '= 1 << 8' style values, I am a little lost about how to read the emails' flags property.
For example, I have an email message that has both an MCOMessageFlagSeen and an MCOMessageFlagFlagged on the server. I'd like to find out from the email.flags property whether or not the fetched email has one, both or neither of these flags (if possible). However, in the debugger when I print 'email.flags' for an email that has both of the above flags, I get back just the number 5. I don't see how that relates to the typedef enum definitions above.
Ultimately, I want to set a BOOL value based on whether or not the flag is present. In other words, I'd like to do something like:
BOOL wasSeen = email.flags == MCOMessageFlagSeen;
BOOL isFlagged = email.flags == MCOMessageFlagFlagged;
Of course this doesn't work, but this is the idea. Can anyone suggest how I might accomplish this and/or how to understand the typedef enum?
These flags are used as a bitmask.
This allows storing multiple on/off flags in a single numeric type (be it an unsigned char or an unsigned int). Basically, if a flag is set then its corresponding bit is set too.
For example:
MCOMessageFlagMDNSent = 1 << 5
1<<5 means 1 shifted to the left by 5 bits, so in binary:
00000001 << 5 = 00100000
This works only if no flag overlaps with other flags, which is typically achieved by starting with 1 and shifting it to the left by a different amount for every flag.
Then to check if a flag is set you check if the corresponding bit is set, eg:
if (flags & MCOMessageFlagMDNSent)
The result will be true if the bitwise AND is different from zero, which can happen only if the corresponding bit is set.
You can set a flag easily with OR:
flags |= MCOMessageFlagMDNSent;
or reset it with AND:
flags &= ~MCOMessageFlagMDNSent;
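To connect this to the number 5 from the question: a Seen + Flagged message is 1 | 4. A minimal standalone C sketch (the flag values mirror the enum above; this does not use the MailCore2 headers):
#include <stdio.h>

enum {
    FlagSeen    = 1 << 0,  /* 1 */
    FlagFlagged = 1 << 2   /* 4 */
};

int main(void)
{
    int flags = FlagSeen | FlagFlagged;           /* 1 | 4 == 5 */
    int wasSeen   = (flags & FlagSeen)    != 0;   /* 1 */
    int isFlagged = (flags & FlagFlagged) != 0;   /* 1 */

    printf("flags=%d wasSeen=%d isFlagged=%d\n", flags, wasSeen, isFlagged);
    return 0;
}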
The values of the enum represent the individual bits, so you need bitwise operations to check for flags:
BOOL wasSeen = ( email.flags & MCOMessageFlagSeen ); // check if a bit was set
BTW: your code seems to suggest this is C, not C++. Tagging a question as both is almost always wrong; I suggest you pick the language you are using and remove the other tag.

What does this enum mean?

I saw this line of code today and had no idea what it does.
typedef enum {
SomeOptionKeys = 1 << 0 // ?
} SomeOption;
Some usage or example would be helpful. Thanks!
It looks like it defines an enumerated type that is supposed to contain a set of flags. You'd expect to see more of them defined, like this:
typedef enum {
FirstOption = 1 << 0,
SecondOption = 1 << 1,
ThirdOption = 1 << 2
} SomeOption;
Since they are defined as powers of two, each value corresponds to a single bit in an integer variable. Thus, you can use the bitwise operators to combine them and to test if they are set. This is a common pattern in C code.
You could write code like this that combines them:
SomeOption myOptions = FirstOption | ThirdOption;
And you could check which options are set like this:
if (myOptions & ThirdOption)
{
...
}
The value of SomeOptionKeys is one; this representation is useful when working with flags:
typedef enum {
    flag1 = 1 << 0, // binary 00000000000000000000000000000001
    flag2 = 1 << 1, // binary 00000000000000000000000000000010
    flag3 = 1 << 2, // binary 00000000000000000000000000000100
    flag4 = 1 << 3, // binary 00000000000000000000000000001000
    flag5 = 1 << 4, // binary 00000000000000000000000000010000
    // ...
} SomeOption;
This way each flag has only one bit set, and they can all be combined into a single bitmap.
Edit:
Although, I have to say that I might be missing something, it seems redundant to me to use an enum for this. Since you lose any advantage of enums in this configuration, you may as well use #define:
#define flag1 (1<<0)
#define flag2 (1<<1)
#define flag3 (1<<2)
#define flag4 (1<<3)
#define flag5 (1<<4)
It just sets the enum to the value 1. It is probably intended to indicate that the values are to be powers of 2. The next one would maybe be assigned 1 << 1, etc.
<< is the left shift operator. In general, this is used when you want your enums to mask a single bit. In this case, the shift doesn't actually do anything since it's 0, but you might see it pop up in more complex cases.
An example might look like:
typedef enum {
OptionKeyA = 1<<0,
OptionKeyB = 1<<1,
OptionKeyC = 1<<2,
} OptionKeys;
Then if you had some function that took an option key, you could use the enum as a bitmask to check if an option is set.
int ASet(OptionKeys x) {
    return (x & OptionKeyA);
}
Or if you had a flag bitmap and wanted to set one option:
myflags |= OptionKeyB;

Can I call a "function-like macro" in a header file from a CUDA __global__ function?

This is part of my header file aes_locl.h:
.
.
# define SWAP(x) (_lrotl(x, 8) & 0x00ff00ff | _lrotr(x, 8) & 0xff00ff00)
# define GETU32(p) SWAP(*((u32 *)(p)))
# define PUTU32(ct, st) { *((u32 *)(ct)) = SWAP((st)); }
.
.
Now from the .cu file I have declared a __global__ function and included the header file like this:
#include "aes_locl.h"
.....
__global__ void cudaEncryptKern(u32* _Te0, u32* _Te1, u32* _Te2, u32* _Te3, unsigned char* in, u32* rdk, unsigned long* length)
{
    u32 *rk = rdk;
    u32 s0, s1, s2, s3, t0, t1, t2, t3;
    s0 = GETU32(in + threadIdx.x*(i) ) ^ rk[0];
}
This leads me to the following error message:
error: calling a host function from a __device__/__global__ function is only allowed in device emulation mode
I have sample code where the programmer calls the macro exactly in that way.
Can I call it in this way, or is this not possible at all? If it is not, I would appreciate some hints about the best approach to rewrite the macros and assign the desired value to s0.
thank you very much in advance!!!
I think the problem is not the macros themselves - the compilation process used by nvcc for CUDA code runs the C preprocessor in the usual way and so using header files in this way should be fine. I believe the problem is in your calls to _lrotl and _lrotr.
You ought to be able to check that that is indeed the problem by temporarily removing those calls.
You should check the CUDA programming guide to see what functionality you need to replace those calls to run on the GPU.
The hardware doesn't have a built-in rotate instruction, and so there is no intrinsic to expose it (you can't expose something that doesn't exist!).
It's fairly simple to implement with shifts and masks though. For example, if x is 32 bits, then to rotate left eight bits you can do:
((x << 8) | (x >> 24))
Where x << 8 pushes everything left eight bits (i.e. discarding the leftmost eight bits), x >> 24 pushes everything right twenty-four bits (i.e. discarding all but the leftmost eight bits), and bitwise ORing them together gives the result you need.
// # define SWAP(x) (_lrotl(x, 8) & 0x00ff00ff | _lrotr(x, 8) & 0xff00ff00)
# define SWAP(x) (((x << 8) | (x >> 24)) & 0x00ff00ff | ((x >> 8) | (x << 24)) & 0xff00ff00)
Note that a simpler-looking form such as ((x & 0xff00ff00) >> 8) | ((x & 0x00ff00ff) << 8) is not equivalent: it only swaps the bytes within each 16-bit half, whereas SWAP/GETU32 perform a full byte reversal. If you'd rather avoid the rotate idiom entirely, an equivalent shift-and-mask version (assuming x is an unsigned 32-bit value, as it is in GETU32/PUTU32) is:
# define SWAP(x) (((x) >> 24) | (((x) >> 8) & 0x0000ff00) | (((x) << 8) & 0x00ff0000) | ((x) << 24))
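A quick host-side sanity check of the shift-and-mask version (a plain C sketch; in the real kernel the macro is used unchanged):
#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;

#define SWAP(x) (((x) >> 24) | (((x) >> 8) & 0x0000ff00) | (((x) << 8) & 0x00ff0000) | ((x) << 24))

int main(void)
{
    u32 le_word = 0x44332211u;  /* how the bytes 11 22 33 44 appear when loaded on a little-endian host */
    printf("%08x\n", (unsigned)SWAP(le_word));  /* prints 11223344: a full byte reversal */
    return 0;
}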
The error says what the problem really is: you are calling host (CPU) functions from inside the CUDA function, which is not allowed. The macro itself is expanded by the preprocessor just fine; the trouble is that it expands to calls to _lrotl and _lrotr, which belong to the host library.
You cannot call host functions from GPU code.
You should put device-compatible definitions (does _lrotl() exist in CUDA?) inside the code that nvcc compiles for the device.
