I'm getting into micro-controller hacking, and while I'm very comfortable with bitwise operators and talking directly to the hardware, I'm finding the resulting code very verbose and full of boilerplate. The higher-level programmer in me wants to find an effective but efficient way to clean it up.
For instance, there's a lot of setting flags in registers:
/* Provided by the compiler */
#define SPIE 7
#define SPE 6
#define DORD 5
#define MSTR 4
#define CPOL 3
#define CPHA 2
void init_spi() {
SPCR = (1 << SPE) | (1 << SPIE) | (1 << MSTR) | (1 << SPI2X);
}
Thankfully there are macros that hide the actual port I/O operations (the left-hand side), so it looks like a simple assignment. But all that shift-and-OR syntax is messy to me.
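For context, the usual pattern behind such a register macro is a dereference of a pointer to a volatile 8-bit location, so the assignment really is a store to a fixed I/O address. A sketch with a made-up address:
/* Illustrative only: the real definition and address come from the vendor header. */
#define SPCR (*(volatile uint8_t *)0x4C)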
Requirements are:
it only has to handle up to 8 bits,
the bit positions must be able to be passed in any order, and
should only require set bits to be passed.
The syntax I'd like is:
SPCR = bits(SPE, SPIE, MSTR, SPI2X);
The best I've come up with so far is a combo macro/function:
#include <stdarg.h>
#include <stdint.h>
#define bits(...) __pack_bits(__VA_ARGS__, -1)
uint8_t __pack_bits(uint8_t bit, ...) {
uint8_t result = 0;
va_list args;
va_start(args, bit);
result |= (uint8_t) (1 << bit);
for (;;) {
bit = (uint8_t) va_arg(args, int);
if (bit > 7)
break;
result |= (uint8_t) (1 << bit);
}
va_end(args);
return result;
}
This compiles to 32 bytes on my particular architecture and takes 61-345 cycles to execute (depending on how many bits were passed).
Ideally this should be done in the preprocessor, since the result is a constant and the output machine instructions should be just an assignment of an 8-bit value to a register.
Can this be done any better?
Yes: redefine each macro ABC as (1 << ABC) and you simplify that. ORing together bit masks is a very common idiom that anyone will recognize. Getting the shift positions out of your face will help a lot.
Your code goes from
#define SPIE 7
#define SPE 6
#define DORD 5
#define MSTR 4
#define CPOL 3
#define CPHA 2
void init_spi() {
SPCR = (1 << SPE) | (1 << SPIE) | (1 << MSTR) | (1 << SPI2X);
}
to this
#define BIT(n) (1 << (n))
#define SPIE BIT(7)
#define SPE BIT(6)
#define DORD BIT(5)
#define MSTR BIT(4)
#define CPOL BIT(3)
#define CPHA BIT(2)
void init_spi() {
SPCR = SPE | SPIE | MSTR | SPI2X;
}
This suggestion does assume that the bit-field definitions are used many times more than there are definitions of them.
I feel like there might be some way to use variadic macros for this, but I can't come up with anything that could easily be used as an expression. Consider, however, creating an array literal inside a function that generates your constant:
#define BITS(name, ...) \
char name() { \
char bits[] = { __VA_ARGS__ }; \
char byte = 0, i; \
for (i = 0; i < sizeof(bits); ++i) byte |= (1 << bits[i]); \
return byte; }
/* Define the bit-mask function for this purpose */
BITS(SPCR_BITS, SPE, SPIE, MSTR, SPI2X)
void init_spi() {
SPCR = SPCR_BITS();
}
If your compiler is good, it will see that the entire function is constant at compile-time, and inline the resultant value.
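A purely preprocessor route that stays a constant expression is also possible by dispatching on the argument count. This is only a sketch, assuming C99 variadic macros and at most four set bits (it extends mechanically to eight):
#define BITS_1(a) (1u << (a))
#define BITS_2(a, b) (BITS_1(a) | BITS_1(b))
#define BITS_3(a, b, c) (BITS_2(a, b) | BITS_1(c))
#define BITS_4(a, b, c, d) (BITS_3(a, b, c) | BITS_1(d))
#define BITS_PICK(_1, _2, _3, _4, NAME, ...) NAME
#define bits(...) BITS_PICK(__VA_ARGS__, BITS_4, BITS_3, BITS_2, BITS_1)(__VA_ARGS__)
void init_spi() {
SPCR = bits(SPE, SPIE, MSTR, SPI2X); /* expands to a compile-time constant */
}
Note that a pedantic compiler may warn about the empty __VA_ARGS__ in BITS_PICK when bits() is called with a single argument.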
Why not create your own definitions in addition to the pre-defined ones...
#define BIT_TO_MASK(n) (1 << (n))
#define SPIE_MASK BIT_TO_MASK(SPIE)
#define SPE_MASK BIT_TO_MASK(SPE)
#define DORD_MASK BIT_TO_MASK(DORD)
#define MSTR_MASK BIT_TO_MASK(MSTR)
#define CPOL_MASK BIT_TO_MASK(CPOL)
#define CPHA_MASK BIT_TO_MASK(CPHA)
void init_spi() {
SPCR = SPE_MASK | SPIE_MASK | MSTR_MASK | SPI2X_MASK;
}
Related
I just started learning about embedded systems and I'm having a bit of trouble correctly setting up the LED pin on my stm32f746ng-discovery board. I am not sure whether I am typecasting incorrectly or have the wrong address for the pin; I believe I have addressed everything correctly, but I'm not seeing a value change for the GPIO output data register in the watch window, which leads me to believe there might be an issue with my code.
To define the registers and their respective addresses this is the approach I took:
//Referring to STM32F746xx Memory Map and Register Boundary Addresses:
#define PERIPH_BASE (0x40000000UL)
#define AHB1PERIPH_OFFSET (0x00020000UL)
#define AHB1PERIPH_BASE (PERIPH_BASE + AHB1PERIPH_OFFSET)
#define GPIOI_OFFSET (0x2000UL)
#define GPIOI_BASE (AHB1PERIPH_BASE + GPIOI_OFFSET)
#define RCC_OFFSET (0x3800UL)
#define RCC_BASE (AHB1PERIPH_BASE + RCC_OFFSET)
#define RCC_AHB1EN_R_OFFSET (0x30UL)
#define RCC_AHB1EN_R (*(volatile unsigned int *)(RCC_BASE + RCC_AHB1EN_R_OFFSET)) //register
#define MODE_R_OFFSET (0x00UL)
#define GPIOI_MODE_R (*(volatile unsigned int *)(GPIOI_BASE + MODE_R_OFFSET)) //register
#define OD_R_OFFSET (0x14UL)
#define GPIOI_OD_R (*(volatile unsigned int *)(GPIOI_BASE + OD_R_OFFSET)) //register
#define GPIOIEN (1U << 0)
#define PIN_1 (1U << 1)
#define LED_PIN PIN_1
I located the above hex addresses in the STM32F746xx datasheet's memory map and the RM0385 reference manual for the STM32F74xxx.
The code below is the main function where I try to change the bit value of the GPIOI_OD_R register:
int main(void)
{
/* 1. Enable clock access for GPIOI.*/
/* 1.1 I use the OR operator to only change the first bit instead of the whole 32bit chain. */
RCC_AHB1EN_R |= GPIOIEN;
/* 2. Sets PIN_1 as output.*/
GPIOI_MODE_R |= (1U << 2);
GPIOI_MODE_R &=~(1U << 3);
while(1)
{
/* 3. Sets PIN_1 high */
GPIOI_OD_R |= LED_PIN;
}
}
The problem I am having is that the mode bits for PIN_1 stay at 00 instead of 01, which is the value required to put GPIOI PIN_1 (the LED) into general-purpose output mode, and the GPIOI_OD_R register does not update either.
I got the above parameters from the RM0385 reference manual for the STM32F74xxx.
However, when running the code, the GPIOI_MODE_R and GPIOI_OD_R bit values do not change in the watch window.
I need the values of the registers to be correct to set the LED PIN high on my stm32f746ng-discovery board.
I tried combining the GPIOI_MODE_R setting operations into a single one: GPIOI_MODE_R = (GPIOI_MODE_R | (1U << 2)) & ~(1U << 3); however, that causes the program to lose connection with the debugger.
I am using the STM32CubeIDE with the MCU GCC compiler.
Thanks in advance, and if I have referenced something incorrectly please excuse me; I'm new to embedded systems.
I found the issue and realised that I was using the wrong bit position for GPIOIEN. It should be defined as #define GPIOIEN (1U << 8), but I had mistakenly used the bit position of GPIOAEN: #define GPIOIEN (1U << 0).
A super silly mistake from my side, but I think it is a mistake that many beginners like myself may make. The only advice I can give from this experience is to be extra accurate and properly focused when reading through a board's reference manual and datasheet. It also helps to stay consistent in your code, which makes the debugging process much easier: because I followed this methodology, I was able to trace each step of my code and compare what I expected the values to be against what I was actually getting.
The final code solution I have attached below:
//Referring to STM32F746xx Memory Map and Register Boundary Addresses:
#define PERIPH_BASE (0x40000000UL)
#define AHB1PERIPH_OFFSET (0x00020000UL)
#define AHB1PERIPH_BASE (PERIPH_BASE + AHB1PERIPH_OFFSET)
#define GPIOI_OFFSET (0x2000UL)
#define GPIOI_BASE (AHB1PERIPH_BASE + GPIOI_OFFSET)
#define RCC_OFFSET (0x3800UL)
#define RCC_BASE (AHB1PERIPH_BASE + RCC_OFFSET)
#define RCC_AHB1EN_R_OFFSET (0x30UL)
#define RCC_AHB1EN_R (*(volatile unsigned int *)(RCC_BASE + RCC_AHB1EN_R_OFFSET))
#define MODE_R_OFFSET (0x00UL)
#define GPIOI_MODE_R (*(volatile unsigned int *)(GPIOI_BASE + MODE_R_OFFSET))
#define OD_R_OFFSET (0x14UL)
#define GPIOI_OD_R (*(volatile unsigned int *)(GPIOI_BASE + OD_R_OFFSET))
#define GPIOIEN (1U << 8) // updated from (1U << 0)
#define PIN_1 (1U << 1)
#define LED_PIN PIN_1
int main(void)
{
/* 1. Enable clock access for GPIOI.*/
/* 1.1 I use the OR operator to only change the GPIOIEN bit instead of the whole 32-bit register. */
RCC_AHB1EN_R |= GPIOIEN;
/* 2. Sets PIN_1 as output.*/
GPIOI_MODE_R |= (1U << 2);
GPIOI_MODE_R &=~(1U << 3);
while(1)
{
/* 3. Sets PIN_1 high */
GPIOI_OD_R |= LED_PIN;
}
}
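As a side note, the two GPIOI_MODE_R statements generalize: for any pin n, the MODER field occupies bits 2n+1 and 2n, and the value 01 selects general-purpose output mode. A sketch of a helper along those lines (my own naming, not something from the reference manual):
#define GPIO_MODE_OUTPUT(moder_reg, pin) \
do { \
(moder_reg) &= ~(3U << ((pin) * 2U)); /* clear both mode bits */ \
(moder_reg) |= (1U << ((pin) * 2U)); /* 01 = general-purpose output */ \
} while (0)
/* Equivalent to the two GPIOI_MODE_R statements above: */
GPIO_MODE_OUTPUT(GPIOI_MODE_R, 1);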
I'm passing enumeration constants as bit-flags to a function that expects the enumeration type as input, like this:
// Enumeration type
typedef enum
{
LED_RED = (1 << 0),
LED_GREEN = (1 << 1),
LED_YELLOW = (1 << 2),
LED_ORANGE = (1 << 3),
} LedType;
...
// Function declaration
void setOnLed(LedType led);
...
// Function call
setOnLed(LED_RED | LED_GREEN | LED_YELLOW);
When I call the function, I get a warning:
warning: #188-D: enumerated type mixed with another type
The warning is because LED_RED | LED_GREEN | LED_YELLOW is converted to an integer and is not a LedType.
I could avoid the warning by adding LED combinations to the LedType enumeration but that means I have to add all possible combinations... and if I add more LED options to the enum, it will become really messy...
I could use arrays as an input to the function, but that would require more code at the call site; I prefer a simple function call to set LEDs.
I am programming an ARM based micro-controller (STM32) using Keil µVision IDE.
My question
Is there a simple safe way to avoid this warning or another way to encapsulate all the LED's in one meaningful type/object so I can easily pass them to a function and process them in a loop?
The full story
I'm writing a program for an ARM based MCU which is connected to several LEDs. In many places in the program we will turn on/off, toggle and blink different combinations of the LEDs. To keep this clean and simple, I want to write several functions that take as input any combination of the LEDs and perform the same operation on all of them.
I created a struct named LedConfig with the hardware configurations of a LED and an array of LedConfig that contains the configuration of each LED:
typedef struct
{
// Hardware configurations of a LED
...
} LedConfig;
...
LedConfig LedArry[LEDS_LED_COUNT] =
{
[0] = { /* Red LED config */ },
[1] = { /* Green LED config */ },
[2] = { /* Yellow LED config */ },
[3] = { /* Orange LED config */ }
};
Now, I would like a simple way to pass several LED's to a function and process them in a loop.
I created a number of bit flags for each LED:
// Number of LED's defined in the system
#define LED_COUNT 4
// LED flags, for usage in LED's function
#define LED_RED (1 << 0)
#define LED_GREEN (1 << 1)
#define LED_YELLOW (1 << 2)
#define LED_ORANGE (1 << 3)
Defined a function:
void setOnLed(uint32_t led)
{
uint32_t bitMask = 1;
for(int i = 0; i < LED_COUNT; i++)
{
if(led & bitMask)
{
LedConfig* ledConfig = &LedArry[i];
// Turn on LED ...
}
bitMask <<= 1;
}
}
Now I can pass the LED's to the function with bitwise or operation:
setOnLed(LED_RED | LED_GREEN | LED_YELLOW);
This works fine but...
I would prefer to use an enum instead of defines for the LED flags, in order to encapsulate them in one meaningful type/object.
I replaced the defines with an enumeration:
typedef enum
{
LED_RED = (1 << 0),
LED_GREEN = (1 << 1),
LED_YELLOW = (1 << 2),
LED_ORANGE = (1 << 3),
} LedType;
And modified setOnLed function input to get the enumeration type:
void setOnLed(LedType led)
{
// ...
}
When I call the function with several LED's:
setOnLed(LED_RED | LED_GREEN | LED_YELLOW);
I get the warning:
warning: #188-D: enumerated type mixed with another type
Note: uint32_t is from stdint.h and is an unsigned 32-bit integer.
I would prefer to use an enum instead of defines for LED flags because I prefer to encapsulate them in one meaningful type/object.
This is perfectly fine but keep two things in mind:
An enumeration constant, LED_RED in your case, always has type int, which is signed.
The enumerated type itself, LedType in your case, has an implementation-defined type. The compiler can pick a smaller integer type if the values used fit inside one.
Generally, you'll want to avoid signed types in embedded systems, because of integer promotion and various hiccups with bitwise operators.
One such hiccup is left-shifting the integer constant 1. It is of type int and signed, so on a system with a 32-bit int, 1 << 31 is an undefined-behavior bug. Therefore, always give your integer constants an unsigned suffix: use 1u << n instead of 1 << n.
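To make that concrete, on a platform where int is 32 bits wide:
uint32_t ok = 1u << 31; /* well-defined: unsigned shift, value 0x80000000 */
uint32_t bad = 1 << 31; /* undefined behavior: shifts into the sign bit of a signed int */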
I get the warning: warning: #188-D: enumerated type mixed with another type
Yeah, because the function expects a uint32_t but you pass an int, since all operands in the expression LED_RED | LED_GREEN | LED_YELLOW are int: they are enumeration constants, as described above. You should rewrite the function to take LedType as a parameter instead.
Example:
// led.h
typedef enum
{
LED_NONE = 0u,
LED_RED = 1u << 0,
LED_GREEN = 1u << 1,
LED_YELLOW = 1u << 2,
LED_ORANGE = 1u << 3,
LED_ALL = LED_RED | LED_GREEN | LED_YELLOW | LED_ORANGE
} led_t;
#define LED_PORT PORTX
void set_led (led_t leds);
// led.c
#include "led.h"
void set_led (led_t leds)
{
// this assuming you'll want to use the function both to set and clear leds
uint32_t led_port = LED_PORT;
led_port &= (uint32_t) ~LED_ALL;
led_port |= (uint32_t) leds;
LED_PORT = led_port;
}
The (uint32_t) casts are strictly speaking not necessary but will sate pedantic compilers and MISRA-C checkers.
I am reviewing the open source AMD GPU drivers for Linux. I noticed something I haven't seen before, and I would like to know its purpose. On line 1441 of the sid.h file, there is a series of defines where integers are bit-shifted left by 0. Wouldn't this just result in the original integer being operated on?
Here is an excerpt and a link to the head
#define VGT_EVENT_INITIATOR 0xA2A4
#define SAMPLE_STREAMOUTSTATS1 (1 << 0)
#define SAMPLE_STREAMOUTSTATS2 (2 << 0)
#define SAMPLE_STREAMOUTSTATS3 (3 << 0)
https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/amd/amdgpu/sid.h#L1441
Also, I am learning to access the performance counter registers of AMD GPUs in order to calculate the GPU load. Any tips on that would be appreciated as well.
Things like that could be done just for the sake of consistency (not necessarily applicable to your specific case). For example, I can describe a set of single-bit flags as
#define FLAG_1 0x01
#define FLAG_2 0x02
#define FLAG_3 0x04
#define FLAG_4 0x08
or as
#define FLAG_1 (1u << 0)
#define FLAG_2 (1u << 1)
#define FLAG_3 (1u << 2)
#define FLAG_4 (1u << 3)
In the first line of the latter approach I did not have to shift by 0. But it just looks more consistent that way and emphasizes the fact that FLAG_1 has the same nature as the rest of the flags. And 0 acts as a placeholder for a different value, if I some day decide to change it.
You can actually see exactly that in the linked code with shift by 0 in the definitions of DYN_OR_EN and DYN_RR_EN macros.
The approach can be extended to multi-bit fields within a word, like in the following (contrived) example
// Bits 0-3 - lower counter, bits 4-7 - upper counter
#define LOWER_0 (0u << 0)
#define LOWER_1 (1u << 0)
#define LOWER_2 (2u << 0)
#define LOWER_3 (3u << 0)
#define UPPER_0 (0u << 4)
#define UPPER_1 (1u << 4)
#define UPPER_2 (2u << 4)
#define UPPER_3 (3u << 4)
unsigned packed_counters = LOWER_2 + UPPER_3; /* or `LOWER_2 | UPPER_3` */
Again, the shifts by 0 bits are present purely for visual consistency, as are the shifts of 0 values.
You can actually see exactly that in the linked code with shift by 0 in the definitions of LC_XMIT_N_FTS and LC_XMIT_N_FTS_MASK macros.
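Reading the fields back out follows the same pattern; a short sketch with hypothetical LOWER_MASK / UPPER_MASK names:
#define LOWER_MASK (0xFu << 0) /* bits 0-3 */
#define UPPER_MASK (0xFu << 4) /* bits 4-7 */
unsigned lower = (packed_counters & LOWER_MASK) >> 0; /* recovers 2 */
unsigned upper = (packed_counters & UPPER_MASK) >> 4; /* recovers 3 */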
I have 11 flags defined as:
#define F1 1
#define F2 2
#define F3 3
#define F4 4
...
#define F11 11
In some function I then create an integer which can include either of those flags, for example:
int a = (1 << F1) | (1 << F5) | (1 << F11) | (1 << F8);
This then gets passed into a function which needs to decode which flags are set in order to set specific bits in specific registers. So my question is, what is the most efficient way to check which flags are set. Right now I have 11 if's like:
void foo(int a)
{
if ((a & (1 << F1)) >> F1) {
// Set bit 5 in register A.
}
if ((a & (1 << F2)) >> F2) {
// Set bit 3 in register V.
}
if ((a & (1 << F3)) >> F3) {
// Set bit 2 in register H.
}
if ((a & (1 << F4)) >> F4) {
// Set bit 1 in register V.
}
// And so on, for all 11 flags.
}
P.S.
This is for an 8-bit microcontroller.
Just use:
typedef enum{
FLAG1 = 1, // or 0x01
FLAG2 = 2,
FLAG3 = 4,
...
FLAG8 = 0x80
} flags;
Then in main just check
if(value & FLAGN)
In C there is no difference between 1 and any other non-zero number in an if statement; it just checks whether the value is zero or non-zero.
And setting is the same:
value = FLAG1 | FLAG2 | FLAG8;
You can also use defines, of course.
For clarification: the maximum number of flags an N-bit type can hold is N, so for your 11 flags you need a larger type (if the compiler supports bigger data types), like uint16_t.
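A sketch of what that could look like for the 11 flags in the question (the register actions are placeholders copied from the question):
#include <stdint.h>
typedef enum {
FLAG1 = 1u << 0,
FLAG2 = 1u << 1,
/* ... FLAG3 through FLAG10 follow the same pattern ... */
FLAG11 = 1u << 10
} flags;
void foo(uint16_t a)
{
if (a & FLAG1) {
// Set bit 5 in register A.
}
if (a & FLAG2) {
// Set bit 3 in register V.
}
// And so on for the remaining flags.
}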
C's if statement and logical operators do not distinguish between 1 and other non-zero values (although the logical operators themselves produce 1 for true). Therefore, there is no difference between (a & (1 << F3)) >> F3 and a & (1 << F3) in the context of a logical expression: if one evaluates to true, so does the other, and vice versa. Hence, this should work:
if (a & (1 << F1)) {
// Set bit 5 in register A.
}
Note: I assume your F macros really are bit positions as shown (e.g. #define F11 11) rather than masks like 1024, because you use them as the second operand of <<.
I'm facing some issues with this macro:
#define SHOW(val) PORTB = ((PORTB & 0xFF^OUT_PINS) | ((val) & OUT_PINS));
Let's say I have (defined earlier)
#define OUT_PINS 0b00011110
and PORTB has some values on other bits that I want to preserve.
The macro was intended to apply val to PORTB (OUT_PINS only) and leave the rest alone.
However, I'm just getting 1's on all output pins.
What's wrong with my code?
Okay so this was a silly mistake.
#define SEG_DOT _BV(PB1)
#define SEG_DIAG1 _BV(PB2)
#define SEG_DIAG2 _BV(PB3)
#define SEG_HORIZ _BV(PB4)
#define BUTTON _BV(PB0)
#define OUT_PINS SEG_DOT | SEG_DIAG1 | SEG_DIAG2 | SEG_HORIZ
#define IN_PINS BUTTON
#define BTN() ((PINB & BUTTON) == 0)
#define SHOW(val) PORTB = ((PORTB & ~OUT_PINS) | ((val) & OUT_PINS));
As you can see, the OUT_PINS macro does not have parentheses around it, so when it is expanded inside the SHOW macro, operator precedence turns the whole expression into nonsense.
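To make the precedence problem concrete, here is roughly what the masking half expands to without the parentheses (my own expansion, assuming _BV(n) is (1 << (n))):
/* PORTB & ~OUT_PINS becomes: */
PORTB & ~_BV(PB1) | _BV(PB2) | _BV(PB3) | _BV(PB4)
/* Since & binds tighter than |, only SEG_DOT is actually masked out of PORTB;
the other three segment bits are unconditionally ORed in, and the (val) & OUT_PINS
half ORs them in again, so those pins read back as 1 regardless of val. */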
Two possible fixes:
#define OUT_PINS (SEG_DOT | SEG_DIAG1 | SEG_DIAG2 | SEG_HORIZ)
OR
#define SHOW(val) PORTB = ((PORTB & ~(OUT_PINS)) | ((val) & (OUT_PINS)));
I like the first fix better, because the second looks very Lispy. Though you could, of course, use both.