Preamble: after working a couple of years as an application developer, I found the world of software engineering more obscure than it was before. The reason is that the real stuff is hidden under zillions of layers of abstraction: OS, frameworks, etc. The young generation is deprived of the pleasure of working with PDP-like machines where all programming was done by toggling electrical switches. Another problem is the ephemeral nature of modern programming languages. Once there was Python 2.x; now it is deprecated, and there is Python 3.x, which in its turn will be deprecated in a couple of months. Idem for other languages. ANSI C looks like the Pyramid of Cheops: it was there in the 70's and I don't doubt it will still be there after the Sun becomes a red giant.
It seems that now the only way to understand the interaction between hardware and software is to play with embedded development. From a pedagogical point of view, physical chips are very handy because they let you tackle the most difficult part of the C language, namely pointers. When coding in an OS environment, the */& notation is still very confusing because it refers to some location somewhere inside virtual memory, and before you can understand what virtual memory is, you have to read a couple of monographs about OS development, etc. You may find it stupid, but I really do want to know which transistor is holding my bit right now. At the very least, I can wire a physical pin voltage to the programming abstractions.
Currently I am working with Atmel chips and the WinAVR package because of the numerous textbooks and accessible hardware. Though all the books promise to teach AVR coding in plain C, the reality is that all pointers are hidden behind macros like PORTA, DDRB, etc. All code examples include the header file 'io.h', which in its turn refers to other header files specific to a given chip, like 'iomx8.h'. So far, I cannot find the macro definitions anywhere in these headers. The code to raise the voltage on physical pin 14 of an ATmega168 looks like
DDRB = 0x01;
PORTB = 0x01;
Fortunately, the Microchip site provides some basic documents where it is stated, for example, that if I want to raise the voltage on physical pin 14, I need to follow these steps:
unsigned char *ddrB;
ddrB = (unsigned char*)0x24; // the address of ddrB is 0x24
*ddrB |= 0x01; // configure pin 0 as an output (low-impedance, high-current state)
unsigned char *portB;
portB = (unsigned char*)0x25;
*portB |= 0x01; // voltage on
*portB &= ~(0x01); // voltage off
Unfortunately, this is the only info I have found after a week of lurking. Now I am going through USART programming, and things get more complicated with all these UBRR0H, UCSR0C registers. Since the provided header files don't seem to contain macro definitions for any register, where else can I find them?
A similar question was asked several years ago: accessing AVR registers with C?. However, the provided answers were somewhat unhelpful, apart from the clue that GCC itself can map the mythical PORTB to a real physical location. Could someone describe the mechanism behind the mapping?
From a memory-mapping standpoint: The general purpose registers, special function/I/O registers, and SRAM share non-overlapping ranges of a single address space, as described in the datasheets for the various processors in the AVR series. All of your pointers will reference this memory space, unless annotated as pointers to PROGMEM (which causes different instructions to be emitted). The reference is made without any sort of virtual memory mapping.
For example, the memory map of the ATtiny25/45/85 is shown on page 18 of its datasheet.
Your linker is aware of this memory map and will place variables accordingly. For example, a global variable declared in one of your compilation units will end up at an address at or above 0x0060 in the example device described above, so that it lands in SRAM.
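For instance, a trivial sketch (the address range assumes the ATtiny85 map described above):

#include <stdint.h>

uint8_t counter; /* the linker assigns this a data address of 0x0060 or above, clear of the register file and I/O space */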
From an instruction encoding standpoint: Although there is one address space, there is special functionality reserved for certain important regions. For example, the IN and OUT instructions have six bits in their instruction encoding which directly refer to one of the 64 I/O addresses, corresponding to data addresses 0x20-0x5F (i.e. the range [0x20, 0x60)).
The IN and OUT instructions are unique in their ability to load and store to a fixed address encoded directly in the instruction, since the normal load and store instructions require an indirect load with the 'Z' register being loaded first.
As a result, when the compiler sees memory operations on a fixed I/O register, it may generate these more efficient instructions. However, a normal load/store via a pointer will have the same effect (although requiring a different number of clock cycles). For extended I/O registers that don't fit into the first 64 (e.g. OSCCAL on an ATmega328P), normal load/store instructions are always generated.
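As a hedged illustration of the difference (avr-gcc with -mmcu set is assumed; the exact instructions emitted depend on the optimization level):

#include <avr/io.h>

void toggle_demo(void)
{
    PORTB |= (1 << 0);               /* fixed I/O address: the compiler can emit a single SBI */

    volatile uint8_t *port = &PORTB; /* the same register through a runtime pointer */
    *port |= (1 << 0);               /* forces an explicit load/store sequence instead */
}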
Short answer - hidden away in the headers included from Atmel is a collection of macros that create pointers to the register locations. If you want to see the source, as well as other necessary headers like interrupt.h, they are in WinAVR-20100110/avr/include/.
Here's a brief overview of the process:
Your Makefile defines the device to be used and then passes it to the compiler, which predefines a matching macro:
DEVICE = atmega2560
...
-mmcu=$(DEVICE)   # avr-gcc then predefines __AVR_ATmega2560__
You then include <avr/io.h>, which automatically includes the necessary headers based on your device:
// In main source file
#include <avr/io.h>
// In io.h
#include <avr/sfr_defs.h>
// ...
#elif defined (__AVR_ATmega2560__)
# include <avr/iom2560.h>
// In sfr_defs.h
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
#define __SFR_OFFSET 0x20
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
// In iom2560.h
#include <avr/iomxx0_1.h>
// Other device specific definitions
// In iomxx0_1.h
#define PINA _SFR_IO8(0X00)
// Other device family shared definitions
So if you unroll all of that, what you get is a dereference of a volatile pointer to the register address. Whenever you use PINA in your code, the preprocessor expands it through all of the macros:
PINA
_SFR_IO8(0X00)
_MMIO_BYTE((0X00) + __SFR_OFFSET)
(*(volatile uint8_t *)((0X00) + 0x20))
That is, PINA denotes the volatile 8-bit memory location at address 0x20. The internal chip architecture then maps that address to the appropriate peripheral register whenever it is accessed.
Different devices have different register addresses and offsets. If you want to define your own, you'll need to check out the relevant datasheet. For most AVR chips, there is a section towards the end titled "Register Summary" that lists all of the register addresses and names of the individual control bits. In my experience (for AVR, at least), the names of the registers and bits found in the datasheet are exactly what they are defined as in the io.h files.
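For example, if a macro were missing you could define your own the same way. A sketch, using a register relevant to the USART question above (if I read the ATmega168 Register Summary correctly, UBRR0H sits at data address 0xC5; verify against your own datasheet):

#include <stdint.h>

#define MY_UBRR0H (*(volatile uint8_t *)0xC5) /* extended I/O: plain memory address, no __SFR_OFFSET */

/* usage: MY_UBRR0H = (uint8_t)(ubrr >> 8); */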
Also notice the use of "uint8_t" rather than "char." It's common (and highly encouraged) to use the bit-width specific definitions found in <stdint.h> to specify signed/unsigned and 8/16/32 bit variables whenever appropriate. Since AVR is 8-bit, any use of 16 or 32 bit (or float) variables will require multiple clock cycles for each operation. In this case, stdint.h should have:
typedef unsigned char uint8_t;
Related
I'm trying to read the clock source value of a given generic clock generator on the Samd21 MCU.
The datasheet says that if I wish to read the GENCTRL register (containing the clock source value), I need to "do a 8-bit write" and read the register afterwards. How can I do that, given that the register is 32-bit?
I'm afraid that by doing the following, I am actually changing generic clock generator X's configuration:
GCLK->GENCTRL.reg = GCLK->GENCTRL.reg & 0xFFFFFFF0 | 0x0000000X
Keep in mind that the lower 8 bits of GENCTRL are reserved for the generic clock generator's ID.
Below is the part of the datasheet containing the instructions for reading the GENCTRL register.
The ARM core registers are 32-bit. The peripheral registers will (in general) be arranged at 4-byte offsets, but will not always implement all 32 bits that this implies.
This is most obvious when the upper bits of a peripheral register are 'read as zero, write ignored'. You might occasionally see a newer or more featured version of the peripheral where some of these unused bits become used in the future.
Depending on exactly how a specific peripheral is connected to the core, it is generally possible to perform byte, half-word or word accesses to any region of memory. Provided this is supported, only the relevant bytes will be updated. Where there is a restriction (for example a 32-bit APB bus where only byte access is supported), this should be clearly identified in the documentation. With an AArch64 processor, it is even possible to write two registers at once!
Do note that the peripheral 'knows' the access size (at least the information is present on the internal bus), so it is possible to specify different behaviour for a byte access as a word (even if this is the sort of confusing behaviour that is best avoided). To generalise, any memory mapped peripheral is more of an observer of the bus than a true implementation of memory - the designer is free to play tricks with the full address/data/control bus bit combinations, and implement bitmasks, read/modify/write, access locks, magic values, etc.
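For the original GENCTRL question, one hedged sketch of "do an 8-bit write, then read" is to alias the 32-bit register with a byte pointer, assuming the bus supports byte access as discussed above (the address below is illustrative; take the real one from the SAMD21 datasheet):

#include <stdint.h>

#define GENCTRL_REG (*(volatile uint32_t *)0x40000C04u) /* illustrative address */

static uint32_t read_genctrl(uint8_t gen_id)
{
    *(volatile uint8_t *)&GENCTRL_REG = gen_id; /* 8-bit write: touches only the low ID byte */
    return GENCTRL_REG;                         /* 32-bit read of the selected generator's config */
}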
As a beginner C programmer, I am wondering what would be the best easy-to-read and easy-to-understand solution for setting control bits in a device. Are there any standards? Any example code to mimic? Google didn't give any reliable answer.
For example, I have the control block map of a DMA controller.
The first way I see would be to simply set the needed bits. It requires a bunch of explanatory comments and doesn't seem all that professional.
DMA_base_ptr[DMA_CONTROL_OFFS] = 0b10001100;
The second way I see is to create a bit field. I'm not sure if this is the one I should stick to, since I have never encountered it being used this way (unlike the first option I mentioned).
struct DMA_control_block_struct
{
unsigned int BYTE:1;
unsigned int HW:1;
// etc
} DMA_control_block_struct;
Is one of the options better than the other one? Are there any options I just don't see?
Any advice would be highly appreciated
The problem with bit fields is that the C standard does not dictate that the order in which they are defined is the same as the order in which they are implemented. So you may not be setting the bits you think you are.
Section 6.7.2.1p11 of the C standard states:
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
As an example, look at the definition of struct iphdr, which represents an IP header, from the /usr/include/netinet/ip.h file on Linux:
struct iphdr
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
unsigned int ihl:4;
unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
unsigned int version:4;
unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
u_int8_t tos;
...
You can see here that the bit-fields are placed in a different order depending on the implementation. You also shouldn't copy this specific check into your own code, because the behavior is implementation dependent; it is acceptable in this file only because it ships as part of the system. Other systems may implement bit-fields in different ways.
So don't use a bitfield.
The best way to do this is to set the required bits. However, it would make sense to define named constants for each bit and to perform a bitwise OR of the constants you want to set. For example:
const uint8_t BIT_BYTE = 0x1;
const uint8_t BIT_HW = 0x2;
const uint8_t BIT_WORD = 0x4;
const uint8_t BIT_GO = 0x8;
const uint8_t BIT_I_EN = 0x10;
const uint8_t BIT_REEN = 0x20;
const uint8_t BIT_WEEN = 0x40;
const uint8_t BIT_LEEN = 0x80;
DMA_base_ptr[DMA_CONTROL_OFFS] = BIT_LEEN | BIT_GO | BIT_WORD;
Other answers have already covered most of the stuff, but it might be worthwhile to mention that even if you can't use the non-standard 0b syntax, you can use shifts to move the 1 bit into position by bit number, i.e.:
#define DMA_BYTE (1U << 0)
#define DMA_HW (1U << 1)
#define DMA_WORD (1U << 2)
#define DMA_GO (1U << 3)
// …
Note how the last number matches the "bit number" column in the documentation.
The usage for setting and clearing bits doesn't change:
#define DMA_CONTROL_REG DMA_base_ptr[DMA_CONTROL_OFFS]
DMA_CONTROL_REG |= DMA_HW | DMA_WORD; // set HW and WORD
DMA_CONTROL_REG &= ~(DMA_BYTE | DMA_GO); // clear BYTE and GO
The old-school C way is to define a bunch of bits:
#define WORD 0x04
#define GO 0x08
#define I_EN 0x10
#define LEEN 0x80
Then your initialization becomes
DMA_base_ptr[DMA_CONTROL_OFFS] = WORD | GO | LEEN;
You can set individual bits using |:
DMA_base_ptr[DMA_CONTROL_OFFS] |= I_EN;
You can clear individual bits using & and ~:
DMA_base_ptr[DMA_CONTROL_OFFS] &= ~GO;
You can test individual bits using &:
if(DMA_base_ptr[DMA_CONTROL_OFFS] & WORD) ...
Definitely don't use bitfields, though. They have their uses, but not when an external specification defines that the bits are in certain places, as I assume is the case here.
See also questions 20.7 and 2.26 in the C FAQ list.
There is no standard layout for bit fields; the mapping and bit order depend on the compiler. Binary literals such as 0b0000 are not standardized either. The usual way is to define hexadecimal values for each bit. For example:
#define BYTE (0x01)
#define HW (0x02)
/*etc*/
When you want to set bits, you can use:
DMA_base_ptr[DMA_CONTROL_OFFS] |= HW;
Or you can clear bits with:
DMA_base_ptr[DMA_CONTROL_OFFS] &= ~HW;
Modern C compilers handle trivial inline functions just fine – without overhead. I’d make all of the abstractions functions, so that the user doesn’t need to manipulate any bits or integers, and is unlikely to abuse the implementation details.
You can of course use constants and not functions for implementation details, but the API should be functions. This also allows using macros instead of functions if you’re using an ancient compiler.
For example:
#include <stdbool.h>
#include <stdint.h>
typedef union DmaBase {
volatile uint8_t u8[32];
} DmaBase;
static inline DmaBase *const dma1__base(void) { return (void*)0x12340000; }
// instead of DMA_CONTROL_OFFS
static inline volatile uint8_t *dma_CONTROL(DmaBase *base) { return &(base->u8[12]); }
// instead of constants etc
static inline uint8_t dma__BYTE(void) { return 0x01; }
inline bool dma_BYTE(DmaBase *base) { return *dma_CONTROL(base) & dma__BYTE(); }
inline void dma_set_BYTE(DmaBase *base, bool val) {
if (val) *dma_CONTROL(base) |= dma__BYTE();
else *dma_CONTROL(base) &= ~dma__BYTE();
}
inline bool dma1_BYTE(void) { return dma_BYTE(dma1__base()); }
inline void dma1_set_BYTE(bool val) { dma_set_BYTE(dma1__base(), val); }
Such code should be machine generated: I use gsl (of 0mq fame) to generate those based on a template and some XML input listing the details of the registers.
You could use bit-fields, despite what all the fear-mongers here have been saying. You would just need to know how the compiler(s) and system ABI(s) you intend your code to work with define the "implementation defined" aspects of bit-fields. Don't be scared off by pedants putting words like "implementation defined" in bold.
However what others so far seem to have missed out on are the various aspects of how memory-mapped hardware devices might behave that can be counter-intuitive when dealing with a higher-level language like C and the optimization features such languages offer. For example every read or write of a hardware register may have side-effects sometimes even if bits are not changed on the write. Meanwhile the optimizer may make it difficult to tell when the generated code is actually reading or writing to the address of the register, and even when the C object describing the register is carefully qualified as volatile, great care is required to control when I/O occurs.
Perhaps you will need to use some specific technique defined by your compiler and system in order to properly manipulate memory-mapped hardware devices. This is the case for many embedded systems. In some cases compiler and system vendors will indeed use bit-fields, just as Linux does in some cases. I would suggest reading your compiler manual first.
The bit description table you quote appears to be for the control register of the Intel Avalon DMA controller core. The "read/write/clear" column gives a hint as to how a particular bit behaves when it is read or written. The status register for that device has an example of a bit where writing a zero will clear the bit value, but it may not read back the same value as was written -- i.e. writing the register may have a side-effect in the device, depending on the value of the DONE bit. Interestingly, they document the SOFTWARERESET bit as "RW", but then describe the procedure as writing a 1 to it twice to trigger the reset, and they also warn: "Executing a DMA software reset when a DMA transfer is active may result in permanent bus lockup (until the next system reset). The SOFTWARERESET bit should therefore not be written except as a last resort." Managing a reset in C would take some careful coding no matter how you describe the register.
As for standards, well ISO/IEC have produced a "technical report" known as "ISO/IEC TR 18037", with the subtitle "Extensions to support embedded processors". It discusses a number of the issues related to using C to manage hardware addressing and device I/O, and specifically for the kinds of bit-mapped registers you mention in your question it documents a number of macros and techniques available through an include file they call <iohw.h>. If your compiler provides such a header file, then you might be able to use these macros.
There are draft copies of TR 18037 available, the latest being TR 18037(2007), though it makes for rather dry reading. However, it does contain an example implementation of <iohw.h>.
Perhaps a good example of a real-world <iohw.h> implementation is in QNX. The QNX documentation offers a decent overview (and an example, though I would strongly suggest using enums for integer values, never macros): QNX <iohw.h>
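A rough sketch based on the macro names in the TR 18037 draft (verify against your toolchain's <iohw.h>); the register designator DMA_CONTROL is hypothetical and would come from a platform-supplied header:

#include <iohw.h>  /* only if your toolchain ships TR 18037 support */

void start_dma(void)
{
    ioor(DMA_CONTROL, 0x08u);   /* read-OR-write: set the GO bit */
    ioand(DMA_CONTROL, ~0x01u); /* read-AND-write: clear the BYTE bit */
}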
You should make sure to initialize the bits to a known default value when you declare the variable that stores them. In C, declaring a variable just reserves a block of memory whose size is based on its type. If you don't initialize the variable, you can encounter undefined/unexpected behavior, since its value will be affected by whatever state that block of memory held before you declared it. Initializing the variable to a default value clears the block's existing state and puts it into a known state.
As far as readability goes, you can use a bit field to store the values of the bits. A bit field lets you store the bit values in a struct, which makes the code easier to organize, since you can use dot notation. Also, as a best practice, comment the declaration of the bit field to explain what the different fields are used for. I hope this answers your question. Good luck with your C programming!
All the sample code I can find accesses ports like so:
GPIO_SetBits(GPIOE, GPIO_Pin_9 | GPIO_Pin_13);
Which looked OK at first, until I tried to reference the LEDs (on my port E) by index (0-7). A switch or a LUT is a solution, but I don't like either. Is it possible to declare e.g. a uint8_t and map it to a specific range of pins of a certain port?
The standard Cortex-M3/M4 memory map allows for the CPU to have so-called "bit-band" aliases of regions, in which writing to each word in the bit-band alias performs an automatic read-modify-write to alter the target bit in the corresponding region.
Taking the STM32F411 (Cortex-M4F) manual I have to hand as an example, that shows the peripheral region 0x40000000-0x400FFFFF is covered by a bit-band alias in the region 0x42000000-0x43FFFFFF. Thus for instance on that device (if I've got my maths right) each word from 0x42420280-0x42420300 corresponds to a bit in the GPIO port E data register GPIOE_ODR at 0x40021014, so this:
volatile int *leds = (void *)0x42420280;
leds[x] = 1; /* only bit 0 of the bit-band word actually holds data */
makes the hardware perform the tidily-abstracted equivalent of this:
volatile int *leds = (void *)0x40021014;
int val = *leds;
val |= (1 << x);
*leds = val;
If you have a suitable device, don't want to update multiple bits at once, and don't mind the extra overhead vs. a single access to the regular register in some cases (e.g. writing the set/clear registers instead of the data), it's a pretty neat trick.
The values of the GPIO_Pin_X constants are defined as bit positions within a 16-bit value, with pin 0 as the least significant bit and pin 15 as the most significant:
#define GPIO_Pin_0 ((uint16_t)0x0001) /*!< Pin 0 selected */
#define GPIO_Pin_1 ((uint16_t)0x0002) /*!< Pin 1 selected */
#define GPIO_Pin_2 ((uint16_t)0x0004) /*!< Pin 2 selected */
[...]
#define GPIO_Pin_15 ((uint16_t)0x8000) /*!< Pin 15 selected */
(This is why you can OR them together to write to multiple pins!)
If you need to index pins dynamically, you can do so using a macro like:
#define GPIO_Pin(x) ((uint16_t) 1 << (x))
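Used with the original call, that might look like:

GPIO_SetBits(GPIOE, GPIO_Pin(x)); /* drive pin number x (0-15) on port E */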
Buses are fixed width; you cannot change that hardware with software, and no, you cannot break a fixed-size register in a peripheral into smaller parts with software. You can possibly make it appear that way in software, but in the end the real code generated from the high-level source will still do read-modify-writes, or whatever is needed and supported, using bus- or register-sized transactions.
Trying to make things smaller, even making variables byte-sized instead of the native register size, ultimately creates more, and less efficient, code. So I am trying to understand why you would want to do such a thing anyway.
An STM32 I recently played with has bit set/clear registers, where writing a one sets or clears the output state of a pin; in the logic it simply sets or clears the flip-flop controlling that port output. In logic, the backend of a control can be implemented in smaller parts, but from software you cannot directly access those with misaligned or improperly sized transfers. Some companies instead require you, in software, to do a read-modify-write. So using an STM32 with these kinds of features (other brands may have similar ones, sometimes keyed off the address, sometimes, as here, using separate registers, one for setting and one for clearing) actually makes your code about as simple as it can be.
You are more than welcome to create as many defines as you want for combinations of GPIO pins, so that in your main code you only need one mnemonic instead of several OR'd together, if that is your desire.
You do not have to have a function for setting the bits: if you can get the compiler to reliably generate a store of the right size, with the right combination of bits captured in a single define, you can simplify the code into an a = b; situation.
My guess, from your vague question, is that a LUT is the optimal solution.
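A minimal sketch of that LUT, assuming the StdPeriph GPIO_Pin_x defines from above and a hypothetical wiring of the eight LEDs to pins 8-15:

static const uint16_t led_pin[8] = {
    GPIO_Pin_8,  GPIO_Pin_9,  GPIO_Pin_10, GPIO_Pin_11,
    GPIO_Pin_12, GPIO_Pin_13, GPIO_Pin_14, GPIO_Pin_15,
};

/* usage: GPIO_SetBits(GPIOE, led_pin[i]); */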
I am working on a project that involves programming 32-bit ARM micro-controllers. As in much embedded software work, setting and clearing bits is an essential and quite repetitive task. A masking strategy works well on smaller micros, but with 32-bit micro-controllers it is not really practical to write out a mask each time we need to set or clear a single bit.
Writing functions to handle this could be a solution; however, a function occupies memory, which is not ideal in my case.
Is there any better alternative to handle bit setting/clearing when working with 32 bit micros?
In C or C++, you would typically define macros for bit masks and combine them as desired.
/* widget.h */
#define WIDGET_FOO 0x00000001u
#define WIDGET_BAR 0x00000002u
/* widget_driver.c */
static volatile uint32_t *widget_control_register = (volatile uint32_t *)0x12346578; /* volatile: hardware register */
int widget_init (void) {
    *widget_control_register |= WIDGET_FOO;
    if (*widget_control_register & WIDGET_BAR) log(LOG_DEBUG, "widget: bar is set");
    return 0;
}
If you want to define the bit masks from the bit positions rather than as absolute values, define constants based on a shift operation (if your compiler doesn't optimize these constants, it's hopeless).
#define WIDGET_FOO (1u << 0)
#define WIDGET_BAR (1u << 1)
You can define macros to set bits:
/* widget.h */
#define WIDGET_CONTROL_REGISTER_ADDRESS ((volatile uint32_t*)0x12346578)
#define SET_WIDGET_BITS(m) (*WIDGET_CONTROL_REGISTER_ADDRESS |= (m))
#define CLEAR_WIDGET_BITS(m) (*WIDGET_CONTROL_REGISTER_ADDRESS &= ~(uint32_t)(m))
You can define functions rather than macros. This has the advantage of added type verifications during compilations. If you declare the function as static inline (or even just static) in a header, a good compiler will inline the function everywhere, so using a function in your source code won't cost any code memory (assuming that the generated code for the function body is smaller than a function call, which should be the case for a function that merely sets some bits in a register).
/* widget.h */
#define WIDGET_CONTROL_REGISTER_ADDRESS ((volatile uint32_t*)0x12346578)
static inline void set_widget_bits(uint32_t m) {
    *WIDGET_CONTROL_REGISTER_ADDRESS |= m;
}
static inline void clear_widget_bits(uint32_t m) {
    *WIDGET_CONTROL_REGISTER_ADDRESS &= ~m;
}
The other common idiom for registers providing access to individual bits or groups of bits is to define a struct containing bitfields for each register of your device. This can get tricky, and it is dependent on the C compiler implementation. But it can also be clearer than macros.
A simple device with a one-byte data register, a control register, and a status register could look like this:
typedef struct {
unsigned char data;
unsigned char txrdy:1;
unsigned char rxrdy:1;
unsigned char reserved:2;
unsigned char mode:4;
} COMCHANNEL;
#define CHANNEL_A (*(volatile COMCHANNEL *)0x10000100)
// ...
void sendbyte(unsigned char b) {
while (!CHANNEL_A.txrdy) /*spin*/;
CHANNEL_A.data = b;
}
unsigned char readbyte(void) {
while (!CHANNEL_A.rxrdy) /*spin*/;
return CHANNEL_A.data;
}
Access to the mode field is just CHANNEL_A.mode = 3;, which is a lot clearer than writing something like *CHANNEL_A_MODE = (*CHANNEL_A_MODE &~ CHANNEL_A_MODE_MASK) | (3 << CHANNEL_A_MODE_SHIFT);. Of course, the latter ugly expression would usually be (mostly) covered over by macros.
In my experience, once you established a style for describing your peripheral registers you are best served by following that style over the whole project. The consistency will have huge benefits for future code maintenance, and over the lifetime of a project that factor likely is more important that the relatively small detail of whether you adopted the struct and bitfields or macro style.
If you are coding for a target which has already set a style in its manufacturer provided header files and customary compiler toolchain, adopting that style for your own custom hardware and low level code may be best as it will provide the best match between manufacturer documentation and your coding style.
But if you have the luxury of establishing the style for your development at the outset, your compiler platform is well enough documented to permit you to reliably describe device registers with bitfields, and you expect to use the same compiler for the lifetime of the product, then that is often a good way to go.
You can actually have it both ways too. It isn't that unusual to wrap the bitfield declarations inside a union that describes the physical registers, allowing their values to be easily manipulated all bits at once. (I know I've seen a variation of this where conditional compilation was used to provide two versions of the bitfields, one for each bit order, and a common header file used toolchain-specific definitions to decide which to select.)
typedef struct {
unsigned char data;
union {
struct {
unsigned char txrdy:1;
unsigned char rxrdy:1;
unsigned char reserved:2;
unsigned char mode:4;
} bits;
unsigned char status;
};
} COMCHANNEL;
// ...
#define CHANNEL_A_STATUS_TXRDY 0x01
#define CHANNEL_A_STATUS_RXRDY 0x02
#define CHANNEL_A_MODE_MASK 0xf0
#define CHANNEL_A_MODE_SHIFT 4
// ...
#define CHANNEL_A (*(volatile COMCHANNEL *)0x10000100)
// ...
void sendbyte(unsigned char b) {
while (!CHANNEL_A.bits.txrdy) /*spin*/;
CHANNEL_A.data = b;
}
unsigned char readbyte(void) {
while (!CHANNEL_A.bits.rxrdy) /*spin*/;
return CHANNEL_A.data;
}
Assuming your compiler understands the anonymous union, you can simply refer to CHANNEL_A.status to get the whole byte, or CHANNEL_A.bits.mode to refer to just the mode field.
There are some things to watch for if you go this route. First, you have to have a good understanding of structure packing as defined in your platform. The related issue is the order in which bit fields are allocated across their storage, which can vary. I've assumed that the low order bit is assigned first in my examples here.
There may also be hardware implementation issues to worry about. If a particular register must always be read and written 32 bits at a time, but you have it described as a bunch of small bit fields, the compiler might generate code that violates that rule and accesses only a single byte of the register. There is usually a trick available to prevent this, but it will be highly platform dependent. In this case, using macros with a fixed sized register will be less likely to cause a strange interaction with your hardware device.
These issues are very compiler vendor dependent. Even without changing compiler vendors, #pragma settings, command line options, or more likely optimization level choices can all affect memory layout, padding, and memory access patterns. As a side effect, they will likely lock your project to a single specific compiler toolchain, unless heroic efforts are used to create register definition header files that use conditional compilation to describe the registers differently for different compilers. And even then, you are probably well served to include at least one regression test that verifies your assumptions so that any upgrades to the toolchain (or well-intentioned tweaks to the optimization level) will cause any issues to get caught before they are mysterious bugs in code that "has worked for years".
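For example, one cheap compile-time regression test of the layout assumed above (C11) could be:

/* fails the build if the compiler ever stops packing COMCHANNEL into two bytes */
_Static_assert(sizeof(COMCHANNEL) == 2, "COMCHANNEL layout changed");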
The good news is that the sorts of deep embedded projects where this technique makes sense are already subject to a number of toolchain lock in forces, and this burden may not be a burden at all. Even if your product development team moves to a new compiler for the next product, it is often critical that firmware for a particular product be maintained with the very same toolchain over its lifetime.
If you use a Cortex-M3, you can use bit-banding.
Bit-banding maps a complete word of memory onto a single bit in the bit-band region. For example, writing to one of the alias words will set or clear the corresponding bit in the bitband region.
This allows every individual bit in the bit-banding region to be directly accessible from a word-aligned address using a single LDR instruction. It also allows individual bits to be toggled from C without performing a read-modify-write sequence of instructions.
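A hedged sketch of the alias-address arithmetic from the Cortex-M3 technical reference manual (the peripheral bit-band region is assumed; the SRAM region uses bases 0x20000000/0x22000000 instead):

#include <stdint.h>

/* word in the alias region that controls bit 'bit' of the peripheral register at 'addr' */
#define BITBAND_PERIPH(addr, bit) \
    (*(volatile uint32_t *)(0x42000000u + (((uint32_t)(addr) - 0x40000000u) * 32u) + ((bit) * 4u)))

/* usage: BITBAND_PERIPH(0x40021014u, 9) = 1;  -- sets bit 9 without a read-modify-write */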
If you have C++ and a decent compiler available, then something like QFlags is a good idea. It gives you a type-safe interface to bit flags.
It is likely to produce better code than using bitfields in structures, since the bitfields can only be changed one at a time and will likely translate to at least one load/modify/store per each changed bitfield. With a QFlags-like approach, you can get one load/modify/store per each or-assign or and-assign statement. Note that the use of QFlags doesn't require the inclusion of the entire Qt framework. It's a stand-alone header file (after minor tweaks).
At the driver level setting and clearing bits with masks is very common and sometimes the only way. Besides, it's an extremely quick operation; only a few instructions. It may be worthwhile to set up a function that can clear or set certain bits for readability and also reusability.
It's not clear what type of registers you are setting and clearing bits in but in general there are two cases you have to worry about in embedded systems:
Setting and clearing bits in a read/write register
If you want to change a single bit (or a handful of bits) in a read and write register you will first have to read the register, set or clear the appropriate bit using masks and whatever else to get the correct behavior, and then write back to the same register. That way you don't change the other bits.
Writing to separate Set and Clear registers (common in ARM micros)
Sometimes there are separate Set and Clear registers. You can write just a single bit to the clear register and it will clear that bit. For instance, if you want to clear bit 9 of a register, just write (1<<9) to its clear register. You don't have to worry about modifying the other bits. Similarly for the set register.
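A hedged sketch of both cases; the second follows the STM32 GPIO BSRR convention, where the low half-word sets pins and the high half-word resets them (both addresses are illustrative, so check your datasheet):

#include <stdint.h>

#define PORT_DATA (*(volatile uint32_t *)0x40020014u) /* plain read/write data register */
#define PORT_BSRR (*(volatile uint32_t *)0x40020018u) /* separate set/reset register */

static void demo(void)
{
    PORT_DATA |= (1u << 9);        /* case 1: read-modify-write preserves the other bits */
    PORT_BSRR  = (1u << 9);        /* case 2: set pin 9, no read needed */
    PORT_BSRR  = (1u << (9 + 16)); /* case 2: clear pin 9, no read needed */
}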
You can set and clear bits with a function that takes up as much memory as doing it with a mask:
#define SET_BIT(variableName, bitNumber) ((variableName) |= (0x00000001u << (bitNumber)))
#define CLR_BIT(variableName, bitNumber) ((variableName) &= ~(0x00000001u << (bitNumber)))
int myVariable = 12;
SET_BIT(myVariable, 0); // myVariable now equals 13
CLR_BIT(myVariable, 2); // myVariable now equals 9
These macros will produce exactly the same assembler instructions as a mask.
Alternatively, you could do this:
#define BIT(n) (0x00000001u << (n))
#define NOT_BIT(n) (~(0x00000001u << (n)))
int myVariable = 12;
myVariable |= BIT(4); //myVariable now equals 28
myVariable &= NOT_BIT(3); //myVariable now equals 20
myVariable |= BIT(5) |
BIT(6) |
BIT(7) |
BIT(8); //myVariable now equals 500
I thought that on an AVR 8-bit microcontroller the word size was 8 bits / 1 byte? But the datasheet states that most AVR processors have a 16-bit word, without saying whether this specific processor does. It is weird to state something general in a device-specific datasheet.
And what do "8-bit" and "32-bit" mean in an MCU's description, if not the word size?
If the word size is 2 bytes, then this is atomic in C, right?
U16 Position;
Position = 1000;
But if the word is 1 byte, should I disable interrupts when writing to this variable (an interrupt handler uses it)?
How slow is it to disable interrupts?
The traditional AVR family (i.e. ATtiny, ATmega, ATxmega; not the AVR32) are 8-bit MCUs working on 8-bit registers/accumulators, though there are a few 16-bit instructions, such as those dealing with pointers through register pairs.
Unfortunately, there is no universally accepted definition of what a "word" is. In this context I suspect the author is simply referring to a 16-bit value as a word, as opposed to an 8-bit byte or a 32-bit double-word.
So, no, you cannot count on a 16-bit variable being accessed atomically. Thankfully some of the most important I/O registers, such as timers, where this matters have internal latches to hide the fact but you do need to be careful with RAM variables shared with interrupts.
Temporarily disabling interrupts is quite fast, a cycle each for the CLI/SEI instructions. One gotcha with certain compilers (ImageCraft comes to mind) is that using inline assembly like this in a function may disable optimizations so the actual cost can be somewhat higher. Consider disabling only the contentious interrupt in question to avoid this issue and to reduce latency.
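With avr-gcc/avr-libc, the usual idiom wraps the access in ATOMIC_BLOCK, which saves SREG, executes CLI, and restores the previous interrupt state afterwards; a minimal sketch:

#include <stdint.h>
#include <util/atomic.h>   /* avr-libc */

volatile uint16_t Position;

void set_position(uint16_t p)
{
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
        Position = p;      /* two single-byte stores, now safe from ISRs */
    }
}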
Beware that, unlike on some other MCUs, atomic bit access is normally restricted to a small subset of registers in the lowest I/O port range, typically a few PORTs and the general-purpose I/O registers.