How is this #define supposed to be used?

I found the following #define in a header file for an NXP processor:
/*! @name GLOBAL - LPUART Global Register */
/*! @{ */
#define LPUART_GLOBAL_RST_MASK (0x2U)
#define LPUART_GLOBAL_RST_SHIFT (1U)
/*! RST - Software Reset
* 0b0..Module is not reset.
* 0b1..Module is reset.
*/
#define LPUART_GLOBAL_RST(x) (((uint32_t)(((uint32_t)(x)) << LPUART_GLOBAL_RST_SHIFT)) & LPUART_GLOBAL_RST_MASK)
and I wonder how it is supposed to be used. Any explanation will be greatly appreciated.

To clarify how NXP sets up its defines in the SDK:
Pretty much every register gets a "set", a bitmask, and a shift define for every field in the register. The shift isn't really useful generally, but it's used in their "set" define.
If you just want to set (or clear) a single-bit field in a register, it's sufficient to OR the mask in (to set it) or AND with the mask's complement (to clear it), e.g.
LPUART0->GLOBAL |= LPUART_GLOBAL_RST_MASK; // set
LPUART0->GLOBAL &= ~LPUART_GLOBAL_RST_MASK; // clear
Sometimes, it's more useful to use the "set" macro, typically when the data encompasses more than one bit. It's "safer" in that it won't let you set bits that you aren't supposed to.
LPUART0->GLOBAL |= LPUART_GLOBAL_RST(1); // Sets the value of the reset field to 1.
LPUART0->GLOBAL |= LPUART_GLOBAL_RST(0xFF); // Still just sets the value of the reset field to 1 and doesn't break anything else.
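Note that |= can only set bits; it cannot clear them. If the field might already hold a nonzero value, the usual pattern (a sketch using the same defines) is to clear the field with its mask first, then OR in the new value:
LPUART0->GLOBAL = (LPUART0->GLOBAL & ~LPUART_GLOBAL_RST_MASK) | LPUART_GLOBAL_RST(1); // clear the field, then write the new value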
It's also useful to use the full set macros when you're setting all bits of a register at once: that way, you explicitly set every bit to 1 or 0.
For example, here's some code that sets up my ADC0 CFG1 register on a KL27:
ADC0->CFG1 = ADC_CFG1_ADLPC( 0 ) |
ADC_CFG1_ADIV( 3 ) |
ADC_CFG1_ADLSMP_MASK |
ADC_CFG1_MODE( 1 ) |
ADC_CFG1_ADICLK( 0 );
Without needing to think about things, I'm setting ADIV to 3 (both bits set), ADLSMP to 1, and the mode to 1, while explicitly setting all other bits to 0. Note that the "set" macros whose value is 0 do nothing, and are merely shown for better readability.
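The same defines also make it easy to read a field back out. A sketch, assuming the usual NXP _MASK/_SHIFT naming exists for ADIV:
uint32_t adiv = (ADC0->CFG1 & ADC_CFG1_ADIV_MASK) >> ADC_CFG1_ADIV_SHIFT; /* mask the field, then shift it down to bit 0 */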

Related

Setting priority of interrupt in NVIC_PR register

I believe I've understood the concept of interrupts and how to initialize them, but I've seen in various places that they would first AND the NVIC_PRI register with a mask before OR-ing in the priority bits. For example, in TM4C123 GPIO Port Interrupt Programming there was this line towards the end:
NVIC_PRI7_R = (NVIC_PRI7_R & 0xFF00FFFF) | 0x00A00000
The purpose is to set the Port F interrupt to a priority of 5 (by setting the top 3 bits [23:21] to the value 5, so the nibble [23:20] is written as 0b1010, i.e. 0xA). So, why can't I just do this instead?
NVIC_PRI7_R |= 0x00A00000
What is & 0xFF00FFFF doing here? Why do I want to clear bits [23:16] before OR-ing priority bits [23:21]? The bits between [20:16] are not used anyway.
If bits [23:21] were already all set to 1, then NVIC_PRI7_R |= 0x00A00000 would do nothing: OR can only set bits, never clear them, so they would all stay 1s instead of taking the value you want. That is why you clear them before OR-ing in your value. The & with 0xFF00FFFF ensures that only bits [23:16] are affected; all other bits are left as they were.
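A small host-side demonstration of the difference (plain C, nothing hardware-specific; the register is simulated with a variable):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t reg = 0x00E00000u; /* pretend bits [23:21] already hold priority 7 */

    uint32_t or_only = reg | 0x00A00000u;                 /* OR alone */
    uint32_t rmw     = (reg & 0xFF00FFFFu) | 0x00A00000u; /* clear first, then OR */

    printf("OR only: 0x%08X (priority still 7)\n", (unsigned)or_only);
    printf("RMW:     0x%08X (priority now 5)\n", (unsigned)rmw);
    return 0;
}

OR alone yields 0x00E00000, leaving the old priority in place; clearing first yields 0x00A00000, the intended priority of 5.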

Code that hugs commands between save and retrieve mechanism

Recently I have been implementing the internal peripherals of the STM32F0 family of MCUs in a C library.
While looking at how ST did the same in their HAL library, I came across this piece of code...
1 /*--------------------- EXTI Mode Configuration ------------------------*/
2 /* Configure the External Interrupt or event for the current IO */
3 if((GPIO_Init->Mode & EXTI_MODE) == EXTI_MODE)
4 {
5 /* Enable SYSCFG Clock */
6 __HAL_RCC_SYSCFG_CLK_ENABLE();
7 temp = SYSCFG->EXTICR[position >> 2];
8 CLEAR_BIT(temp, (0x0FU) << (4U * (position & 0x03U)));
9 SET_BIT(temp, (GPIO_GET_INDEX(GPIOx)) << (4U * (position & 0x03U)));
10 SYSCFG->EXTICR[position >> 2] = temp;
11 /* Clear EXTI line configuration */
12 temp = EXTI->IMR;
13 CLEAR_BIT(temp, (uint32_t)iocurrent);
14 if((GPIO_Init->Mode & GPIO_MODE_IT) == GPIO_MODE_IT)
15 {
16 SET_BIT(temp, iocurrent);
17 }
18 EXTI->IMR = temp;
What I am trying to understand is why the paired operations in lines 7-10 and 12-18 are used. Why are they there? Why store a register's content in a temporary, modify it, and then write it back, instead of operating on the register directly?
Is it about multitasking and race conditions? I do not fully understand!
This is used for speed optimization and to avoid unwanted states in memory-mapped registers.
Imagine some example memory-mapped variable EXT->ABC. Imagine such exaggerated example situation:
We have some hardware memory address that is accessible via memory-mapped EXT->ABC variable
We know that bit 1 is set and bit 2 is cleared in EXT->ABC
We want to clear bit 1 and set bit 2 and leave all other bits unchanged in EXT->ABC
Hardware doesn't allow bit 1 and bit 2 in EXT->ABC register to be both cleared or both set. Such states are forbidden and may result in undefined behavior (that means in ANY behavior, for example software reset).
Reading and writing to EXT->ABC register is very very slow.
If we operate on EXT->ABC variable directly:
CLEAR_BIT(EXT->ABC, 1); // expands to: EXT->ABC = EXT->ABC & ~1;
SET_BIT(EXT->ABC, 2); // expands to: EXT->ABC = EXT->ABC | 2;
this will result in a forbidden state between the two lines, as bit 1 and bit 2 will both be cleared. Also, in these 2 lines, EXT->ABC is accessed 4 times.
To resolve this, we need to store the value of the EXT->ABC register somewhere, modify the needed bits, and then write the modified value back to the EXT->ABC register. So we will:
uint32_t temp;
temp = EXT->ABC; // store
CLEAR_BIT(temp, 1);
SET_BIT(temp, 2);
EXT->ABC = temp; // write
We read once from the memory-mapped variable, modify the value however we want, and then write it back once. That way the EXT->ABC register is touched only twice, and the code is safe: no undefined/forbidden/unwanted intermediate states occur.
For example, as for lines 7-10: to access SYSCFG->EXTICR[position >> 2] you need multiple CPU operations (calculate position >> 2, multiply by the element size of SYSCFG->EXTICR, add the result to the SYSCFG->EXTICR base address, dereference). Maybe it was the developer's intention to make these lines execute faster. Also, you need to clear the bits 0x0F << (4 * (position & 0x03)) before writing (GPIO_GET_INDEX(GPIOx)) << (4 * (position & 0x03)).
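For reference, the CLEAR_BIT and SET_BIT helpers used in this pattern are plain read-modify-write macros; in ST's CMSIS device headers they are defined along these lines:

#define SET_BIT(REG, BIT)   ((REG) |= (BIT))
#define CLEAR_BIT(REG, BIT) ((REG) &= ~(BIT))
#define READ_BIT(REG, BIT)  ((REG) & (BIT))

Applied to a local temp they only touch ordinary memory; just the initial read and the final assignment touch the peripheral register.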
It is to make the operations "atomic". All changes to the registers in one write operation.
I can't see anything strange:
Lines 7-10 and 12-18 copy the contents of a register to a local variable, do some operations on it, and then store the result back to the original register.
The reasons for doing this can be multiple, but my guess is that the author of the code did not want the MCU to be in an intermediate state while tinkering with the registers.
For example, what happens to the core after line 8 but before line 9?

How does this flush USART c code work?

So I have old code I am looking at that I am supposed to update for a new microcontroller. In the old code there is a function to flush the USART in case there is junk on it from start-up. The code is below:
#define RXC 7
#define RX_COMPLETE (1<<RXC)
void UART1_FLUSH(void){
unsigned char dummy;
while ( UCSR1A & RX_COMPLETE ) dummy = UDR1;
}
Now, from what I understand, the while loop keeps going as long as there is something to read from the USART's UDR1 register; the value is stored in dummy since we do not need it. What I need explained is why the while loop works the way it does.
Looking up UCSRnA in http://upcommons.upc.edu/pfc/bitstream/2099.1/10997/4/Annex3.pdf, that code simply waits until bit 7 ("RXCn: USART Receive Complete") in UCSR1A is off.
That document says about bit 7: "This flag bit is set when there are unread data in the receive buffer and cleared when the receive buffer is empty."
(1<<RXC) is the numerical value of bit 7. A bitwise AND (the &) between it and the value read from UCSR1A results in 0 (if the bit is off) or (1<<RXC) (if the bit is on). Since (1<<7) is 128 and that is not zero, the loop will be entered when the bit is set.
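Put together, the loop reads as "while the receive-complete bit is set, read and discard one byte". A lightly annotated version of the same function (register names as in the question's AVR-style USART; the (void)dummy line is only there to silence a "set but not used" warning):

void UART1_FLUSH(void)
{
    unsigned char dummy;
    /* RX_COMPLETE stays set while unread data sits in the receive buffer;
       each read of UDR1 pops one byte, and the flag clears once empty. */
    while (UCSR1A & RX_COMPLETE) {
        dummy = UDR1;
    }
    (void)dummy;
}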

Explain the C code instruction

The code below is used for programming microcontrollers. I want to know what the line below is doing. I know that '|' is OR and '&' is AND, but what is the whole line doing?
lcd_port = (((dat >> 4) & 0x0F)|LCD_EN|LCD_RS);
It's hard to put into context since we don't know what dat contains, but we can see that:
The data is right-shifted by 4 bits, so 11111111 becomes 00001111, for instance.
That value is AND'ed with 0x0F. This is a common trick to remove unwanted bits, since b & 1 = b and b & 0 = 0. Think of your number as a sequence of bits; here's a 2-byte example:
  0011010100111010
& 0000000000001111
= 0000000000001010
Now the LCD_EN and LCD_RS flags are OR'ed in. Again, this is a common binary trick, since b | 1 = 1 and b | 0 = b, so you can add flags but not remove them. So if, say, LCD_EN = 0x01 and LCD_RS = 0x02:
  0000000000001010
| 0000000000000011
= 0000000000001011
Hope that's clearer for you.
Some guesses, as you'll probably need to find the chip datasheets to confirm this:
lcd_port is probably a variable that directly maps to a piece of memory-mapped hardware - likely an alphanumeric LCD display.
The display probably takes data as four-bit 'nibbles' (hence the shift/and operations) and the higher four bits of the port are control signals.
LCD_EN is probably an abbreviation for LCD ENABLE - a control line used on the port.
LCD_RS is probably an abbreviation for LCD READ STROBE (or LCD REGISTER SELECT) - another control line used on the port. Setting these bits while writing to the port probably tells the port the kind of operation to perform.
I wouldn't be at all surprised if the hardware in use was a Hitachi HD44780 or some derivative.
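If that guess is right, a full byte write in 4-bit mode would send the high nibble and then the low nibble, pulsing EN for each. A hypothetical sketch (lcd_port, LCD_EN and LCD_RS as named in the question; the HD44780's timing delays are omitted):

void lcd_write_byte(unsigned char dat, unsigned char is_data)
{
    unsigned char rs = is_data ? LCD_RS : 0;

    lcd_port = ((dat >> 4) & 0x0F) | LCD_EN | rs; /* high nibble, EN high */
    lcd_port &= (unsigned char)~LCD_EN;           /* falling edge of EN latches it */

    lcd_port = (dat & 0x0F) | LCD_EN | rs;        /* low nibble, EN high */
    lcd_port &= (unsigned char)~LCD_EN;
}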
It appears to be setting some data and flags on lcd_port. The first part applies the mask 0x0F to (dat >> 4) (dat shifted right by 4 bits), and then the LCD_EN and LCD_RS flags are OR'ed in.
It is shifting the variable dat four bits to the right, then masking the value with 15 (0x0F). This results in a value ranging from 0 to 15 (the four left-most bits of the byte, moved down). The result is binary OR'd with the LCD_EN and LCD_RS flags.
This code shifts the bits of dat 4 places to the right and then uses & 0x0F to keep only the 4 least significant bits of the result. It then ORs that value with LCD_EN and LCD_RS and assigns the result to lcd_port.

How to get value from 15th pin of 32bit port in ARM?

I am using an IC, the DS1620, to read 1-bit serial data coming in on a single line. I need to read this data using one of the ports of an ARM microcontroller (LPC2378). ARM ports are 32-bit. How do I get this value into a 1-bit variable?
Edit: In other words I need direct reference to a port pin.
There are no 1-bit variables, but you can isolate a particular bit, for example:
uint32_t original_value = whatever();
uint32_t bit15 = (original_value >> 15) & 1; /*bit15 now contains either a 1 or a 0 representing the 15th bit */
Note: I don't know if you were counting bit numbers starting at 0 or 1, so the >> 15 may be off by one, but you get the idea.
The other option is to use bit fields, but that gets messy and IMO is not worth it unless every bit in the value is useful in some way. If you just want one or two bits, shifting and masking is the way to go.
Overall, this article may be of use to you.
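For completeness, the bit-field option mentioned above might look like the sketch below; the names are hypothetical, and since bit allocation order within a word is implementation-defined, this is not portable:

#include <stdint.h>

union port_bits {
    uint32_t word;
    struct {
        uint32_t low   : 15; /* bits 0..14, assuming LSB-first allocation */
        uint32_t bit15 : 1;  /* the bit of interest */
        uint32_t high  : 16; /* bits 16..31 */
    } f;
};

This non-portability is one more reason shifting and masking is usually preferred for a single bit.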
For your CPU, the answer from Evan Teran should be used. I just wanted to mention the bit-band feature of some other ARM CPUs, like the Cortex-M3: for some regions of RAM/peripherals, every bit is mapped to a separate address for easy access.
See http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0337e/Behcjiic.html for more information.
It's simple if you can access the port register directly (I don't have any experience with ARM): just bitwise-AND it with the binary mask that corresponds to the bit you want:
var = (PORT_REGISTER & 0x00008000);
Now var contains either 0 if the 15th bit is '0' or 0x00008000 if the 15th bit is '1'.
Also, you can shift it if you want to have either '0' or '1':
var = ((PORT_REGISTER & 0x00008000) >> 15);
The header file(s) that come with your compiler will contain declarations for all of the microcontroller's registers, and for the bits in those registers.
For this example, let's pretend that the port input register is called PORTA, and the bit you want has a mask defined for it called PORTA15.
Then to read the state of that pin:
PinIsSet = (PORTA & PORTA15) == PORTA15;
Or equivalently, using the ternary operator:
PinIsSet = (PORTA & PORTA15) ? 1 : 0;
As a general point, refer to the reference manual for what all the registers and bits do. Also, look at some examples. (This page on the Keil website contains both, and there are plenty of other examples on the web.)
In the LPC2378 (as in the rest of the LPC2xxx microcontroller family), I/O ports live in system memory, so you need to declare some macros like this:
#define DALLAS_PORT (*(volatile unsigned long int *)(0xE0028000)) /* Port 0 data register */
#define DALLAS_DDR (*(volatile unsigned long int *)(0xE0028008)) /* Port 0 data direction reg */
#define DALLAS_PIN (1UL<<15)
Please note that 0xE0028000 is the address for the data register of port0, and 0xE0028008 is the data direction register address for port0. You need to modify this according to the port and bit used in your app.
After that, the code or macros for writing a 1, writing a 0, and reading should look something like this:
#define set_dqout() (DALLAS_DDR&=~DALLAS_PIN) /* Let the pull-up force one, putting I/O pin in input mode */
#define reset_dqout() (DALLAS_DDR|=DALLAS_PIN,DALLAS_PORT&=~DALLAS_PIN) /* force zero putting the I/O in output mode and writing zero on it */
#define read_dqin() (DALLAS_DDR&=~DALLAS_PIN,((DALLAS_PORT & DALLAS_PIN)!= 0)) /* put i/o in input mode and test the state of the i/o pin */
I hope this can help.
Regards!
If testing for bits, it is good to keep in mind the operator precedence rules of C, e.g.
if( port & 0x80 == 0x80 )
can result in unexpected behaviour: == binds tighter than &, so you just wrote
if( port & (0x80 == 0x80) )
but probably meant
if( (port & 0x80) == 0x80 )
