How the delay mechanism works in embedded EFM32 - C

I can't see how the following code is producing a delay.
We have a SysTick interrupt, and I don't know what it means.
What is the meaning of SysTick_Config(CMU_ClockFreqGet(cmuClock_CORE) / 1000)?
Thanks.
#include <stdint.h>
#include <stdbool.h>
#include "em_device.h"
#include "em_chip.h"
#include "em_cmu.h"
#include "em_emu.h"
#include "bsp.h"
#include "bsp_trace.h"

/* Millisecond tick counter, incremented by the SysTick interrupt.
   volatile because it is modified from an interrupt handler. */
static volatile uint32_t msTicks = 0;

void SysTick_Handler(void)
{
  msTicks++; /* increment counter necessary in Delay() */
}

/* Busy-wait for dlyTicks milliseconds. */
void Delay(uint32_t dlyTicks)
{
  uint32_t curTicks;

  curTicks = msTicks;
  while ((msTicks - curTicks) < dlyTicks) ;
}

int main(void)
{
  /* Chip errata */
  CHIP_Init();

  CMU_ClockEnable(cmuClock_GPIO, true);

  /* Setup SysTick Timer for 1 msec interrupts */
  if (SysTick_Config(CMU_ClockFreqGet(cmuClock_CORE) / 1000)) {
    while (1) ; /* configuration failed: hang here */
  }
}

The EFM32 is an ARM Cortex-M based device which has a hardware timer/counter called SYSTICK. SYSTICK counts down at a rate related to the core clock frequency of the processor; in this case that frequency, in counts per second, is returned by CMU_ClockFreqGet(cmuClock_CORE).
The reload value of SYSTICK can be set, and here that is done by SysTick_Config(). When the count reaches zero an interrupt is generated and the counter is reloaded. By setting the reload value to the SYSTICK frequency divided by 1000, you get an interrupt every millisecond.
An interrupt causes an associated handler to be called asynchronously to the normal code flow (the while loop in main in this case). So here SysTick_Handler() is called every 1 ms, incrementing msTicks (a count of elapsed milliseconds).
The Delay() function polls msTicks until dlyTicks have elapsed. curTicks is a snapshot of the value of msTicks at the start of the delay, so the expression (msTicks - curTicks) < dlyTicks becomes false after dlyTicks milliseconds (the actual delay may be up to 1 ms shorter, because msTicks is incremented asynchronously to any Delay() call).
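For illustration, here is a minimal sketch of how Delay() might be used from main() to blink an LED, assuming the Silicon Labs BSP LED helpers (BSP_LedsInit() and BSP_LedToggle()) are available; this loop is not part of the original question:
/* Hypothetical usage sketch: blink LED 0 at roughly 1 Hz. */
BSP_LedsInit();
while (1) {
  BSP_LedToggle(0);
  Delay(500); /* 500 ms on, 500 ms off */
}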

SysTick is the short name for the System Timer. It is a timer that generates a periodic interrupt. When the interrupt occurs, SysTick_Handler is called, which increases the variable msTicks by 1. Since the timer is configured to interrupt every millisecond (more about that below), msTicks represents the number of milliseconds since the microcontroller started.
Delay takes the current value of msTicks and loops (waits) until it has advanced by dlyTicks. The math in the function might look strange, but it is the proper way to do it overflow-safely.
SysTick_Config configures how frequently the system timer interrupt is triggered. This function takes the number of clock cycles between interrupts as its argument. To make the period 1 ms, the CPU core clock frequency (CMU_ClockFreqGet(cmuClock_CORE)) must be divided by 1000.
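A small example with assumed values (not from the original post) shows why the unsigned subtraction stays correct even when msTicks wraps around:
/* Illustrative values only: the snapshot was taken just before a 32-bit wrap. */
uint32_t curTicks = 0xFFFFFFFEu;        /* msTicks at the start of the delay      */
uint32_t msTicks  = 0x00000001u;        /* msTicks now, after wrapping past zero  */
uint32_t elapsed  = msTicks - curTicks; /* 3, thanks to modulo-2^32 arithmetic    */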

Related

Why is there a delay between writing the DR register and the data actually appearing on the UART on an STM32F103CB?

I'm curious about the delay between the two events mentioned in the title. I toggled an I/O pin when I wrote data into UART->DR, and the delay varies from 3 microseconds to tens of microseconds.
int main(void)
{
  /* initial code generated by STM32CubeMX */
  HAL_Init();
  SystemClock_Config();
  MX_GPIO_Init();
  MX_USART1_UART_Init();

  while (1)
  {
    HAL_Delay(50);
    if (USART_GetFlagStatus(&huart1, USART_SR_TXE) == SET)
    {
      USART_SendData(&huart1, 'F');
    }
  }
}

void USART_SendData(UART_HandleTypeDef *huart, uint16_t Data)
{
  assert_param(IS_USART_ALL_PERIPH(huart->Instance));
  assert_param(IS_USART_DATA(Data));
  GPIOB->BSRR = GPIO_PIN_1;                                /* set an I/O pin for debugging */
  GPIOB->BSRR = (uint32_t)GPIO_PIN_1 << 16u;               /* reset the pin */
  huart->Instance->DR = (uint8_t)(Data & (uint8_t)0x00FF); /* send data (write DR) */
}
I'm not sure whether the timing jitter is related to the baud rate of 9600 (104 microseconds per bit).
Shouldn't the data appear immediately when the DR register is written?
And why isn't the delay time always the same (or close to it)?
Shouldn't the data appear immediately when the DR register is written?
Not necessarily.
You are only showing us high-level language source code.
Have you looked at the actual instruction trace to determine the instruction time between these operations?
How do you ensure that no interrupt is serviced between these operations?
And why isn't the delay time always the same (or close to it)?
Apparently that depends on the design of the UART.
You report that the baud rate is 9600, and (as expected) the intervals for each bit appear to be slightly longer than 100 microseconds.
The fact that the observed latency is less than one bit interval is significant.
The typical UART uses a clock (aka the baudrate generator) that is 16 times faster than the configured baudrate.
This faster-than-necessary clock is needed to oversample the receive signal, which can arrive at any time; it is asynchronous communication, after all.
For the transmit clock, the baudrate generator is divided down to the nominal baudrate.
So for transmission, that clock quantizes in time when each bit (of the frame) will start (and end) its transmission.
Since the write to the UART TxD data register is performed by the CPU, and that operation is not synchronized with the transmit clock, you should therefore expect a random delay of up to one bit interval before the start bit of the frame appears on the wire.
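To put numbers on this (using only the figures from the question): at 9600 baud one bit lasts 1/9600 s ≈ 104 µs, so the start bit can appear anywhere from almost immediately after the DR write up to roughly 104 µs later; the reported latency of 3 µs to tens of µs falls inside that one-bit window.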

Implementing a delay using timers in STM32

Simply, I want to implement a delay function using STM32 timers, like the "Normal mode" one in AVR microcontrollers. Can anybody help? I just can't find it in the STM32 datasheet! It only supports PWM, input capture, output compare and one-pulse mode output!
N.B.: I forgot to mention that I'm using an STM32F401 microcontroller.
You have a special timer for exactly this purpose, called SysTick. Set it to overflow every 1 ms. In its handler, increment a counter:
static volatile uint32_t counter;

void SysTick_Handler(void)
{
  counter++; /* one tick per millisecond */
}

inline uint32_t __attribute__((always_inline)) GetCounter(void)
{
  return counter;
}

void Delay(uint32_t ms)
{
  uint32_t tickstart = GetCounter();

  /* Unsigned subtraction keeps this correct even if the counter wraps around. */
  while ((GetCounter() - tickstart) < ms);
}
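For completeness, a minimal sketch of how the 1 ms tick can be set up with the standard CMSIS call (assuming SystemCoreClock holds the core clock frequency, as it does in CMSIS-based projects):
/* Configure SysTick to interrupt every millisecond (CMSIS). */
if (SysTick_Config(SystemCoreClock / 1000)) {
  while (1); /* reload value out of range: handle the error */
}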
If you want to be able to delay by periods shorter than milliseconds, you can set up a timer to auto-reload and use the internal clock. The internal clock frequency fed to the timer is shown in the CubeMX clock tab. On my NUCLEO-F446RE most timers get 84 MHz or 42 MHz; Timer 3 gets 42 MHz by default.
So if I set the prescaler to divide by 42 and the count period to maximum (0xFFFF if 16-bit), I get a free-running counter that increments every microsecond. I can then, using a property of two's-complement maths, simply subtract the old count from the new to get the elapsed period.
void wait_us(uint16_t delay) {
  uint16_t t1 = htim3.Instance->CNT;                  /* counter snapshot (1 µs per count) */
  while ((uint16_t)(htim3.Instance->CNT - t1) < delay) {
    asm ("\t nop");
  }
}
This is an old trick from PIC18 C coding.
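A minimal setup sketch to go with wait_us(), assuming a CubeMX-generated htim3 handle and the 42 MHz timer clock mentioned above; the values are illustrative, not taken from the original answer:
/* Illustrative init: make TIM3 count once per microsecond, free-running. */
htim3.Instance           = TIM3;
htim3.Init.Prescaler     = 42 - 1;     /* 42 MHz / 42 = 1 MHz -> 1 µs per count */
htim3.Init.CounterMode   = TIM_COUNTERMODE_UP;
htim3.Init.Period        = 0xFFFF;     /* maximum period for a 16-bit counter   */
htim3.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
HAL_TIM_Base_Init(&htim3);
HAL_TIM_Base_Start(&htim3);            /* start counting; no interrupt needed   */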

Translating 8051 microcontroller C code

void extrint (void) interrupt 0            // external interrupt to detect the heart pulse
{
    bt = tick;                             // number of ticks is captured
    tick = 0;                              // reset for next counting
}

void timer0 (void) interrupt 1 using 1     // Timer 0 for one-second timing
{
    TH0 = 0xdc;                            // reload value for sec/100 at an 11.0592 MHz crystal
    sec100++;                              // incremented every sec/100 at an 11.0592 MHz crystal
    tick++;                                // counts the period of the incoming pulse in sec/100
    if (tick >= 3500)
    { tick = 0; }                          // ticks are limited to keep the calculation valid
    if (sec100 >= 100)                     // 1 sec = sec100 * 100
    {
        sec++;
        sec100 = 0;
    }
}
Can somebody explain to me what the above code means and does? It was written for an 8051 microcontroller.
I got it from here:
http://www.zembedded.com/heart-rate-beats-meter-with-microcontroller-at89c51-based-heartbeat-monitor/
Very hard to tell without context. My guess is the following:
The timer0 interrupt routine is called every 100th of a second. There it increments the tick counter, which is reset to 0 as soon as it reaches 3500. The sec variable appears to be a seconds counter, as it is incremented every 100th call of timer0 (which is called 100 times per second).
extrint appears to be called on some external event (here, a detected heart pulse). It just copies the current value of tick into bt (presumably for further processing) and resets tick to 0.
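As an illustration of how bt might then be used (an assumption about the rest of the project, not code from the linked article): since tick counts hundredths of a second between pulses, the heart rate follows directly from bt:
/* Hypothetical follow-up calculation: bt = hundredths of a second per beat. */
unsigned int bpm = 6000 / bt;   /* 60 s/min * 100 ticks/s / bt ticks per beat */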

Arduino Nano Timers

I want to know more about Arduino Nano timers.
What timers are there?
Do they produce interrupts?
What code would attach an interrupt handler to them?
How are delay() and delayMicroseconds() implemented?
Do they use timer interrupts? (If so, how can I have other code execute during this?)
Or do they repeatedly poll until a timer reaches a certain value?
Or do they increment a value X number of times?
Or do they do it another way?
The best way to think about the Arduino Nano timers is to think about the timers in the underlying chip: the ATmega328. It has three timers:
Timer 0: 8-bit, PWM on chip pins 11 and 12
Timer 1: 16-bit, PWM on chip pins 15 and 16
Timer 2: 8-bit, PWM on chip pins 17 and 5
All of these timers can produce two kinds of interrupts:
The "value matched" interrupt occurs when the timer value, which is added to every tick of the timer reaches a comparison value in the timer register.
The timer overflow interrupt occurs when the timer value reaches its maximum value
Unfortunately, there is no Arduino function to attach interrupts to timers. To use timer interrupts you will need to write slightly more low-level code. Basically, you will need to declare an interrupt routine something like this:
ISR(TIMER1_OVF_vect) {
...
}
This will declare a function to service timer1 overflow interrupt. Then you will need to enable the timer overflow interrupt using the TIMSK1 register. In the above example case this might look like this:
TIMSK1 |= (1<<TOIE1);
or
TIMSK1 |= _BV(TOIE1);
This sets the TOIE1 (generate timer1 overflow interrupt, please) flag in the TIMSK1 register. Assuming that your interrupts are enabled, your ISR(TIMER1_OVF_vect) will get called every time timer1 overflows.
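A minimal sketch of the whole sequence, assuming avr-libc and a 16 MHz part; the function name and prescaler choice are illustrative, not prescribed by the Arduino core:
#include <avr/interrupt.h>

// Overflow interrupt from Timer 1 in normal (non-PWM) mode.
ISR(TIMER1_OVF_vect) {
  // runs every 65536 * 256 / 16e6 s ≈ 1.05 s with the /256 prescaler below
}

void setupTimer1(void) {          // hypothetical helper name
  TCCR1A = 0;                     // normal mode, no PWM outputs
  TCCR1B = _BV(CS12);             // clock source = F_CPU / 256
  TIMSK1 |= _BV(TOIE1);           // enable the timer1 overflow interrupt
  sei();                          // enable global interrupts
}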
The Arduino delay() function looks as follows in the source code (wiring.c):
void delay(unsigned long ms)
{
  uint16_t start = (uint16_t)micros();

  while (ms > 0) {
    if (((uint16_t)micros() - start) >= 1000) {
      ms--;
      start += 1000;
    }
  }
}
So internally it uses the micros() function, which in turn relies on the timer0 count. The Arduino framework uses timer0 to count milliseconds; indeed, the timer0 count is where the millis() function gets its value.
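This also answers the "how can I have other code execute during this?" part: delay() itself busy-waits, but the timer0 interrupt keeps running, so you can poll millis() yourself instead of blocking. A minimal sketch (my own illustration, not from the Arduino sources):
// Non-blocking alternative to delay(): do other work while waiting.
unsigned long last = 0;

void loop() {
  if (millis() - last >= 1000) {   // one second elapsed?
    last += 1000;
    // ... do the periodic work here ...
  }
  // ... other code keeps running in between ...
}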
The delayMicroseconds() function, on the other hand, uses carefully timed processor operations to create the delay; the exact code depends on the processor and the clock speed, the most common building block being the nop (no operation) instruction, which takes exactly one clock cycle. The Arduino Nano uses a 16 MHz clock, and here's what the source code looks like for that case:
// For a one-microsecond delay, simply return. The overhead
// of the function call yields a delay of approximately 1 1/8 µs.
if (--us == 0)
return;
// The following loop takes a quarter of a microsecond (4 cycles)
// per iteration, so execute it four times for each microsecond of
// delay requested.
us <<= 2;
// Account for the time taken in the preceding commands.
us -= 2;
What we learn from this:
A 1 µs delay does nothing (the function-call overhead itself is the delay).
Longer delays are timed by a busy-wait loop; the left shift computes how many quarter-microsecond iterations are needed.
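A worked example with the numbers above (assuming the 16 MHz clock): for delayMicroseconds(5), --us leaves 4, us <<= 2 gives 16 loop iterations, and us -= 2 trims that to 14; at 4 cycles (0.25 µs) per iteration that is 3.5 µs of looping, with the call and setup overhead accounting for roughly the remaining 1.5 µs.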

Understanding timer and period of interrupts

I am having a hard time understanding some code I found for using a timer and interrupts on an ARM board I have. The timer basically toggles an LED every interrupt between on and off to make it flash.
void main(void) {
    /* Pin direction */
    led_init();

    /* timer setup */
    /* CTRL */
#define COUNT_MODE 1   /* Use rising edge of primary source */
#define PRIME_SRC  0xf /* Peripheral clock with 128 prescale (for 24 MHz = 187500 Hz) */
#define SEC_SRC    0   /* Don't need this */
#define ONCE       0   /* Keep counting */
#define LEN        1   /* Count until compare then reload with value in LOAD */
#define DIR        0   /* Count up */
#define CO_INIT    0   /* Other counters cannot force a re-initialization of this counter */
#define OUT_MODE   0   /* OFLAG is asserted while counter is active */

    *TMR_ENBL     = 0;      /* TMRS reset to enabled */
    *TMR0_SCTRL   = 0;
    *TMR0_CSCTRL  = 0x0040;
    *TMR0_LOAD    = 0;      /* Reload to zero */
    *TMR0_COMP_UP = 18750;  /* Trigger a reload at the end */
    *TMR0_CMPLD1  = 18750;  /* Compare one triggered reload level, 10 Hz maybe? */
    *TMR0_CNTR    = 0;      /* Reset count register */
    *TMR0_CTRL    = (COUNT_MODE<<13) |
                    (PRIME_SRC<<9)   |
                    (SEC_SRC<<7)     |
                    (ONCE<<6)        |
                    (LEN<<5)         |
                    (DIR<<4)         |
                    (CO_INIT<<3)     |
                    (OUT_MODE);
    *TMR_ENBL = 0xf;        /* Enable all the timers --- why not? */

    led_on();
    enable_irq(TMR);

    while(1) {
        /* Sit here and let the interrupts do the work */
        continue;
    };
}
Right now, the LED flashes at a rate I cannot determine. I'd like it to flash once per second. However, I do not understand the whole comparison and reloading.
Could somebody better explain this code?
As timers are a vendor- and part-specific feature (not a part of the ARM architecture), I can only give general guidance unless you mention which CPU or microcontroller you are dealing with.
Timers have several features:
A size, for instance 16 bits, which means they can count up or down to/from 65535.
A clock input, given as a clock frequency (perhaps from the CPU clock or an external crystal), and a prescaler which divides this clock frequency to another value (or divide by 1).
An interrupt on overflow - when the timer wraps back to 0, there is usually an option to trigger an interrupt.
A compare interrupt - when the timer reaches a set value it will issue an interrupt.
In your case, I can see that you are using the compare feature of your timer. By determining your timer clock input, and calculating new values for the prescalers and compare register, you should be able to achieve a 1 Hz rate.
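Working through the values already in the code: 24 MHz / 128 prescale = 187500 counts per second, and 187500 / 18750 = 10 compare events per second, so the interrupt fires at 10 Hz (an LED toggled in the handler therefore blinks at 5 Hz). A 1 Hz event needs 187500 counts between compare matches, or an equivalent combination of a larger prescaler and a smaller compare value if the compare register cannot hold 187500.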
Before trying to understand the code you found, first understand how a timer peripheral unit works, and then how you can configure its registers to get the desired output.
How does a timer peripheral unit work?
It is a hardware module embedded in the microcontroller alongside the CPU and other peripherals. All peripheral modules inside the microcontroller are synchronized to a common clock source. With reference to the code, the timer peripheral clock is 24 MHz, which is then prescaled by 128, so the timer runs at 187500 Hz. This frequency depends on the clock configuration and the oscillator.
The timer unit has a counter register that can count up to its bit-width, generally 8, 16 or 32 bits. Once counting is enabled, this counter counts up or down on the rising edge, the falling edge, or both edges of its clock. You choose whether to up-count (from 0 towards 255, for 8-bit) or down-count (from 255 towards 0), and on which clock edge to count.
At 187500 Hz, one cycle is 5.333 µs. If you count once per cycle (on the rising or falling edge) and the counter value reaches, say, 100 (up-counting), the total time elapsed is 5.333 µs * 100 ≈ 533 µs. You then set a compare value for the counter to define the period, which depends on your desired flash rate. This compare value is compared against the counter value by the timer's comparator, and once it matches, an interrupt is raised (if you have enabled interrupt generation on compare match), in which you can toggle your LED.
I hope this clarifies how a timer works.
In your sample code, the timer is configured to produce a compare-match event at a rate of 10 Hz, so the compare value is 187500 / 10 = 18750; for 1 second you could use 187500 / 1.
You have the timer control register TMR0_CTRL, where you configure whether to count up or down, count on falling/rising/both edges, count only once or continuously, and count up to the compare value and then reload or keep counting to the limit. Refer to the microcontroller manual for details of each bit field.
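If the compare register is too small to hold a full second's worth of counts, a common alternative is to keep the 10 Hz compare setting from the code above and divide it down in software. A minimal sketch follows; the handler name and the led_toggle() helper are hypothetical and depend on your SDK:
/* Hypothetical compare-match handler; hook it up to the TMR0 interrupt in your SDK. */
static volatile unsigned int irq_count = 0;

void tmr0_isr(void)
{
    /* clear the compare-match flag here, as the timer hardware requires */
    if (++irq_count >= 10) {   /* 10 interrupts at 10 Hz = 1 second */
        irq_count = 0;
        led_toggle();          /* toggle the LED once per second */
    }
}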
