STM32 maximum interrupt handling frequency - arm

I am trying to implement my own SPI communication from an FPGA to an STM32, in which the FPGA serves as master and generates the chip enable and clock. The FPGA transmits data on the rising edge and receives data on the falling edge; my FPGA code works properly.
On the STM side I capture this master clock with interrupts and receive data on its rising edge and transmit on its falling edge, but the communication stops working properly if I increase the clock speed beyond 250 kHz.
As I understand it, the STM32 runs at 168 MHz (I set the clock configuration for 168 MHz), so handling a 1 MHz interrupt should not be a big problem. Can anyone guide me on how to handle this high-speed clock on the STM32?
My code is written below:
/*
* Project name:
EXTI_interrupt (EXTI interrupt test)
* Copyright:
(c) Mikroelektronika, 2011.
* Revision History:
20111226:
- Initial release;
* Description:
This code demonstrates how to use External Interrupt on PD10.
PD10 is external interrupt pin for click1 socket.
receive data from mosi line in each rising edge.
* Test configuration:
MCU: STM32F407VG
http://www.st.com/st-web-ui/static/active/en/resource/technical/document/datasheet/DM00037051.pdf
dev.board: EasyMX PRO for STM32
http://www.mikroe.com/easymx-pro/stm32/
Oscillator: HSI-PLL, 140.000MHz
Ext. Modules: -
SW: mikroC PRO for ARM
http://www.mikroe.com/mikroc/arm/
* NOTES:
receive 32 bit data from mosi line in each rising edge
*/
// PD10: CLK
// PD2:  SS
// PC0:  MOSI
// PC1:  FLAG
int read=0;
int flag_int=0;
int val=0;
int rec_data[32];
int index_rec=0;
int display_index=0;
int flag_dint=0;
void ExtInt() iv IVT_INT_EXTI15_10 ics ICS_AUTO {
EXTI_PR.B10 = 1; // clear flag
flag_int=1; //Flag on interrupt
}
void main() {
  TFT_Init_ILI9340();   // moved inside main(): a call at file scope is not valid C
  GPIO_Digital_Input(&GPIOD_BASE, _GPIO_PINMASK_10);   // PD10: master clock input
  GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_13); // PD13 as digital output
  GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_12); // PD12 as digital output
  GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_14); // PD14 as digital output
  GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_15); // PD15 as digital output
  GPIO_Digital_Input(&GPIOA_IDR, _GPIO_PINMASK_0);    // PA0 as digital input (button)
  GPIO_Digital_Input(&GPIOC_IDR, _GPIO_PINMASK_0);    // PC0 as digital input (MOSI)
  GPIO_Digital_Input(&GPIOC_IDR, _GPIO_PINMASK_2);    // PC2 as digital input (SS)
  GPIO_Digital_Output(&GPIOC_IDR, _GPIO_PINMASK_1);   // PC1 as digital output (FLAG)
  // interrupt registers
  SYSCFGEN_bit = 1;              // enable clock for alternate pin functions
  SYSCFG_EXTICR3 = 0x00000300;   // map external interrupt line 10 to PD10
  EXTI_RTSR = 0x00000000;        // rising-edge trigger: none
  EXTI_FTSR = 0x00000400;        // falling-edge trigger: line 10 (PD10)
  EXTI_IMR |= 0x00000400;        // unmask EXTI line 10
  //NVIC_IntEnable(IVT_INT_EXTI15_10); // enabled later, on button press
  while (1)
  {
    // the interrupt is not enabled until the button is pushed
    if ((GPIOD_ODR.B2 == 0) && (flag_dint == 0))
    {
      if (Button(&GPIOA_IDR, 0, 1, 1))
      {
        Delay_ms(100);
        GPIOC_ODR.B1 = 1;                  // status signal for the FPGA
        NVIC_IntEnable(IVT_INT_EXTI15_10); // enable the external interrupt
      }
    }
    if (flag_int == 1)
    {
      // functionality on the rising edge
      flag_int = 0;
      if (index_rec < 32)                  // capture all 32 bits (was < 31, which dropped the last bit)
      {
        GPIOD_ODR.B13 = GPIOC_IDR.B0;      // show the data bit on an LED
        rec_data[index_rec] = GPIOC_IDR.B0; // save the data bit in the array
        index_rec = index_rec + 1;
      }
      else
      {
        flag_dint = 1;
        NVIC_IntDisable(IVT_INT_EXTI15_10);
      }
    }
  } // infinite loop
}

Without getting into your code specifics (see PeterJ_01's comment), the clock rate problem can be explained by a misunderstanding of throughput in your assumptions.
You assume that, because your STM device is clocked at 168 MHz, it can sustain the same throughput of interrupts, which you have conservatively relaxed to 1 MHz.
However, the interrupt throughput it can support is the inverse of the time the device takes to process each interrupt. That time includes both the time the processor takes to enter the service routine (i.e., detect the interrupt, suspend the current code, and resolve from the vector table where to jump) and the time taken to execute the routine itself.
Let's be super optimistic and say that entering the routine takes 1 cycle and the routine itself takes 3 (2 for the flags you set and 1 for the jump out of the routine). That gives 4 cycles, which at 168 MHz is 23.81 ns; taking the inverse gives 42 MHz. You can compute the same figure by dividing the maximum frequency (168 MHz) by the number of cycles spent processing.
Hence our really optimistic bound is 42 MHz, but realistically it will be much lower (on a Cortex-M4, interrupt entry alone takes around 12 cycles). For a more accurate estimate, measure your implementation's timings and dig into your device's documentation for interrupt response times.
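If you want to measure it on your own board, one option is the Cortex-M4 DWT cycle counter. This is a minimal sketch, assuming CMSIS device headers for the STM32F407 and the stock EXTI15_10 vector name (not mikroC syntax); adapt it to your toolchain:
#include "stm32f4xx.h"

volatile uint32_t isr_cycles; // cycles spent inside the handler body

void dwt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; // enable the DWT/trace block
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;            // start the free-running cycle counter
}

void EXTI15_10_IRQHandler(void)
{
    uint32_t start = DWT->CYCCNT;
    EXTI->PR = EXTI_PR_PR10;           // clear the pending flag for line 10
    // ... the actual per-edge work goes here ...
    isr_cycles = DWT->CYCCNT - start;  // note: excludes the ~12-cycle entry latency
}
Dividing 168 MHz by the measured cycle count (plus entry/exit overhead) gives a realistic ceiling on the edge rate you can service this way.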

Related

why is there a delay between (DR register written) and (data really showed) in UART on STM32F103CB?

I'm curious about the delay mentioned in the title. I toggled an I/O pin at the moment I wrote data into UART->DR; the delay varies from 3 microseconds to tens of microseconds.
int main(void)
{
  /* initial code generated by STM32CubeMX */
  HAL_Init();
  SystemClock_Config();
  MX_GPIO_Init();
  MX_USART1_UART_Init();
  while (1)
  {
    HAL_Delay(50);
    if (USART_GetFlagStatus(&huart1, USART_SR_TXE) == SET)
    {
      USART_SendData(&huart1, 'F');
    }
  }
}

void USART_SendData(UART_HandleTypeDef *huart, uint16_t Data)
{
  assert_param(IS_USART_ALL_PERIPH(USARTx));
  assert_param(IS_USART_DATA(Data));
  GPIOB->BSRR = GPIO_PIN_1;                  // set a debug pin high
  GPIOB->BSRR = (uint32_t)GPIO_PIN_1 << 16u; // reset the debug pin
  huart->Instance->DR = (uint8_t)(Data & (uint8_t)0x00FF); // send data (write DR)
}
I'm not sure whether the jitter is related to the 9600 baud rate (104 microseconds per bit).
Shouldn't the data appear immediately once the DR register is written?
And why isn't the delay always the same (or close to it)?
Shouldn't the data appear immediately once the DR register is written?
Not necessarily.
You are only showing us high-level language source code.
Have you looked at the actual instruction trace to determine the instruction time between these operations?
How do you ensure that no interrupt is serviced between these operations?
And why isn't the delay always the same (or close to it)?
Apparently that depends on the design of the UART.
You report that the baud rate is 9600, and (as expected) the interval for each bit appears to be slightly longer than 100 microseconds.
The fact that the observed latency is less than one bit interval is significant.
The typical UART uses a clock (the baud rate generator) that runs 16 times faster than the configured baud rate.
This faster-than-necessary clock is needed to oversample the receive signal, which can arrive at any time; it is asynchronous communication, after all.
For the transmit side, the baud rate generator is divided down to the nominal baud rate.
So for transmission, that clock quantizes in time when each bit of the frame starts (and ends) its transmission.
Since the CPU's write to the UART TxD data register is not synchronized with the transmit clock, you should expect a random delay of up to one bit interval before the frame's start bit appears on the wire.
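The numbers line up with that explanation. A quick back-of-the-envelope check (plain C, values taken from the question):
#include <stdio.h>

int main(void)
{
    const double baud = 9600.0;
    const double bit_time_us = 1e6 / baud; // ~104.17 us per bit at 9600 baud

    // The DR write lands at a random phase of the divided-down transmit
    // clock, so the start bit can be delayed by anything from ~0 up to
    // one full bit interval.
    printf("bit interval: %.2f us\n", bit_time_us);
    printf("expected DR-write-to-start-bit delay: 0 to %.2f us\n", bit_time_us);
    return 0;
}
A delay varying between 3 us and a few tens of us sits comfortably inside that 0 to 104 us window.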

How to achieve Adafruit Feather M0 sleep & wake on external interrupt using any pin?

I am currently working on a low-power project using the Adafruit Feather M0. A requirement of my project is to be able to sleep the CPU and wake it again using an external interrupt triggered by the MPU6050 accelerometer.
I have tested the following code sample from GitHub, and it works successfully. The question I need answered is: how do I alter this sample code to work on pin 13 of the Feather rather than pin 6?
#define interruptPin 6
volatile bool SLEEP_FLAG;

void EIC_ISR(void) {
  SLEEP_FLAG ^= true; // toggle SLEEP_FLAG by XORing it against true
  //Serial.print("EIC_ISR SLEEP_FLAG = ");
  //Serial.println(SLEEP_FLAG);
}

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  delay(3000); // wait for console opening
  attachInterrupt(digitalPinToInterrupt(interruptPin), EIC_ISR, CHANGE); // attach the ISR to pin 6, fired when the pin state CHANGEs
  SYSCTRL->XOSC32K.reg |= (SYSCTRL_XOSC32K_RUNSTDBY | SYSCTRL_XOSC32K_ONDEMAND); // keep the external 32k oscillator running in idle/sleep modes
  REG_GCLK_CLKCTRL |= GCLK_CLKCTRL_ID(GCM_EIC) | // generic clock multiplexer id for the external interrupt controller
                      GCLK_CLKCTRL_GEN_GCLK1 |   // generic clock 1, which is xosc32k
                      GCLK_CLKCTRL_CLKEN;        // enable it
  while (GCLK->STATUS.bit.SYNCBUSY);             // write protected, wait for sync
  EIC->WAKEUP.reg |= EIC_WAKEUP_WAKEUPEN4;       // set the External Interrupt Controller to wake on channel 4 (pin 6)
  PM->SLEEP.reg |= PM_SLEEP_IDLE_CPU;            // enable Idle0 mode - sleep CPU clock only
  //PM->SLEEP.reg |= PM_SLEEP_IDLE_AHB;          // Idle1 - sleep CPU and AHB clocks
  //PM->SLEEP.reg |= PM_SLEEP_IDLE_APB;          // Idle2 - sleep CPU, AHB, and APB clocks
  // It is either Idle mode or Standby mode, not both.
  SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;             // enable Standby or "deep sleep" mode
  SLEEP_FLAG = false;                            // begin awake
  // Built-in LED set to output and high
  PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].DIRSET.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // set pin direction to output
  PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].OUTSET.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // drive the pin high
  Serial.println("Setup() Run!");
}

void loop() {
  // put your main code here, to run repeatedly:
  if (SLEEP_FLAG == true) {
    PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].OUTCLR.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // drive the pin low
    Serial.println("I'm going to sleep now.");
    __WFI(); // wait for interrupt: sleep here until an interrupt fires
    SLEEP_FLAG = false;
    Serial.println("Ok, I'm awake");
    Serial.println();
  }
  //Serial.print("SLEEP_FLAG = ");
  //Serial.println(SLEEP_FLAG);
  PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].OUTTGL.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // toggle the built-in LED pin
  delay(1000);
}
As per the pinout diagram and Atmel datasheet, I am struggling to work out which changes to make to allow pin 13 to operate in the same way as pin 6.
Atmel Datasheet
The obvious solution is to change the following lines...
#define interruptPin 13
EIC->WAKEUP.reg |= EIC_WAKEUP_WAKEUPEN1; // set the External Interrupt Controller to use channel 1 (pin 13)
I suspected channel 1 (WAKEUPEN1) because of the ENINT^1 label next to pin 13 on the pinout diagram. But this didn't work; the pin did not exhibit the same behaviour as the pin 6 setup.
I would be very grateful for any suggestion on how to get this code working on pin 13. Many thanks for your support.
I'm not an authority here, and your code looks correct to me.
Except: the pinout shows that pin 13 is the built-in LED line, and you manipulate LED_BUILTIN in several places in your code. That is almost certainly conflicting with your attempt to use pin 13 as an interrupt line.
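If that diagnosis is right, a minimal sketch of the change would look like this (assuming a Feather M0/SAMD21 where D13 maps to PA17 and EIC channel 1, which matches your WAKEUPEN1 guess; the rest of the original setup stays the same):
#define interruptPin 13   // D13 = PA17 = EXTINT[1] on the Feather M0 (assumed mapping)

volatile bool SLEEP_FLAG;

void EIC_ISR(void) {
  SLEEP_FLAG ^= true;
}

void setup() {
  attachInterrupt(digitalPinToInterrupt(interruptPin), EIC_ISR, CHANGE);
  // ... same oscillator / GCLK / sleep-mode configuration as the original sketch ...
  EIC->WAKEUP.reg |= EIC_WAKEUP_WAKEUPEN1; // wake on EIC channel 1 (pin 13)
  // Crucially: no PORT writes to LED_BUILTIN anywhere, since the built-in
  // LED shares the D13 pin. Use a different pin for a status LED if needed.
}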

How to properly configure the USART_BRR register in STM32L476RG uC?

I'm trying to write my own driver for USART TX on an STM32L476RG Nucleo board.
Here are the datasheet and the reference manual.
I'm using Keil uVision 5, and in the Manage Run-Time Environment dialog I set:
CMSIS > Core
Device > Startup
Xtal = 16 MHz
I want to create a single-character transmitter. Following the manual's instructions in Sec. 40, p. 1332, I wrote this code:
// APB1 connects USART2
// The USART2 EN bit in APB1ENR1 is bit 17
// See the alternate-function pin tables for USART2_TX: PA2 is the pin and AF7 (AFRL register) is the function to select
#include "stm32l4xx.h" // Device header

#define MASK(x) ((uint32_t)(1 << (x))) // no trailing semicolon: it would break uses inside larger expressions

void USART2_Init(void);
void USART2_Wr(int ch);
void delayMs(int delay);

int main(void){
  USART2_Init();
  while(1){
    USART2_Wr('A');
    delayMs(100);
  }
}

void USART2_Init(void){
  RCC->APB1ENR1 |= MASK(17); // enable USART2 on APB1
  // We know the pin that provides USART2_TX is PA2, so...
  RCC->AHB2ENR |= MASK(0);   // enable GPIOA
  // Now put AF7 on PA2, by placing AF7 = 0111 in AFSEL2 (pin 2)
  // AFR[0] is the GPIOA_AFRL register
  // Remember: each pin uses 4 bits to select its alternate function; see pg. 87 of the datasheet
  GPIOA->AFR[0] |= 0x700;
  GPIOA->MODER &= ~MASK(4);  // set PA2's MODER field to alternate function ("10")
  // USART features -----------
  //USART2->CR1 |= MASK(15); // OVER8 = 1
  USART2->BRR = 0x683;       // USARTDIV = 16 MHz / 9600?
  //USART2->BRR = 0x1A1;     // This one works!!!
  USART2->CR1 |= MASK(0);    // UE
  USART2->CR1 |= MASK(3);    // TE
}

void USART2_Wr(int ch){
  // wait until the TX buffer is empty
  while(!(USART2->ISR & 0x80)) {} // TXE (bit 7) goes high when TDR can accept data,
                                  // so spin here until it does
  USART2->TDR = (ch & 0xFF);
}

void delayMs(int delay){
  int i;
  for (; delay > 0; delay--){
    for (i = 0; i < 3195; i++);
  }
}
Now, the problem:
The system works, but not properly. If I set RealTerm to 9600 baud, the rate configured by 0x683 in the USART_BRR register, it shows the wrong characters; but if I set RealTerm to 2400 baud, it works!
To derive the 0x683 for the USART_BRR register I followed Sec. 40.5.4, "USART baud rate generation", which says that with OVER8 = 0, USARTDIV = BRR. In my case, USARTDIV = 16 MHz / 9600 = 1667d = 683h.
I think the problem lies in this line:
USART2->BRR = 0x683; // USARTDIV = 16 MHz / 9600?
because if I replace it with
USART2->BRR = 0x1A1;
the system works at 9600 baud.
What's wrong in my code, or in my understanding of the USARTDIV computation?
Thank you in advance for your support.
Sincerely,
GM
The default clock source for the USART is PCLK1 (figure 15). PCLK1 = SYSCLK / AHB_PRESC / APB1_PRESC. If 0x1A1 results in a baud rate of 9600, that suggests PCLK1 = 4 MHz.
4 MHz happens to be the default frequency of your processor (and of PCLK1) at start-up, when running from the internal MSI RC oscillator. So the most likely explanation is that you have not configured the clock tree, and you are not running from the 16 MHz HSE as you believe.
Either configure your clock tree to use the 16 MHz source, or perform your calculations with the MSI frequency. The MSI's precision is just about good enough over the normal temperature range to maintain a sufficiently accurate baud rate, but it is not ideal.
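To make that concrete, here is a sketch of the computation (assuming OVER8 = 0, so BRR = fCK / baud per Sec. 40.5.4; the helper name is mine):
#include <stdint.h>

// Rounded integer USARTDIV for OVER8 = 0.
static uint32_t usart_brr(uint32_t pclk_hz, uint32_t baud)
{
    return (pclk_hz + baud / 2u) / baud;
}

// With the reset-default MSI clock: 4000000 / 9600  = 417  = 0x1A1
// With a 16 MHz clock:             16000000 / 9600 = 1667 = 0x683
Which is exactly why 0x1A1 "works": it is the correct divider for the 4 MHz clock you are actually running from.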

How to get millisecond resolution from a DS3231 RTC

How can I get accurate milliseconds?
I need to calculate the delay of sending data from Arduino A to Arduino B. I tried to use a DS3231, but I cannot get milliseconds from it. What should I do to get accurate milliseconds from the DS3231?
The comment above is correct, but using millis() when you have a dedicated real-time clock makes no sense. I'll provide better instructions.
The first thing in any hardware-interfacing project is a close reading of the datasheet. The DS3231 datasheet reveals that there are five possible frequencies of sub-second output (see page 13):
32 kHz
1 kHz
1.024 kHz
4.096 kHz
8.192 kHz
The last four options are selected by combinations of the RS1 and RS2 control bits.
So, for example, to get exact milliseconds you'd target option 2, 1 kHz: set RS1 = 0 and RS2 = 0 (see page 13 of the datasheet you provided) and INTCN = 0 (page 9). Then you need an ISR that catches the interrupts arriving from the device's !INT/SQW pin on a digital input pin of your Arduino.
volatile uint16_t milliseconds; // volatile is important here, since we change this variable inside an interrupt service routine

ISR(INT0_vect) // or whatever pin/interrupt you choose
{
  ++milliseconds;
  if (milliseconds == 999) // roll over to zero
    milliseconds = 0;
}
OR:
const int RTCpin = 3; // use any digital pin you need

void setup()
{
  pinMode(RTCpin, INPUT);
  GICR |= (1 << INT0);   // globally enable the INT0 interrupt
  MCUCR |= (1 << ISC00); // any signal change triggers the interrupt
  MCUCR |= (0 << ISC01);
}
If these commands in setup() don't work on your Arduino, google 'Arduino external interrupt INT0'. I've shown you two ways: one with Arduino code and one in C.
Once you have this ISR working and pin 3 of the DS3231 connected to a digital input pin of your choosing, that pin will be triggered at 1 kHz, i.e. every millisecond. Perfect!
// down in the main program you now have access to milliseconds; you might want to start by setting:
// when the RTC's seconds value changes:
milliseconds = 0; // so you can measure milliseconds since the last second
That's all there is to it. All you still need to learn is how to set the control register using I2C commands, and you're all set.
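For reference, a minimal sketch of that I2C write (Arduino Wire; 0x68 is the DS3231's fixed I2C address and 0x0E its control register; the helper name is mine):
#include <Wire.h>

void ds3231EnableSqw() // hypothetical helper; call Wire.begin() in setup() first
{
  Wire.beginTransmission(0x68); // DS3231 I2C address
  Wire.write(0x0E);             // control register
  Wire.write(0x00);             // INTCN = 0, RS2 = 0, RS1 = 0 -> square wave out on !INT/SQW
  Wire.endTransmission();
}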
The C code example gains 1 ms every second. Should be:
{
  if (milliseconds == 999) // roll over to zero
    milliseconds = 0;
  else
    ++milliseconds;
}

microcontroller TMR0 timer counter interrupt

I am programming a PIC16F676 that talks to an MCP2515 over SPI. It sets a flag every 224 ms: TimerCounter increments from 0xF8 to 0xFF and then overflows to set this flag; hence 32 ms * 07H = 224 ms. My question is how the timer is made to interrupt every 32 ms, i.e. WHERE this 32 ms comes from.
// Timer interrupts every 32 ms and sets a flag every 224 ms (32 ms * 07H = 224 ms)
// Initial value = FFH - 07H = F8H
if (T0IF) // TMR0 overflow interrupt flag bit
{
  TimerCounter++;
  if (!TimerCounter) // if rolled over, set flag. Main will handle the rest.
  {
    TimerCounter = 0xF8;
    gSampleFlag = 1;
  }
  T0IF = 0; // reset for next flag
}
The timer's 32 ms period is determined by its configuration, which is not included in your code excerpt (it is presumably done elsewhere in your code). Read section 4.0 of the PIC16F630/676 datasheet, which explains the Timer0 module.
Timer0 is configured as follows:
T0CS is either:
  cleared to select the internal clock source (timer mode) for Timer0,
  or set to select the external clock source (counter mode).
T0IE is set to enable the Timer0 interrupt.
The prescaler may be used in conjunction with the internal clock source to adjust the tick rate of Timer0:
  PSA is cleared to assign the prescaler to Timer0.
  PS2:PS0 are set to select the prescaler rate.
So either the external clock source, or the internal clock source combined with the prescaler, determines the tick rate of Timer0; a typical register setup is sketched below.
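As a sketch only, assuming XC8-style register and bit names and an internal clock with a 1:256 prescale (your actual values may differ):
void timer0_init(void)
{
    OPTION_REG = 0b10000111; // T0CS = 0: internal clock (timer mode)
                             // PSA  = 0: prescaler assigned to Timer0
                             // PS2:PS0 = 111: 1:256 prescale
    TMR0 = 0;                // start counting from zero
    T0IF = 0;                // clear any stale overflow flag
    T0IE = 1;                // enable the Timer0 overflow interrupt
    GIE  = 1;                // global interrupt enable
}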
The 32 ms is the period of the clock source that your timer counter counts, 0x07 times over.
Your timer unit is synchronized to a common clock source derived from either an external crystal or the internal oscillator. During clock configuration you decide the frequency of the peripheral bus clock that feeds your timer unit, and when configuring the timer you can divide this peripheral clock further with a prescaler to extend the achievable period.
Now suppose your peripheral bus clock is 1 MHz and your prescaler is 1: the counter then increments (or decrements) every 1 us, so a count of 0x07 yields a period of only 7 us. In your example you would need a prescaler of 32000 (if the hardware allowed it) so that one count corresponds to 32 ms.
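As a worked example of where a roughly 32 ms overflow period can come from (assumed values; the question's init code is not shown):
#include <stdio.h>

int main(void)
{
    const double fosc_hz  = 4e6;          // assumed oscillator: 4 MHz
    const double f_instr  = fosc_hz / 4;  // PIC instruction clock = Fosc/4 = 1 MHz
    const int    prescale = 128;          // assumed TMR0 prescaler of 1:128
    const int    counts   = 256;          // full 8-bit TMR0 rollover

    double overflow_ms = counts * prescale / f_instr * 1e3;
    printf("TMR0 overflow period: %.3f ms\n", overflow_ms); // ~32.768 ms, i.e. ~32 ms
    return 0;
}
So "32 ms" is simply the full 8-bit overflow period under one plausible clock/prescaler choice; check your own init code for the actual values.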
