I am currently making some additions to code for an STM32F3 MCU that I did not write. Part of what the product will do is blink LEDs at 1 Hz during a test, until the MCU determines whether the test passes or fails.
The previous writer of this code implemented two delays: one is a for loop based on the clock rate, the other is driven by a timer interrupt. The first delay gets called like this:
MsDelay(1000); //Provides approx a one second delay via a for loop
The other is called like this:
Wait(x,y); //x indicates the timer channel, y is delay in milliseconds
The "wait(x,y)" function goes through a clock tick check and some incrementing to implement this delay.
The problem I'm having is with the LED blinking I mentioned earlier. To test this, I call the following in the test's ISR:
Set_Led(1,1); //Turns LED1 ON
Wait(1,1000); //Wait one second
Set_Led(2,1); //Turns LED2 ON
However, there is no one-second delay between the LEDs turning on. If I instead call:
Set_Led(1,1); //Turns LED1 ON
MsDelay(1000); //Wait one second
Set_Led(2,1); //Turns LED2 ON
there is a one-second delay. I am not very familiar with the differences between blocking and non-blocking delays, other than that a blocking delay takes all processing power away from the rest of the program. Should there not be a pause between the LEDs turning on when using the non-blocking delay? Thank you.
I'm using an STM32F4-based bare-bones board (Black Pill) to run a program for my project.
I'm using STM32CubeIDE for code generation.
[Figure: "current over time" cases, a graph I made in Paint to explain the post]
My project revolves around protecting an inductive load against short circuits (not essential here, just for clarification).
I'm using interrupts: the first interrupt triggers once the current reaches Reference Value 1.
The second interrupt triggers once it reaches Reference Value 1.
Since the current noise can't be filtered in my case, I have to keep the noise from triggering interrupt 2's action.
Therefore I put in a delay that is a bit longer than the noise period (about 100 ns):
If the delay has ended and the trigger is still on (high), shut down the system (change the output).
If the delay has ended and the trigger is off (low), keep the system running (keep the initial output).
This is the code I came up with so far:
I believe what you're looking for is a "Timer" and some interrupt handling magic. I will expand a little.
If your interrupt is OFF (in NVIC only; the rest is configured) but an interrupt-triggering event occurs, the interrupt will NOT fire (obviously). But if you then enable the interrupt in NVIC, it will fire immediately.
Example:
You set up a GPIO as input, you set up EXTI (external interrupt) and SYSCFG (binding the port to EXTI); basically, you make a rising-edge interrupt
In NVIC the corresponding interrupt is OFF
Rising edge happens on GPIO, immediately goes back down to LOW
You enable an interrupt in NVIC
Interrupt fires (even though the input never had a rising edge after the NVIC interrupt was turned on)
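In CMSIS terms, that sequence looks roughly like this (a sketch, using EXTI line 0 as an example):

NVIC_DisableIRQ(EXTI0_IRQn);    /* interrupt OFF in NVIC; EXTI itself stays armed */

/* ... a rising edge occurs on the pin and goes low again:
   the request is latched as pending in the NVIC ... */

NVIC_EnableIRQ(EXTI0_IRQn);     /* the pending request is still set, so
                                   EXTI0_IRQHandler() runs immediately */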
My idea is the following.
In the interrupt 1 handler, you do two things:
Disable interrupt 2 in NVIC.
Launch a delay via a timer with its interrupt enabled.
When interrupt 1 fires, it immediately disables interrupt 2 and enables the timer. The timer eventually fires its own interrupt, in which it enables interrupt 2 in NVIC. If the interrupt 2 event has happened, the interrupt 2 handler will be called immediately. If not, interrupt 2 will not fire.
During all this waiting, your MCU is free to do whatever it wants; it's a fully interrupt-driven implementation.
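A rough sketch of the scheme (STM32F4 register names; the EXTI lines and TIM2 are placeholder choices, and the timer is assumed to be configured elsewhere as a one-shot with the ~100 ns period):

void EXTI0_IRQHandler(void)        /* interrupt 1: current reached the reference */
{
    EXTI->PR = EXTI_PR_PR0;        /* clear the EXTI line 0 pending flag */
    NVIC_DisableIRQ(EXTI1_IRQn);   /* mask interrupt 2 during the delay */
    TIM2->CNT = 0;
    TIM2->CR1 |= TIM_CR1_CEN;      /* start the delay timer */
}

void TIM2_IRQHandler(void)         /* the delay has elapsed */
{
    TIM2->SR = ~TIM_SR_UIF;        /* clear the update flag */
    NVIC_EnableIRQ(EXTI1_IRQn);    /* if interrupt 2's event occurred,
                                      its handler fires right now */
}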
Hi, I'm using an STM32G070 Nucleo board and I was trying to learn how to use the UART with interrupts. I made a simple LED-blinking program in main, and in the interrupt handler there is a simple echo program. The echo works fine, but the LED blinking never starts.
Below is my code:
//while loop for LED blinking
while (1)
{
    GPIOA->BSRR |= (GPIO_BSRR_BS5);
    //delaySys is a blocking delay function that takes milliseconds as input
    delaySys(1000);
    GPIOA->BSRR &= ~(GPIO_BSRR_BS5);
    GPIOA->BSRR |= (GPIO_BSRR_BR5);
    delaySys(1000);
}
Next is my interrupt handler:
void USART2_IRQHandler()
{
    if ((USART2->ISR) & (USART_ISR_RXNE_RXFNE))
    {
        USART2->TDR = USART2->RDR;
        while (!((USART2->ISR) & (USART_ISR_TC)))
            ;
        NVIC_ClearPendingIRQ(USART2_IRQn);
    }
    if (((USART2->ISR) & (USART_ISR_TXE_TXFNF)))
    {
        //TRANSMISSION COMPLETE
    }
}
If you expect a subsequent stream of bytes from the UART, it might sometimes be justified to stay in the ISR and poll until you have received the expected amount. However, the rest of the program will hang and wait while you do.
You shouldn't need to call NVIC_ClearPendingIRQ from inside the ISR, because the flag causing the interrupt should be cleared automatically, typically by reading the status register followed by a data register access, or similar. Check the UART section of the reference manual for the register descriptions.
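For example, the receive branch can shrink to this (a sketch; it assumes the transmitter is idle between received bytes, so TDR is free to take the echo):

void USART2_IRQHandler(void)
{
    if (USART2->ISR & USART_ISR_RXNE_RXFNE)
    {
        uint8_t byte = (uint8_t)USART2->RDR;  /* reading RDR clears RXNE */
        USART2->TDR = byte;                   /* echo it back */
    }
}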
A better but more complex way to deal with UART rx interrupts without stalling is to use DMA. I think most STM32 should have support for this, but I'm not sure.
Looking at your blinking code, however, it's problematic:
// light up 1000 ms
GPIOA->BSRR |= (GPIO_BSRR_BS5);
delaySys(1000);
// lights off
GPIOA->BSRR &= ~(GPIO_BSRR_BS5);
// light up some other LED? NOTE: BR5, not BS5
GPIOA->BSRR |= (GPIO_BSRR_BR5);
delaySys(1000);
This can do one of two things:
If GPIO_BSRR_BS5 and GPIO_BSRR_BR5 are the same bit mask, then it lights up the LED for approximately 1000 ms and turns it off for approximately 50-100 ns. The LED stays lit around 99.99% of the time.
If GPIO_BSRR_BS5 and GPIO_BSRR_BR5 are different bit masks, then one LED will blink with a 1000 ms on/off cycle and another will stay lit 100% of the time.
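Side note: BSRR is a write-only set/reset register, so read-modify-write is unnecessary; with the standard CMSIS masks, BS5 sets pin 5 and BR5 resets the same pin, and plain writes are enough:

while (1)
{
    GPIOA->BSRR = GPIO_BSRR_BS5;   /* set PA5: LED on */
    delaySys(1000);
    GPIOA->BSRR = GPIO_BSRR_BR5;   /* reset PA5: LED off */
    delaySys(1000);
}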
Okay, I fixed the problem somehow. I had enabled the interrupt for TX before, but when I disabled it, everything worked fine. It seems TX was raising an interrupt request and competing with main.
I want to solve a very particular problem with an Arduino, running on an ATmega328.
I have Timer1 running in fast PWM mode to generate a PWM signal with a given period and duty cycle. The duty cycle is set by OCR1A: at timer restart the output pin goes high, and after the duty period it goes low. This works.
Additionally, I want to carry out an analog measurement a precise time after the rising edge. So I enabled the OCR1B compare interrupt and define the time by writing to OCR1B. When the timer reaches the value in OCR1B, the interrupt handler is invoked and the measurement is done. This works.
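In code, the working part looks roughly like this (simplified; the compare values are illustrative and the fast PWM mode setup is omitted):

#include <avr/io.h>
#include <avr/interrupt.h>

void setup_timer1(void)
{
    OCR1A = 200;                /* duty cycle: output high until this count */
    OCR1B = 100;                /* measurement point within the period */
    TIMSK1 |= (1 << OCIE1B);    /* enable the compare-B interrupt */
    sei();
}

ISR(TIMER1_COMPB_vect)          /* timer has reached OCR1B */
{
    ADCSRA |= (1 << ADSC);      /* start the analog measurement */
}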
Now I want to do this ADC conversion twice, at different counter values, say my_OCRB1 and my_OCRB2. But there is only one OCR1B register I can use. Is it OK to set OCR1B to the next value, my_OCRB2, after the first ADC conversion has finished? Will it work and raise an interrupt again while the timer is still counting up?
Is there a better solution?
I am working with an STM8 (not my code, but I am maintaining it), and it uses a timer. The clock is set at 16 MHz, so one tick is 0.0625 µs. The timer settings are ARRH=0x03, ARRL=0x20, so (0x0320 = 800) it resets at 800 ticks (ergo 50 µs).
PSCR is set to 0, so the timer runs at the same frequency as the micro.
Anyway, when checking this with an oscilloscope, the readings are not what I expect.
The timer interrupt is called at:
56 µs, 54 µs, 54 µs, 52 µs, 52 µs, 52 µs, 38 µs (!!!), 42 µs (?), 50 µs, 50 µs...
Curiously, summed up it gives 500 µs, so it does amount to 10 times 50 µs.
During the first 8 interrupts some A/D conversion is happening, so it is possible that an A/D interrupt is being serviced in between as well.
1) Do you think this is affecting the timing of the timer interrupt?
2) Why does it "correct" itself by firing an interrupt at 38 µs?
I would appreciate any comment based on your embedded or STM8 experience, since I know precise answers would require examining the code...
I'm not sure if you still need an answer. I once had the same problem and searched for a long time... the solution in my case was simple:
I had an ADC ISR with high jitter. It came from my main loop: in some sub-sub-subroutine the ADC interrupt was temporarily disabled for a critical section (data transfer between interrupt and main-loop context). The effect is exactly what you describe:
Sometimes the time between two interrupts is longer, because the interrupt is pending, waiting for execution until it is enabled again. The timer keeps running meanwhile. Timing example:
interrupt is disabled in the main loop (or a subroutine)
interrupt flag is set by the timer -> interrupt pending
interrupt is enabled again -> ISR is executed too late
interrupt is disabled in the main loop
interrupt flag is set by the timer -> pending
interrupt is enabled again -> ISR is executed much too late
main loop does NOT disable the interrupt this time (maybe due to control flow, maybe a timing coincidence)
The next interrupt is executed at the right time, which is 50 µs after the last interrupt was raised, NOT 50 µs after the last ISR was called. --> The time between ISR calls is shortened.
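As a sketch (using the STM8 intrinsics as an example; the variable names are illustrative):

volatile uint16_t isr_result;   /* written by the ISR */
uint16_t main_copy;

void transfer_data(void)        /* called from some sub-sub-subroutine */
{
    disableInterrupts();        /* critical section begins */
    main_copy = isr_result;     /* safe copy between contexts */
    enableInterrupts();         /* a timer event inside this window stays
                                   pending; its ISR then runs late, which
                                   shows up as jitter */
}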
I hope this helps.
This is my first post on this forum.
I am developing a MIDI sequencer device based on an STM32F429 Discovery board running at the stock 180 MHz. In order to send MIDI messages, USART1 is configured for 31250 baud and the appropriate DMA stream is configured to transfer a 3-byte array stored in RAM to the USART. I was testing the evenness of the timing of MIDI message sending by configuring the Timer 4 update interrupt, within whose service routine I enable the memory-to-peripheral USART1 DMA operation. This gives me periodic sending of a 3-byte message over the USART1 peripheral.
Everything works great, with correct frequency and correct data, but I have a small issue which I have been researching for a few days now and have not been able to correct. To make things clearer: within the timer interrupt routine I make an LED on the Discovery (PG13) blink momentarily, and I connected channel 1 of an oscilloscope to the LED pin. Channel 2 of the oscilloscope is connected to the USART TX pin. Now, when the code executes, I can see the LED pulse on the oscilloscope's CH1, followed by the USART serial data on CH2. But for some reason the time between the LED pulse and the beginning of the serial data transfer fluctuates with every sending of the data. It increments with every send, going from around 1 µs to around 30 µs, and then jumps back to 1 µs.
I noticed that if I slightly change the USART baud rate, the pattern of the time fluctuation between the pulse and the data changes, ramping faster or slower and over a longer or shorter range.
I have tried resetting all the appropriate flags in the USART as well as the DMA, tried disabling/enabling the timer, and played with interrupt priorities, but nothing has got rid of the time fluctuation.
As you can imagine, the stability of this is crucial for a MIDI sequencer application, as it sets the timing of the musical events, which must be rock solid.
I have also tried using the USART by itself without DMA, manually sending every byte, with basically the same results. Interrupt-driven USART TX exhibited similar results.
The only thing that seemed to get rid of the time fluctuation of the USART TX response is to deinitialize the USART and DMA modules before every sending operation and reinitialize them again. This gives stable operation, but it inserts a long delay between the timer interrupt and the actual sending of the data over the USART, which is unacceptable.
If anyone has any thoughts on this or has done anything similar, I would appreciate advice on where to look.
Thanks a lot in advance!
Best regards,
Konstantin
Even with your detailed description there are various possibilities for error, so the best I can do is guess:
Maybe one of the TIM settings is just slightly wrong: what about the timer's auto-reload register (TIM4_ARR)?
The reload value must be one unit lower than the desired transmission period divided by the (possibly prescaled) clock period (see the upcounting/downcounting details in the reference manual).
Now, if the reload value were equal to that quotient instead, the second trigger would be late by one tick period, the third trigger by two, and so on (which may look like what you described).
This "ramp of delays" would then grow until the unwanted delay sums up to one UART bit period (which happens to be 32 µs at 31250 baud, quite near the "around 30 µs" you described). The next trigger would then just fit the neighbouring UART bit cycle (without much delay).
Comparing this hypothesis with your other findings...
Changing the UART baud rate would preserve the fundamental error, but the duration of the irritating delay changes. It can appear to change its sign ("faster or slower"), depending on the beat characteristics between the (actual) TIM period and the UART bit period. => OK
Changing the event processing from DMA to IRQ handler wouldn't change much about the problem but only the "phase" of the initial delay (by the time the CPU needs to execute a different ST library function). => OK
Disabling and re-enabling the UART might have changed the behaviour because the UART clock may re-synchronize with the underlying bus clock (APB2 for USART1), so the delay after the TIM trigger would appear constant, and you wouldn't notice fluctuations. => OK