Unexpected serial communication when using a timer interrupt on a 32-bit ARM MCU

My code does serial communication and LED blinking using a timer interrupt: the LED toggles at a fixed interval while some lines are printed to the serial terminal, and the fixed interval is generated by a timer.
Scenario:
I am continuously sending the string "while" to the serial terminal. When the timer expires, an interrupt occurs, my LED toggles, and the string "RED LED blinked for 500 ms" is also printed to the terminal. Ideally, the timer handler should run only when the timer expires; the rest of the time my while loop should execute and its output should appear on the terminal.
But this is not happening: the serial prints do not come out as they should.
Let me know what the actual reason behind this is.
I have changed the baud rate (115200 to 9600), and the output was quite similar.
If I print only 'w' instead of "while", the behaviour is similar, but the timer string "RED LED blinked for 500 ms" prints twice.
I don't see what the timer is busy with to cause this behaviour.
Below is my code:
#define d9 "while\r\n"

int main(void)
{
    // Serial communication initialization done
    // Timer initialization done
    while (1)
    {
        // This line prints continuously on the terminal
        LPUART_DRV_SendDataPolling(INST_LPUART1, (uint8_t *) d9, strlen(d9));
    }
}
Actual Output:
...
while
while
while
wRED LED blinked for 500 ms
hRED LED blinked for 500 ms
iRED LED blinked for 500 ms
lRED LED blinked for 500 ms
eRED LED blinked for 500 ms
while
while
...
Expected Output:
...
while
while
while
RED LED blinked for 500 ms
while
while
...

The actual output you present shows that the string "RED LED blinked for 500 ms" (call it "the ISR string") interrupts the string "while" (call it "the main loop string").
It does so at a steady rate, letting exactly one character of the main loop string through between two occurrences, which matches perfectly with a reloading timer and the uniform behaviour inside the main loop.
This strongly supports (I'd almost say "it proves") the assumption in the comments by markus-nm and Chris Stratton that you have a concurrency problem.
Their hints are correct. The easiest fix is to take a mutex (or, simplified, to disable interrupts) before accessing the global resource where the two data streams are merged. The resource where your concurrency problem originates may be a buffer in RAM or even the UART port registers themselves. If you are still new to concurrent programming, watch out not to run into synchronisation/deadlock problems when you add further mutexes around features all over the system without experience or background knowledge.
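As a minimal sketch of the "disable interrupts" variant (assuming a CMSIS-style environment that provides __disable_irq()/__enable_irq(); your SDK may offer its own critical-section API instead), the main loop's UART access could be made atomic with respect to the timer ISR like this:

    while (1)
    {
        __disable_irq();  /* enter critical section: the timer ISR is held off */
        LPUART_DRV_SendDataPolling(INST_LPUART1, (uint8_t *) d9, strlen(d9));
        __enable_irq();   /* leave critical section: a pending IRQ fires now   */
    }

Note the trade-off: the ISR can now be delayed by up to one full polled transmission (roughly 0.6 ms for 7 characters at 115200 baud), which is harmless against a 500 ms timer but would matter for tighter deadlines.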
A more powerful (but more sophisticated) solution to your concurrency problem would be to use an RTOS and have a small task that only outputs to the UART whatever other contexts send to its input queue. Then the RTOS takes care of concurrency. Any details of this strategy are beyond the scope of your question.
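Purely to illustrate the shape of that approach (a sketch using FreeRTOS names, not drop-in code for your SDK):

    static QueueHandle_t uartQueue;  /* e.g. xQueueCreate(8, sizeof(const char *)) */

    static void uartGatewayTask(void *arg)
    {
        const char *msg;
        for (;;)
        {
            /* block until some context posts a string, then print it; since
               only this task ever touches the UART, no locking is needed */
            if (xQueueReceive(uartQueue, &msg, portMAX_DELAY) == pdTRUE)
                LPUART_DRV_SendDataPolling(INST_LPUART1, (uint8_t *) msg, strlen(msg));
        }
    }

    /* main loop:  xQueueSend(uartQueue, &str, 0);
       timer ISR:  xQueueSendFromISR(uartQueue, &str, NULL); */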

Related

Log data from MPU6050 through serial (UART) fails (data loss)

Here is the problem I am facing. I have interfaced my ATmega328P with a 6-axis IMU (MPU6050 with the GY521 breakout board). I can read data through the TWI interface (Atmel's I2C) and send it to my PC (running Ubuntu) via the UART. I am using custom-built libraries for both of these communication protocols, but they are pretty standard and seem to work just fine. The goal of the project is to compute orientation data from the IMU readings in real time, say at 100 Hz.
The main problem is that I cannot log data from the device at 100 Hz (not even at 50 Hz). The orientation filter I am using (here) requires a quite high frequency and 100 Hz turned out to work fine (tested offline acquiring data from another device).
Right now, I am using the 16-bit timer of the ATmega328P to sample data at 100 Hz, and this seems to work, as I have added a line to the ISR that toggles the built-in LED and it looks to me like it is blinking at 100 Hz (I can barely see it turning on and off). In the same ISR, I read the values from the inertial sensor and, just to log them, send these values through the serial port. Every 10 ms (maximum), I send 9 floats (36 bytes) at a baud rate of 115200. If I use the Arduino IDE's Serial Monitor to visualize this data stream, I notice something very weird, as in the following screenshot.
https://imgur.com/zTBdkhv
As you can see from the timestamps, there is a recurring 33 ms delay every 2 or 3 sets of samples received. Moreover, I get only roughly 60% of the data: an acquisition of 10 seconds gets me fewer than 600 samples (per variable) instead of 1000. I also tested sending only one variable through the UART (i.e. a single float, 4 bytes), and this results in the same behavior!
By the way, I am exploiting the following to send each byte (char) via the UART interface.
void writeCharUART(char c) {
    loop_until_bit_is_set(UCSR0A, UDRE0);  // busy-wait until the TX data register is empty
    UDR0 = c;                              // then write the byte
}
Even though my ISR runs at 100 Hz (the LED blinking seems to confirm that), data loss may occur at the level of the TWI transmission. To test that, I modified the ISR code to send just a plain char ('T') instead of data from the MPU, and I got similar behavior. Something like this:
00:10:05.203 -> T
00:10:05.203 -> T
00:10:05.236 -> T
00:10:05.236 -> T
00:10:05.236 -> T
00:10:05.236 -> T
00:10:05.269 -> T
So, I guess there is something wrong with the UART library and I actually sample at 100 Hz, but the logging frequency is much lower (and not constant). How can I solve this issue and/or debug the UART library? Do you see other reasons to justify this issue?
EDIT 1
As pointed out in the comments, it seems to be a problem of the receiving software that limits the frequency to ~30 Hz by some sort of buffering. To confirm that, I programmed the ATmega328P with the following code (this time using the IDE).
void loop() {
    Serial.println("T");
}
At first, I thought there was no delay this time, but I found one after 208 samples. So there are ~200 samples received with the same timestamp and another bunch of samples after 33 ms. This supports the idea that the receiving software introduces the delay.
I also tested a simple serial monitor that I had developed in C and, even though it has no timestamp functionality, I am also losing samples if I fix the duration of the acquisition while sampling at 100 Hz. My serial monitor is based on the termios.h library, but I could not find any documentation about the way it buffers incoming data.
There are two issues here:
You are missing messages. You checked the sample rate just with your eyes and told us that you can still see a very fast blinking. Depending on the colour of your LED, the ambient light, your physical state, and your eyes this could mean anything from 30 Hz to 100 Hz.
I would not trust my eyes to estimate and rather use an oscilloscope or a frequency counter to measure.
You could reduce the frequency of the LED blinking to 1 Hz or even lower by dividing in software. Such a low frequency can be measured by hand with a stopwatch: for example, count 30 blinks and check the time needed.
Add a counter to the message and increment it with each message; you will see right away if you're losing data (see the sketch after this list).
The timestamps seem to indicate that the messages are "clustered" at about 30 Hz.
I'm guessing that the source of the timestamps is running at 30 Hz, so it cannot give you more accurate values.
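To make the counter suggestion concrete, here is a small sketch (sendSample() and its arguments are hypothetical; serial_print() is the helper already used in this question):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t seq = 0;  /* message sequence number, wraps at 65535 */

    void sendSample(float x, float y, float z) {
        char buffer[64];
        sprintf(buffer, "%u: %f, %f, %f\n", (unsigned) seq++, x, y, z);
        serial_print(buffer);  /* any gap in the printed numbers means lost messages */
    }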
I kind of solved my issues! First of all, thanks to the comments, I verified that my ISR was indeed running at 100 Hz. Doing so, I could be sure that the problem was somewhere else, namely in the UART communication.
I found this very helpful: Linux, serial port, non-buffering mode
Apparently, the Serial Monitor provided by the Arduino IDE uses the termios.h library with its default settings. I checked the user manual and switched to the polling-read mode. Quoting from the user manual:
If data is available, read(2) returns immediately, with the lesser of the number of bytes available, or the number of bytes requested. If no data is available, read(2) returns 0.
Hence, I switched back to my serial monitor code and changed the initPort() function adding the following lines of code.
struct termios options;
(...)
options.c_cc[VTIME] = 0;
options.c_cc[VMIN] = 0;
I noticed right away a much higher data frequency in the terminal. I kept the 1 Hz LED blinking in the ISR and there is no period stretching. Moreover, an acquisition of 10 seconds this time gave me roughly 1000 samples per variable, consistent with a sampling rate of 100 Hz.
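For reference, a self-contained version of such an initPort() might look roughly like this (a sketch for a Linux host; the device path, baud rate, and function name are assumptions, and error handling is omitted):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int openSerialPolling(const char *dev)  /* e.g. "/dev/ttyACM0" */
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        struct termios options;
        tcgetattr(fd, &options);
        cfmakeraw(&options);             /* raw mode: no line buffering or translation */
        cfsetispeed(&options, B115200);
        cfsetospeed(&options, B115200);
        options.c_cc[VTIME] = 0;         /* no inter-byte timeout */
        options.c_cc[VMIN]  = 0;         /* read(2) returns immediately, even with no data */
        tcsetattr(fd, TCSANOW, &options);
        return fd;
    }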
On the AVR side, I also changed the way I send data through the UART. Before, I was sending 9 floats like this:
sprintf(buffer, "%f, %f, %f", value1_x, value1_y, value1_z);
serial_print(buffer); // no "\n" sent here
sprintf(buffer, "%f, %f, %f", value2_x, value2_y, value2_z);
serial_print(buffer); // again, no "\n" sent
sprintf(buffer, "%f, %f, %f", roll, pitch, yaw);
serial_println(buffer); // "\n" is sent here once the last data byte is sent
Now, I have replaced all of this with a single call to serial_println(), and I write only 6 floats to the buffer.
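That change might look like this (the six variable names are placeholders; the point is the single sprintf and the single trailing "\n" per sample):

    sprintf(buffer, "%f, %f, %f, %f, %f, %f",
            acc_x, acc_y, acc_z, roll, pitch, yaw);
    serial_println(buffer);  // one complete line per sample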

STM32F429 Timer triggered USART DMA transfer issue

This is my first post at this forum.
I am developing a MIDI sequencer device based on an STM32F429DISCOVERY board running at the stock 180 MHz. In order to send MIDI messages, USART1 is configured for 31250 baud and the appropriate DMA stream is configured to transfer a 3-byte array stored in RAM to the USART. I was testing the evenness of the timing of MIDI message transmission by configuring the Timer 4 update interrupt, within the service routine of which I enable the memory-to-peripheral USART1 DMA operation. This gives me periodic sending of a 3-byte message over the USART1 peripheral.
Everything works great, with correct frequency and correct data, but I have a small issue which I have been researching for a few days now without being able to correct it. To make things clearer, within the timer interrupt routine I make an LED on the Discovery board (PG13) blink momentarily, and I connected channel 1 of an oscilloscope to the LED pin. The second channel of the oscilloscope is connected to the USART TX pin. Now, when the code executes, I can see the LED pulse on the oscilloscope's CH1, followed by the USART serial data on CH2. But for some reason the time between the LED pulse and the beginning of the serial data transfer fluctuates with every transmission. It increments with every transmission, going from around 1 µs to around 30 µs, and then jumps back to 1.
I noticed that if i slightly change the USART baudrate, the time fluctuation between the pulse and the data sending changes in pattern, going faster or slower and with longer or shorter range.
I have tried resetting all the appropriate flags of the USART as well as the DMA, tried disabling/enabling the timer, and played with interrupt priorities, but nothing has worked to get rid of the time fluctuation.
As you can imagine, the stability of this is crucial for a MIDI sequencer hardware application as it bases the timing of the musical events, which must be rock solid.
I have also tried using the USART by itself without DMA, manually sending every byte, with basically the same results. Interrupt-driven USART TX exhibited similar results.
The only thing which seemed to get rid of the time fluctuation of the USART TX response was to deinitialize the USART and DMA modules and reinitialize them before every send operation. This gave stable operation but inserts a long delay between the timer interrupt and the actual sending of the data over the USART, which is unacceptable.
If anyone has any thoughts on this or have done anything similar, I need an advice on where to look at.
Thanks a lot in advance!
Best regards,
Konstantin
Even based on your detailed description, there are various possibilities for errors, so the best I can do is guess:
Maybe just one of the TIM settings is slightly wrong: what about the timer's auto-reload register (TIM4_ARR)?
The period setting must be exactly one unit lower than the desired transmission period divided by the (possibly prescaled) clock period (see the upcounting/downcounting details in the reference manual).
Now, if the reload value were instead equal to that quotient, the second trigger would be late by one tiny period, the third trigger twice as much, and so on (which may look like what you described).
This "ramp of delays" would grow until the accumulated delay reaches one UART bit period (which happens to be 32 µs at 31250 baud, quite near the "around 30 µs" you described). The next trigger would then line up with the neighbouring UART bit cycle (without much delay).
Comparing this hypothesis with your other findings...
Changing the UART baud rate would preserve the fundamental error, but the duration of the irritating delay changes. It can appear to change its sign ("faster or slower"), depending on the beat characteristics between the (actual) TIM period and the UART bit period. => OK
Changing the event processing from DMA to IRQ handler wouldn't change much about the problem but only the "phase" of the initial delay (by the time the CPU needs to execute a different ST library function). => OK
Disabling and re-enabling the UART might have changed the behaviour because the UART clock might re-synchronize with the underlying bus clock (APB2 for USART1), so the delay after the TIM trigger would appear constant, and you wouldn't notice fluctuations. => OK

Relaying UART1 with UART2 within an ISR() (PIC24H)

I am programming a microcontroller of the PIC24H family and using xc16 compiler.
I am relaying U1RX data to U2TX within main(), but when I try that in an ISR it does not work.
I am sending commands to U1RX, and the ISR is shown below. At U2RX, data bytes come in constantly, and I want to relay 500 of them through U1TX. The result is that U1TX relays the first 4 data bytes from U2RX but then re-sends the 4th byte over and over again.
When I copy the for loop below into my main(), it all works properly. In the ISR, it is as if U2RX's FIFO buffer is not being cleared when read, so the buffer overflows and stops accepting further incoming data on U2RX. I would really appreciate it if someone could show me how to approach this problem. The variables tmp and command are globally declared.
void __attribute__((__interrupt__, auto_psv, shadow)) _U1RXInterrupt(void)
{
    command = U1RXREG;
    if (command == 'd') {
        for (i = 0; i < 500; i++) {
            while (U2STAbits.URXDA == 0);  // wait until a byte arrives on U2RX
            tmp = U2RXREG;
            while (U1STAbits.UTXBF == 1);  // wait for room in the U1TX buffer
            U1TXREG = tmp;
        }
    }
}
Edit: I added the first line in the ISR().
Trying to draw an answer from the various comments.
If main() has nothing else to do, and there are no other interrupts, you might be able to "get away with" patching all 500 chars from one UART to the other under interrupt, once the first interrupt has occurred, and perhaps it would be a useful exercise to get that working.
But that's not how you should use an interrupt. If you have other tasks in main(), and equal or lower priority interrupts, the relatively huge time that this interrupt takes (500 chars at 9600 baud = half a second) will make the processor what is known as "interrupt-bound"; that is, the other processes are frozen out.
As your project gains complexity, you won't want to restrict main() to this task, and there is no need for it to be involved at all after setting up the UARTs and IRQs. After that, it can calculate π ad infinitum if you want.
I am a bit perplexed as to your sequence of operations. A command 'd' is received from U1 which tells you to patch 500 chars from U2 to U1.
I suggest one way to tackle this (and there are many), seeing as you really want to use interrupts: wait until the command is received from U1 - in main(). Then configure, and enable, the interrupt for RXD on U2.
Then the job of the ISR will be to receive data from U2 and transmit it thru U1. If both UARTS have the same clock and the same baud rate, there should not be a synchronisation problem, since a UART is typically buffered internally: once it begins to transmit, the TXD register is available to hold another character, so any stagnation in the ISR should be minimal.
I can't write the actual code for you, since it would then be supposed to work, and I don't have a PIC handy (or the wish to research its operational details), but here is some very rough pseudo code:
ISR:
    has been invoked because U2 has a char RXD
    you *might* need to check RXD status as a required sequence to clear the interrupt
    read the RXD register, which also might clear the interrupt status
    if not, specifically clear the interrupt status
    while (U1 TXD busy);
    write char to U1
    if (chars received == 500)
        disable U2 RXD interrupt
    return from interrupt
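Rendered as xc16-style C, that pseudo code might look roughly like this (the flag and enable bit names IFS1bits.U2RXIF and IEC1bits.U2RXIE are my assumption; verify them against the datasheet for your particular PIC24H):

    volatile unsigned int relayed = 0;

    void __attribute__((__interrupt__, auto_psv)) _U2RXInterrupt(void)
    {
        IFS1bits.U2RXIF = 0;          /* clear the U2 RX interrupt flag */
        while (U2STAbits.URXDA)       /* drain every byte waiting in the RX FIFO */
        {
            char c = U2RXREG;
            while (U1STAbits.UTXBF);  /* brief wait for room in the U1 TX buffer */
            U1TXREG = c;
            if (++relayed == 500)
                IEC1bits.U2RXIE = 0;  /* done: stop relaying */
        }
    }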
ISRs must be kept lean and mean, and the code made hyper-efficient, if there is any hope of keeping up with the buffer on a UART. Experiment with the baud rate just to find the point at which your code can keep up; that will help you discover the right heuristic and see how far you are from achieving your goal.
Success could depend on how fast your microcontroller is, as well, and how many tasks it is running. If the microcontroller has a built-in UART, theoretically you should be able to manage to keep the FIFO from overflowing. On the other hand, if you have paired a UART with an insufficiently powerful microcontroller, you might not be able to optimize your way out of the problem.
Besides the suggestion to offload the lower-priority work to the main thread and keep the ISR fast (that someone made in the comments), you will want to carefully look at the timing of all of the lines of code and try every trick in the book to get them to run faster. One expensive instruction can ruin your whole day, so get real creative in finding ways to save time.
EDIT: Another thing to consider - look at the assembly language your C compiler creates. A good compiler should let you inline assembly language instructions to allow you to hyper-optimize for your particular case. Generally in an ISR it would just be a small number of instructions that you have to find and implement.
EDIT 2: A PIC24-series part should be fast enough if you code it right, select a fast oscillator or crystal, and run the chip at a good clock rate. Also consider the divisor the UART might be using to achieve its rate vs. the PIC clock rate. It is conceivable (to me) that an even division that could be accomplished internally via shifting would be better than one where math is required.

Why does a NOP / a few extra lines of code / optimization of pointer aliasing help? [Fujitsu MB90F543 MCU C code]

I am trying to fix a bug found in a mature program for the Fujitsu MB90F543. The program has worked for nearly 10 years, but it was discovered that under some special circumstances it fails to do two things at its very beginning. One of them is crucial.
After low- and high-level initialization (ports, pins, peripherals, IRQ handlers), configuration data is read over SPI from an EEPROM, and status LEDs are turned on for a moment (to light them, data is sent over SPI to a LED driver).
When those special circumstances occur, the first (and only the first) function invoking just a few EEPROM reads fails, and additionally a few of the LEDs that should turn on don't.
The program is written in C and compiled using Softune v30L32.
Surprisingly, it is sufficient to add a single __asm(" NOP ") in the low-level hardware init to make the program work as expected under the mentioned circumstances. Alternatively, it is sufficient to turn off 'Control optimization of pointer aliasing' in the optimization settings. Adding just a few lines of code in various places helps too.
I have compared (DIFFed) ASM listings of compiled program for a version with and without __asm(" NOP ") and with both aforementioned optimizer settings and they all look just fine.
The only warning the Softune compiler has been printing for years during compilation is as follows:
*** W1372L: The section is placed outside the RAM area or the I/O area (IOXTND)
I do realize it's a rather general question, but maybe someone with a bigger picture will be able to point out a possible cause.
Do you have an idea what may cause such weird behaviour? How can I locate the bug and fix it?
During the initialization a few long (about 20 ms) delay loops are used. They don't help, although they were increased from about 2 ms; yet a single NOP in any line of the hardware initialization function, and even before or after the function, helps.
Both of the wait loops work; I have checked this with an oscilloscope (I added a LED turn-on before and turn-off after).
I have tested the timing hypothesis by slowing the SPI clock down from 1 MHz to 500 kHz. It does not change anything. Slowing down to 250 kHz causes watchdog resets, as some parts of the code then execute too long (>25 ms).
One more thing: I have observed that adding local variables in any source file sometimes makes the problem disappear or reappear. The same goes for initializing uninitialized local variables. Adding a few extra lines of code in any of the files helps or reveals the problem.
void main(void)
{
    watchdog_init();
    // waiting for power supply to stabilize
    wait;                 // about 45 ms
    hardware_init();
    clear_watchdog();
    application_init();
    clear_watchdog();
    wait;                 // about 20 ms
    test_LED();
    {...}
}

void hardware_init(void)
{
    __asm("NOP");         // how come this helps? - it may be in any line of the function
    io_init();            // ports initialization
    clk_init();
    timer_init();
    adc_init();
    spi_init();
    LED_init();
    spi_start();
    key_driver_init();
    can_init();
    irq_init();           // set IRQ priorities and global IRQ enable
}
Could be one of many things but two spring to mind.
Timing.
Maybe the wait is not long enough for power to stabilize and not everything is synced to the clock. The NOP gets everything back in sync.
Alignment.
Perhaps the NOP gets your instructions aligned on a 32- or 64-bit boundary expected by the hardware. (We used to do this a lot on mainframe assemblers, as I/O operations often expected things to be on doubleword boundaries.)
The problem was solved. It was caused by a trivial bug.
The EEPROM's nHOLD and nCS signals were not initialized immediately after the MCU's reset, but only just before the first use of the EEPROM. As a result they were 0, i.e. active.
This means the EEPROM was selected but waiting on hold. Meanwhile, another transfer using SPI started. After 6 out of 8 CLK pulses, the EEPROM's nHOLD I/O pin was initialized and brought high. The EEPROM was no longer on hold, so it clocked in the last two bits of data meant for another peripheral. Every subsequent operation on the EEPROM then found its CLK and MOSI out of sync.
When I added the NOP (or anything else), the moment of the nHOLD 0->1 edge shifted to after the last CLK pulse, so CLK and MOSI stayed in sync.
All I had to do was initialize all the EEPROM's SPI lines, in particular nHOLD and nCS, right after the MCU reset.
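The fix might look something like this (a sketch only: the port data/direction register and bit names below are placeholders, since the actual MB90F543 assignments depend on the board wiring):

    void eeprom_pins_init(void)
    {
        /* latch the output values high *before* switching the pins to output,
           so the EEPROM never sees nCS/nHOLD driven low; call this as the
           very first step of hardware_init() */
        PDR_EEPROM |= (EEPROM_nCS_BIT | EEPROM_nHOLD_BIT);  /* output latches high */
        DDR_EEPROM |= (EEPROM_nCS_BIT | EEPROM_nHOLD_BIT);  /* pins -> output */
    }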

Struggling to implement tickless support for FreeRTOS on an xmega256a3

I'm struggling to get tickless support working for our xmega256a3 port of FreeRTOS. Looking around, trying to understand the internals better, I was surprised to see the following line in vTaskStepTick():
configASSERT( xTicksToJump <= xNextTaskUnblockTime );
I don't have configASSERT turned on, but I would think that if I did, that would be raising issues regularly. xTicksToJump is a delta time, but xNextTaskUnblockTime, if I read the code correctly, is an absolute tick time? Did I get that wrong?
My sleep function, patterned after the documentation example looks like this:
static uint16_t TickPeriod;

void sleepXmega(portTickType expectedIdleTime)
{
    TickPeriod = RTC.PER;             // note the period used for ticking on the RTC so we can restore it when we wake up
    cli();                            // no interrupts while we put ourselves to bed
    SLEEP.CTRL = SLEEP_SMODE_PSAVE_gc | SLEEP_SEN_bm;  // enable sleepability
    setRTCforSleep();                 // reconfigure the RTC for sleeping
    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    RTC.COMP = expectedIdleTime - 4;  // set RTC.COMP a little shorter than our idle time; there seems to be about a 4-tick overhead
    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    sei();                            // need the interrupt to wake us
    cpu_sleep();                      // lights out
    cli();                            // disable interrupts while we rub the sleep out of our eyes
    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    SLEEP.CTRL &= ~SLEEP_SEN_bm;      // disable sleep
    vTaskStepTick(RTC.CNT);           // note how long we were really asleep for
    setRTCforTick(TickPeriod);        // repurpose the RTC back to its original use for ISR tick generation
    sei();                            // back to our normal interruptible self
}
If anyone sees an obvious problem there, I would love to hear it. The behaviour it demonstrates is kind of interesting. For testing, I'm running a simple task loop that delays 2000 ms and then toggles a pin I can watch on my scope. Adding some printf's to my function, I see it do the first sleep correctly, but after I exit it, it immediately re-enters, this time with a value near 65535. It dutifully waits that out, then gets the next one correct again, and then wrong (long) again, alternating back and forth.
