I'm having a problem with a simple periodic Clock function running on TI's AM572x EVM (main A15 core). The Clock is set up to use any available timer (Timer_ANY), which I assume is either TIMER2 or TIMER10, as I've seen them associated with A15_0 in a GEL script. When I pause and resume emulation with the XDS200 debugger (with CCS9), I see the Clock Swi executing many more times than it should, preceded by the Hwi posting the Clock Swi, as shown here:
Many Hwi executions following unpause
Many Swi executions following many Hwi executions
I've checked the TIOCP_CFG EMUFREE bits for TIMER2 and TIMER10 and they're set to 0, indicating that the timers should be frozen in emulation mode. However, I'm also always seeing 0 in the TCRR register for both timers, which I understand to be the counter register. This suggests that these timers aren't actually counting at all, and that a different timer is being used for the TI-RTOS Clock, but I'm not sure what timer that would be or how I would configure it to freeze during a debugger pause.
Does anyone have any insight into how to properly freeze TI-RTOS Clocks while debugging?
Related
I'm working with an EFM microcontroller (Silicon Labs).
I need to make a beep every x seconds while the device is in EM3 mode.
I tried so many ways without success.
Please help me with a code example (I'm a HW man, not a SW man, haha).
Thanks,
Gal.
Refer to page 8 of the datasheet (thanks to @user694733)
EM3 mode description:
still full CPU and RAM retention, as well as Power-on Reset, Pin reset and Brown-out Detection, with a consumption of only 0.6 μA. The low-power ACMP, asynchronous external interrupt, PCNT, and I2C can wake-up the device.
So these are your options. One of these things can wake up the microcontroller. All of them are external inputs. So the microcontroller cannot wake itself up in this mode. This makes sense because all clocks are stopped.
If you had an outside clock connected to the PCNT you could use that to wake it up.
If you want the microcontroller to wake itself up, then you need EM2 mode or less:
In EM2 the high frequency oscillator is turned off, but with the 32.768 kHz
oscillator running, selected low energy peripherals (LCD, RTC, LETIMER,
PCNT, LEUART, I2C, WDOG and ACMP) are still available
In EM2 mode the microcontroller may wake itself up using the RTC (real-time clock), LETIMER (low-energy timer), WDOG (watchdog timer) or PCNT (pulse counter, which can be set to count pulses of the 32.768kHz clock).
The datasheet recommends using the Real-Time Clock or Low Energy Timer (RTC or LETIMER) modules.
... however, if we pay attention, we see the datasheet mentions something called the ULFRCO, Ultra-Low-Frequency RC Oscillator, which runs at approximately 1000 Hz. By searching for the keyword ULFRCO, we see that it does still run in EM3 mode, and it can be used as input for the WDOG. On page 89 we see this listed as a feature of EM3 mode.
So, you may configure the WDOG to reset the system after a few seconds. When the microcontroller resets due to watchdog timeout, it wakes up. You should not be afraid of using a system reset. The RMU_RSTCAUSE register allows you to see that the system was reset because of the watchdog timer (not because it was first turned on or the reset pin was used). Memory contents are probably still there, but all peripherals are reset. As long as you can deal with peripherals being reset, you can probably make this work. You might even be able to use a little bit of assembly programming to jump back to exactly the point where the program left off.
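For example, a rough sketch of that approach using Silicon Labs' emlib (assuming a series 0 Gecko part; beep(), the CORELE clock enable and the ~2 second period are placeholders you would adapt to your board):

#include "em_chip.h"
#include "em_cmu.h"
#include "em_emu.h"
#include "em_rmu.h"
#include "em_wdog.h"

void beep(void);   /* placeholder for your own buzzer/GPIO code */

int main(void)
{
    CHIP_Init();

    /* If the watchdog caused this reset, we have just "woken up" from EM3. */
    if (RMU_ResetCauseGet() & RMU_RSTCAUSE_WDOGRST) {
        beep();
    }
    RMU_ResetCauseClear();

    CMU_ClockEnable(cmuClock_CORELE, true);   /* low-energy peripheral clock */

    /* Clock the watchdog from the ULFRCO (~1 kHz), which keeps running in
       EM3; wdogPeriod_2k gives a timeout of roughly 2 seconds. */
    WDOG_Init_TypeDef wdogInit = WDOG_INIT_DEFAULT;
    wdogInit.em3Run = true;                /* keep counting in EM3 */
    wdogInit.clkSel = wdogClkSelULFRCO;
    wdogInit.perSel = wdogPeriod_2k;
    WDOG_Init(&wdogInit);

    /* Sleep; the watchdog timeout resets the chip and main() runs again. */
    EMU_EnterEM3(true);

    while (1) {
        /* not reached */
    }
}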
I have been reading about interrupts in embedded systems and I came across this.
In Normal Mode, the timer triggers interrupt handlers. These can do
practically any function you want, but they run on the CPU, which
prevents anything else from running at the same time. In CTC mode, you
can also trigger interrupts, but it is also possible to not use
interrupts and still toggle an output pin. Using it this way, the
functionality occurs parallel to the CPU and doesn't interrupt
anything.
So I have the following doubts:
What is meant by toggling the output pin in CTC mode? Does it mean that the processes are running in parallel? That would imply that both the main loop and the interrupt function run in parallel? I am not sure about this.
Is it safe to assume that a timer counts more in CTC mode as it is resetting the timer register each time it matches with the compare register?
The hardware circuitry that constitutes the timer peripheral within the microcontroller is able to perform a comparison and toggle an output in CTC mode. This logic is performed in hardware, without relying on the CPU to execute software instructions. Therefore, the CTC mode compare and toggle occurs in parallel with whatever the CPU happens to be executing.
I don't understand what you mean by the timer "counts more". More as in more often or faster rate? More as in greater total counts? Regardless, I think the answer is no. The timer counts at the rate of the input clock that is driving it. In CTC mode the timer counts up to the comparison value that you have configured it for.
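For example, on an AVR (assuming an ATmega328P at 16 MHz), Timer1 in CTC mode can toggle the OC1A pin purely in hardware, with no ISR involved:

#include <avr/io.h>

/* Minimal sketch: Timer1 in CTC mode toggles OC1A (pin PB1) in hardware on
   every compare match. Once configured, no ISR and no CPU time is needed to
   keep the pin toggling. */
void ctc_toggle_init(void)
{
    DDRB  |= (1 << DDB1);                /* OC1A (PB1) as output            */
    TCCR1A = (1 << COM1A0);              /* toggle OC1A on compare match    */
    TCCR1B = (1 << WGM12)                /* CTC mode, TOP = OCR1A           */
           | (1 << CS11) | (1 << CS10);  /* clk/64 prescaler (250 kHz)      */
    OCR1A  = 2499;                       /* match every 10 ms -> 50 Hz wave */
}

After ctc_toggle_init() runs, the main loop can do anything else (or nothing at all); the pin keeps toggling regardless, which is exactly the "parallel to the CPU" behaviour the quote describes.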
I am working with an STM8 (not my code, but I'm maintaining it) and it uses a timer. Apparently the clock is set at 16 MHz, so one tick is 0.0625 µs. The timer settings are ARRH=0x03, ARRL=0x20, therefore (0x0320 = 800) it resets at 800 counts (ergo 50 µs).
PSCR is set to 0, so the timer runs at the same frequency as the micro.
Anyway, when I check this with an oscilloscope, the readings are not what I expect.
The timer interrupt is called at:
56 µs, 54 µs, 54 µs, 52 µs, 52 µs, 52 µs, 38 µs (!!!), 42 µs (?), 50 µs, 50 µs, ...
Curiously, summed up it gives 500 µs, so it does add up to 10 times 50 µs.
For the first 8 timer interrupts some A/D conversion is happening, so there is the possibility that an A/D interrupt is being called in between too.
1) Do you think this is affecting the frequency of the timer?
2) Why does it "correct" itself by firing an interrupt at 38 µs??
I would appreciate any comment based on your embedded or STM8 experience, since I know precise answers would need to examine the code...
I'm not sure if you still need an answer. I once had the same and searched for a long time... simple solution in my case:
I had an ADC ISR with high jitter. That came from my main loop: in some sub-sub-sub routine the ADC interrupt was temporarily deactivated for a critical section (data transfer between interrupt and main loop context). The effect is exactly what you describe:
Sometimes the time between two interrupts is longer, because the interrupt is pending and waiting for execution until the interrupt is enabled again. The timer continues to run regardless. Timing example:
interrupt is disabled in main loop (or sub routine)
interrupt flag is set by timer -> interrupt pending
interrupt is enabled again -> ISR is executed too late
interrupt is disabled in main loop
interrupt flag is set by timer -> pending
interrupt is enabled again -> ISR is executed much too late
main loop does NOT disable the interrupt in some case (maybe due to control flow, maybe a timing issue)
The next interrupt is executed at the right time, which is 50 µs after the last interrupt was raised, NOT 50 µs after the last ISR was called. --> the time between ISR calls is shortened.
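In code, the pattern looks something like this (the function names are placeholders for whatever your project uses to mask just the timer interrupt):

#include <stdint.h>

/* Placeholders for the real register/vendor calls that mask and unmask
   only the timer compare interrupt. */
extern void disable_timer_interrupt(void);
extern void enable_timer_interrupt(void);
extern void process(uint16_t value);

volatile uint16_t shared_sample;    /* written by the timer ISR */

void main_loop_copy_sample(void)
{
    uint16_t local;

    disable_timer_interrupt();  /* critical section: from here on the timer
                                   ISR can only become pending, not run     */
    local = shared_sample;      /* consistent copy of ISR-written data      */
    enable_timer_interrupt();   /* a pending ISR now runs late; the counter
                                   kept running, so the next ISR looks early */
    process(local);
}

Keeping such critical sections as short as possible (or reading the shared data with a single atomic access instead) removes most of this jitter.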
I hope I could help.
I'm working on an embedded project that's running on an ARM Cortex-M3 based microcontroller. Some code provided by our vendor uses a delay function that sets up a built-in hardware timer and then spins until the timer expires. Typically this is used to wait between 1 and a couple hundred microseconds. These delays are almost always there because the code is waiting on some register, chip or bus to complete an action and needs to wait at least the given number of microseconds. The hardware timer also appears to cost at least 6 microseconds of overhead to set up.
In a multithreaded environment this is a problem because there are N threads but only 1 hardware timer. I could disable interrupts while the timer is being used to prevent context switches and thus race conditions, but it seems a bit ugly. I am thinking of replacing the function that uses the hardware timer with a function that uses the ARM CPU Cycle Counter (CCNT). Are there any pitfalls I am missing, or other alternatives? Obviously the cycle counter function requires it to be tuned to the proper CPU frequency, which will never change for our system, but I suppose it could be detected at boot programmatically using the hardware timer.
Set up the timer once at startup and let the counter run continuously. When you want to start a delay, read the counter value and remember this start value. Then in the delay loop read the counter value again and loop until the counter value minus the start value is greater than or equal to the requested delay ticks. (If you do the subtraction correctly then rollovers will wash out and you don't need special handling to check for them.)
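For example (hw_counter_read() is a placeholder for however you read your free-running counter):

#include <stdint.h>

/* Placeholder: read the free-running 32-bit counter your timer exposes. */
extern uint32_t hw_counter_read(void);

/* Busy-wait for 'ticks' counter ticks. Because the subtraction is done in
   unsigned arithmetic (modulo 2^32), a counter rollover during the delay
   "washes out" and needs no special handling. */
void delay_ticks(uint32_t ticks)
{
    uint32_t start = hw_counter_read();
    while ((uint32_t)(hw_counter_read() - start) < ticks) {
        /* spin */
    }
}

The cast keeps the comparison in unsigned 32-bit arithmetic, which is exactly what makes the rollover disappear.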
You could multiplex your timer such that you have a table of when each thread wants to fire off and a function pointer / vector for execution. When the timer interrupt occurs, fire off that thread's interrupt and then set the timer to the next one in the list, minus elapsed time. This is what I see many *nix operating systems do in their kernel code, so there should be code to pull from as an example.
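A rough sketch of such a table (hw_timer_set_compare() stands in for however you program your timer's compare register, and the callbacks stand in for waking the right thread):

#include <stddef.h>
#include <stdint.h>

#define MAX_SOFT_TIMERS 8

struct soft_timer {
    uint32_t expires_at;     /* absolute tick at which this entry fires */
    void (*callback)(void);  /* what to run, or which thread to wake    */
    int active;
};

static struct soft_timer table[MAX_SOFT_TIMERS];
static uint32_t now;         /* advanced only inside the timer ISR      */

/* Placeholder: program the single hardware timer's compare register. */
extern void hw_timer_set_compare(uint32_t ticks_from_now);

/* Called from the one hardware timer interrupt. Fire everything that is
   due, then reprogram the compare for the nearest remaining deadline
   ("minus elapsed time", since deadlines are stored as absolute ticks). */
void soft_timer_isr(uint32_t elapsed_ticks)
{
    now += elapsed_ticks;

    uint32_t next = UINT32_MAX;
    for (size_t i = 0; i < MAX_SOFT_TIMERS; i++) {
        if (!table[i].active)
            continue;
        if ((int32_t)(table[i].expires_at - now) <= 0) {
            table[i].active = 0;
            table[i].callback();
        } else if (table[i].expires_at - now < next) {
            next = table[i].expires_at - now;
        }
    }
    if (next != UINT32_MAX)
        hw_timer_set_compare(next);
}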
A bigger concern is the fact that you are spin-locking the thread while waiting for the timer. Besides CPU usage, and depending on what OS you have (or if you have an OS), you could easily introduce priority inversion issues or even full-on lock-ups. It might be better to use thread primitives instead so that any OS can actually sleep your threads and wake them when needed.
How can I estimate the IRQ latency on an ARM processor?
What is the definition for irq latency?
Interrupt Request (IRQ) latency is the time it takes for an interrupt request to travel from the source of the interrupt to the point where it is serviced.
Because different interrupts come from different sources via different paths, their latency obviously depends on the type of the interrupt. You can find a table with very good explanations of latency (both values and causes) for particular interrupts on ARM's site.
You can find more information about it in the ARM9E-S Core Technical Reference Manual:
4.3 Maximum interrupt latency
If the sampled signal is asserted at the same time as a multicycle instruction has started
its second or later cycle of execution, the interrupt exception entry does not start until
the instruction has completed.
The longest LDM instruction is one that loads all of the registers, including the PC.
Counting the first Execute cycle as 1, the LDM takes 16 cycles.
• The last word to be transferred by the LDM is transferred in cycle 17, and the abort
status for the transfer is returned in this cycle.
• If a Data Abort happens, the processor detects this in cycle 18 and prepares for
the Data Abort exception entry in cycle 19.
• Cycles 20 and 21 are the Fetch and Decode stages of the Data Abort entry
respectively.
• During cycle 22, the processor prepares for FIQ entry, issuing Fetch and Decode
cycles in cycles 23 and 24.
• Therefore, the first instruction in the FIQ routine enters the Execute stage of the
pipeline in stage 25, giving a worst-case latency of 24 cycles.
and
Minimum interrupt latency
The minimum latency for FIQ or IRQ is the shortest time the request can be sampled
by the input register (one cycle), plus the exception entry time (three cycles). The first
interrupt instruction enters the Execute pipeline stage four cycles after the interrupt is
asserted
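To put those cycle counts into time, assume (say) a 200 MHz ARM9E-S core: the worst case of 24 cycles is 24 / 200 MHz = 120 ns, and the minimum of 4 cycles is 20 ns, before any wait states, cache refills or OS-level interrupt masking are added on top.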
There are three parts to interrupt latency:
1. The interrupt controller picking up the interrupt itself. Modern processors tend to do this quite quickly, but there is still some time between the device signalling its pin [or whatever the method of signalling interrupts is] and the interrupt controller picking it up - even if it's only 1 ns, it's time.
2. The time until the processor starts executing the interrupt code itself.
3. The time until the actual code supposed to deal with the interrupt is running - that is, after the processor has figured out which interrupt it is, and which portion of driver code or similar should deal with it.
Normally, the operating system won't have any influence over 1.
The operating system certainly influences 2. For example, an operating system will sometimes disable interrupts [to avoid an interrupt interfering with some critical operation, such as modifying something to do with interrupt handling, or scheduling a new task, or even executing in an interrupt handler]. Some operating systems may disable interrupts for several milliseconds, where a good realtime OS will not have interrupts disabled for more than microseconds at the most.
And of course, the time from when the first instruction in the interrupt handler runs until the actual driver code or similar is running can be quite a few instructions, and the operating system is responsible for all of them.
For real-time behaviour it's often the "worst case" that matters, whereas in non-real-time OSes the overall execution time is much more important. So if it's quicker not to enable interrupts for a few hundred instructions, because that saves several instructions of "enable interrupts, then disable interrupts", a Linux or Windows type OS may well choose to do so.
Mats and Nemanja give some good information on interrupt latency. There are two more issues I would add to the three given by Mats.
1. Other simultaneous/near-simultaneous interrupts.
2. OS latency added due to masking interrupts. Edit: this is in Mats' answer, just not explained as much.
If a single core is processing interrupts, then when multiple interrupts occur at the same time, usually there is some resolution priority. However, interrupts are often disabled in the interrupt handler unless priority interrupt handling is enabled. So if, for example, a slow NAND flash IRQ is signaled and running and then an Ethernet interrupt occurs, the Ethernet interrupt may be delayed until the NAND flash IRQ finishes. Of course, if you have priority interrupts and you are concerned about the NAND flash interrupt, then things can actually be worse, if the Ethernet is given priority.
The second issue is when mainline code clears/sets the interrupt flag. Typically this is done with something like,
mrs    r9, cpsr                @ read the current program status register
biceq  r9, r9, #PSR_I_BIT      @ if the condition holds, clear the IRQ-disable bit in the copy (a later msr writes it back)
Check arch/arm/include/asm/irqflags.h in the Linux source for many macros used by main line code. A typical sequence is like this,
lock interrupts;
manipulate some flag in struct;
unlock interrupts;
A very large interrupt latency can be introduced if touching that struct results in a page fault. The interrupts will be masked for the duration of the page fault handler.
The Cortex-A9 has lock-free instructions (better than the old swp/swpb) that can prevent this by never masking interrupts at all. This second issue is much like the IRQ latency due to ldm/stm type instructions (these are just the longest instructions to run).
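For example, a counter shared between mainline code and an interrupt handler can be updated with the compiler's atomic builtins, which compile down to an LDREX/STREX retry loop on ARMv7, so there is no need to mask IRQs around the update (an illustrative sketch, not taken from the Linux sources):

#include <stdint.h>

/* Shared between mainline code and an interrupt handler. */
static uint32_t event_count;

/* Lock-free update from mainline code: on a Cortex-A9 the compiler emits
   an LDREX/STREX retry loop, so IRQs stay enabled throughout. */
void mainline_add_events(uint32_t n)
{
    __atomic_fetch_add(&event_count, n, __ATOMIC_RELAXED);
}

/* The interrupt handler can update the same counter the same way. */
void some_irq_handler(void)
{
    __atomic_fetch_add(&event_count, 1, __ATOMIC_RELAXED);
}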
Finally, a lot of the technical discussions will assume zero-wait-state RAM. It is likely that the cache will need to be filled, and if you know your memory data rate (maybe 2-4 machine cycles per access), then the worst-case code path would multiply by this.
Whether you have SMP interrupt handling, priority interrupts, and lock free main line depends on your kernel configuration and version; these are issues for the OS. Other issues are intrinsic to the CPU/SOC interrupt controller, and to the interrupt code itself.