Embedded RTOS Stop System - C

I'm learning FreeRTOS on a Cortex-M0 (and, simultaneously, learning the Cortex itself). I've got plenty of experience with 8-bit MCUs.
I'm going through the beginner tutorials on FreeRTOS and I understand setting up basic tasks and the idle daemon.
I realize I don't really understand what FreeRTOS is doing to manage the underlying timing mechanics of the kernel, which leads to one big question...
What is the ideal way to shut down an RTOS when you want to turn your device off? Not idle the device, but put your MCU into the deepest OFF state there is (whatever you want to call it).
It seems trivial to idle between tasks, but how do you shut the MCU off and make sure it stays off, so that the RTOS kernel doesn't trigger an interrupt or something else that wakes the MCU back up?

This is deep sleep mode / power-down mode. For an 8-bit MCU it is described in the ATmega128RFA1 datasheet, page 159 ff. (http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-8266-MCU_Wireless-ATmega128RFA1_Datasheet.pdf), together with the wake-up sources; in this mode all internal timers are disabled.
In FreeRTOS this is named Tickless Idle Mode, cf. https://www.freertos.org/low-power-tickless-rtos.html:
Note: If eTaskConfirmSleepModeStatus() returns eNoTasksWaitingTimeout when it is called from within portSUPPRESS_TICKS_AND_SLEEP() then the microcontroller can remain in a deep sleep state indefinitely. eTaskConfirmSleepModeStatus() will only return eNoTasksWaitingTimeout when the following conditions are true:

Software timers are not being used, so the scheduler is not due to execute a timer callback function at any time in the future.

All the application tasks are either in the Suspended state, or in the Blocked state with an infinite timeout (a timeout value of portMAX_DELAY), so the scheduler is not due to transition a task out of the Blocked state at any fixed time in the future.

To avoid race conditions the RTOS scheduler is suspended before portSUPPRESS_TICKS_AND_SLEEP() is called, and resumed when portSUPPRESS_TICKS_AND_SLEEP() completes. This ensures application tasks cannot execute between the microcontroller exiting its low power state and portSUPPRESS_TICKS_AND_SLEEP() completing its execution. Further, it is necessary for the portSUPPRESS_TICKS_AND_SLEEP() function to create a small critical section between the tick source being stopped and the microcontroller entering the sleep state. eTaskConfirmSleepModeStatus() should be called from this critical section.

All GCC, IAR and Keil ARM Cortex-M3 and ARM Cortex-M4 ports now provide a default portSUPPRESS_TICKS_AND_SLEEP() implementation. Important information on using the ARM Cortex-M implementation is provided on the Low Power Features For ARM Cortex-M MCUs page.
So in FreeRTOS, invoking tickless idle mode is the equivalent of deep sleep / power down; possibly you have to manually disable internal timers on the Cortex as well. (I had some problems powering down the ATmega128RFA1 MCU in Contiki OS, too.)
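To make that concrete on the FreeRTOS side: with configUSE_TICKLESS_IDLE set to 2 the application supplies its own sleep routine through portSUPPRESS_TICKS_AND_SLEEP(). Below is a minimal sketch of such a routine following the structure in the FreeRTOS documentation; the StopTickSource()/EnterDeepestSleep()-style calls are hypothetical placeholders for your MCU-specific code, not real FreeRTOS or CMSIS functions.

    /* In FreeRTOSConfig.h:
     *   #define configUSE_TICKLESS_IDLE           2
     *   #define portSUPPRESS_TICKS_AND_SLEEP( x ) vApplicationSleep( x )
     */
    #include "FreeRTOS.h"
    #include "task.h"

    /* Hypothetical placeholders for MCU-specific code. */
    void StopTickSource( void );
    void RestartTickSource( void );
    void EnterDeepestSleep( void );
    void EnterTimedSleep( TickType_t xTicks );

    void vApplicationSleep( TickType_t xExpectedIdleTime )
    {
        /* Stop the tick source, then open a small critical section so no
         * interrupt can unblock a task between the decision and the sleep. */
        StopTickSource();
        __disable_irq();            /* CMSIS intrinsic, assumed available */

        switch( eTaskConfirmSleepModeStatus() )
        {
            case eAbortSleep:
                /* A context switch is already pending: don't sleep at all. */
                RestartTickSource();
                break;

            case eNoTasksWaitingTimeout:
                /* No timer callbacks and no timed blocks pending: the deepest
                 * power-down is safe; wake only on an external interrupt. */
                EnterDeepestSleep();
                break;

            default: /* eStandardSleep */
                /* Sleep only until the kernel next needs to run, then tell
                 * the scheduler how long the tick was stopped. */
                EnterTimedSleep( xExpectedIdleTime );
                vTaskStepTick( xExpectedIdleTime );
                RestartTickSource();
                break;
        }

        __enable_irq();
    }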

Related

Linked list calling inside interrupts

I am working on a system where I need to achieve (virtually) real-time behavior. I am using non-blocking bare-metal programming and a dsPIC33E microcontroller for this project. Tasks communicate with each other using queues.
I have low, medium and high priority tasks. The high priority task is, for example, emergency shutdown using a tactile switch. The low and medium priority tasks are for communication, and for sensor reading and its processing, respectively. All tasks are checked under RMS (rate monotonic scheduling) and work fine, utilizing 60% of processor time.
The question is that I want to call the low and high priority tasks (linked lists of modules) inside the hardware timer ISR, because the dsPIC33E processor provides hardware-based context switching. But it is often said that an interrupt routine should be as small as possible, and that you should set flags in the ISR and read them in main. If I set these flags and then read them in main (the pattern sketched below), I don't get preemptive behavior.
Can anybody suggest/guide me on whether it's still good to call the linked lists inside the timer routines?
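For reference, the flag-and-poll pattern mentioned above looks roughly like the following minimal sketch, assuming a generic periodic timer interrupt; the names timer_flag, run_medium_tasks() and run_low_tasks() are illustrative, and the XC16-specific ISR attribute for the dsPIC33E is omitted. It also shows why the pattern loses preemption: everything started from main runs at main's priority.

    #include <stdbool.h>

    static volatile bool timer_flag = false;

    /* Illustrative task lists; in the real system these would walk the
     * linked lists of modules. */
    static void run_medium_tasks(void) { /* sensor reading + processing */ }
    static void run_low_tasks(void)    { /* communication */ }

    /* With XC16 this would be declared e.g.
     * void __attribute__((interrupt, no_auto_psv)) _T1Interrupt(void). */
    void TimerISR(void)
    {
        timer_flag = true;      /* keep the ISR short: just note the tick */
        /* clear the hardware timer interrupt flag here */
    }

    int main(void)
    {
        for (;;)
        {
            if (timer_flag)
            {
                timer_flag = false;
                run_medium_tasks();   /* runs at main's priority ...            */
                run_low_tasks();      /* ... so nothing here preempts anything  */
            }
        }
    }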

How to prevent system hang before watchdog timer task kicks in

We are using an ARM AM1808 based embedded system with an RTOS and a file system. We are using the C language. We have a watchdog timer implemented inside the application code, so whenever something goes wrong in the application code, the watchdog timer takes care of the system.
However, we are experiencing an issue where the system hangs before the watchdog timer task starts. The system hangs because the file system code is badly written, with a large number of while loops. Sometimes, due to a bad NAND (or at least the file system code thinks it is bad), the code hangs in a while loop and never gets out of it, and what we get is a dead board.
So, the point of giving all this information is to ask you whether there is any mechanism that could be implemented in the code that runs before the application code. Is there any hardware watchdog? What steps can be taken to make sure we don't get a dead board caused by some while loop?
Professional embedded systems are designed like this:
Pick a MCU with power-on-reset interrupt and on-chip watchdog. This is standard on all modern MCUs.
Implement the below steps from inside the reset interrupt vector.
If the MCU memory is simple to set up, such as just setting the stack pointer, then do that as the very first thing out of reset. This enables C programming. You can usually write the reset ISR in C as long as you don't declare any variables; disassemble to make sure that it doesn't touch any RAM addresses until those are available.
If the memory setup is complex - there is an MMU setup or similar - the C code will have to wait, and you'll have to stick to assembler to prevent accidental stack usage caused by C code.
Set up the most fundamental registers, such as mode/peripheral routing registers, the watchdog and the system clock.
Set up the low-voltage detect hardware, if applicable. Hopefully the out-of-reset state for LVD on the MCU is a sound one.
Application-specific, critical registers such as GPIO direction and internal pull resistor registers should be set from here. Many MCUs have pins as inputs by default, making them vulnerable. If they are not meant to be inputs in the application, the time they are kept as such out of reset should be minimized, to avoid problems with noise, transients and ESD.
Setup the MMU, if applicable.
Everything else "CRT", such as initialization of .data and .bss.
Call main().
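A minimal sketch of that ordering for a Cortex-M style part (where the hardware loads the stack pointer from the vector table before the reset handler runs); the *_Init() calls stand in for vendor-specific register writes and are hypothetical, not a real HAL.

    /* Linker-provided symbols for the .data/.bss initialization. */
    extern unsigned long _sidata, _sdata, _edata, _sbss, _ebss;
    int main(void);

    /* Hypothetical placeholders for vendor-specific setup. */
    void Watchdog_Init(void);
    void Clock_Init(void);
    void LowVoltageDetect_Init(void);
    void CriticalGpio_Init(void);

    void Reset_Handler(void)
    {
        /* Watchdog, clock and LVD first, before the slow .data/.bss copy. */
        Watchdog_Init();
        Clock_Init();
        LowVoltageDetect_Init();

        /* Critical GPIO: stop vulnerable pins floating as early as possible. */
        CriticalGpio_Init();

        /* The usual "CRT" work: copy .data from flash, zero .bss. The locals
         * below live on the stack, which the hardware already set up from the
         * vector table, so no uninitialized RAM is touched. */
        unsigned long *src = &_sidata;
        unsigned long *dst;
        for (dst = &_sdata; dst < &_edata; ) { *dst++ = *src++; }
        for (dst = &_sbss;  dst < &_ebss;  ) { *dst++ = 0; }

        main();
        for (;;) { }    /* main() should never return */
    }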
Please note that the pre-made startup code for your MCU is not necessarily made by professionals! It is fairly common that an amateur-level "CRT" is delivered with your toolchain which fails to set up the watchdog and clock early on. This is of course unacceptable, since:
It makes any program running on that platform a notable safety and quality hazard, in case the "CRT" crashes or hangs for whatever reason.
It makes the initialization of .data and .bss needlessly, painfully slow, as it is then typically executed with the clock running on the default on-chip RC oscillator or similar.
Please note that even industry de facto standard startup code such as ARM CMSIS fails to do some of the MCU-specific hardware setup mentioned above. This may or may not be a problem.
There is a hardware watchdog that can run before the application does. The ARM AM1808 has a timer that can be configured as a watchdog, per the documentation: www.ti.com/lit/ds/symlink/am1808.pdf. So you may wish to set it up that way, at least during the part of the program that runs through the critical and long section. You may also wish to have a piece of boot code that first sets up this watchdog and, after correct initialization, jumps to the application. In fact, this is a very common approach.

ISR vs main: what are the trade offs of running in one or the other?

I know it has to do with time and efficiency, and with ISRs taking time away from other processes, but I am unclear on why. I am always told to keep ISRs very short, and I am a bit confused about why that is.
Normally, ISRs come into play when a hardware device needs to interact with the CPU. The device sends an interrupt signal that makes the CPU leave whatever it was doing to service the interrupt, and that servicing is what the ISR must take care of.
Now, this depends on many factors, the hardware environment and the nature of the interrupt perhaps being the most relevant, but it usually happens that, in order to properly service an interrupt, ISRs run with interrupts disabled so they cannot themselves be interrupted. This means that the CPU cannot be shared among other processes while it is running ISR code, because the system timer interrupt that is used to run the scheduler (the part of the kernel that creates the illusion that the CPU can do several tasks at the same time) won't work.
So, if your ISR takes too much time to perform a certain operation with the device, your system will be affected as a whole, because the percentage of time the CPU is available to the rest of the processes will be less than usual. This was very noticeable on old systems with PIO hard disks, which interrupt the CPU for every disk sector they want to transfer, and the ISR must do the actual transfer. If there is a lot of disk traffic, you may notice things like your mouse moving jerkily (because the interrupt that the mouse device sends to the CPU is not attended to promptly).
OSes like Linux allow ISRs to defer time-consuming operations with hardware devices to tasklets: a sort of kernel thread that can share CPU time with other processes, while keeping the atomic nature of hardware device operations (the OS ensures that there won't be more than one function of a given tasklet - the one associated with the ISR - running in the system at the same time). The PIO transfer from disk to kernel buffers is an example of such an operation.
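A minimal sketch of that top-half/bottom-half split, using the classic (pre-5.9) Linux tasklet API; the device, the IRQ handling details and the names are illustrative, not a real driver.

    #include <linux/interrupt.h>

    /* Bottom half: the time-consuming part, runs later with interrupts
     * enabled, sharing the CPU with everything else. */
    static void my_bottom_half(unsigned long data)
    {
        /* e.g. copy the PIO sector out of the controller's buffer */
    }

    static DECLARE_TASKLET(my_tasklet, my_bottom_half, 0);

    /* Top half: the actual ISR, kept as short as possible. */
    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        /* acknowledge the device, then defer the heavy work */
        tasklet_schedule(&my_tasklet);
        return IRQ_HANDLED;
    }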
Some clarifications with respect to the accepted answer.
Interrupts are not necessarily disabled when running an ISR, and that is not necessarily the reason why the kernel processes all interrupts before returning to threads.
There is the concept of interrupt priorities. An interrupt of higher priority will preempt a running ISR: if the timer interrupt has a higher priority than the running ISR, it will run. However, a kernel will not handle context switches at this time, but rather defers them until all queued/pending ISRs have run.
Also, on some processors (e.g. the ARM Cortex-M3), handling an interrupt is a mode of operation of the processor itself. The processor cannot go back to running threads until it leaves interrupt mode. Once that happens, all interrupts have been fully serviced: you cannot go back to running an ISR.
But the main reason why all ISRs must finish before going back to threads is that kernels do not have the concept of a thread-like running context for ISRs. An ISR thus cannot pend: it must run to completion. An ISR therefore hogs the CPU, except for higher-priority interrupts, until it finishes its purpose.
Usually, the main thread has a lower priority than the ISRs. Depending on the scheduler, the main code will often be executed only after all pending ISRs have run.
Having a lot of computation-intensive code in one or many ISRs is generally not advisable, since it may cause delays or even CPU starvation of lower-priority ISRs or threads, which may be detrimental if time-critical code needs to be executed.
However, when action needs to be taken immediately at an interrupt event, the fastest way is to execute the code from the associated ISR and possibly assign it a high priority, as in the sketch below.
If you plan on using several interrupt sources that execute time-consuming code, the way to go is to use an RTOS to allow safe and efficient interleaving of several threads servicing each of the interrupts.
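On a bare-metal Cortex-M, the priority assignment mentioned above is just a couple of standard CMSIS calls. A minimal sketch, assuming an STM32F10x device header; the IRQ numbers are examples and differ per device.

    #include "stm32f10x.h"      /* assumed CMSIS device header */

    void interrupt_priorities_init(void)
    {
        /* On Cortex-M, a lower number means a more urgent interrupt. */
        NVIC_SetPriority(EXTI0_IRQn, 0);   /* time-critical pin interrupt */
        NVIC_SetPriority(TIM2_IRQn,  3);   /* can be preempted by EXTI0   */

        NVIC_EnableIRQ(EXTI0_IRQn);
        NVIC_EnableIRQ(TIM2_IRQn);
    }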

Interrupt service routine for watchdog timer on STM32 Discovery

I recently bought an STM32 Value line Discovery kit to work with STM32 devices. I'm working on a project now which requires a watchdog (it's called the IWDG on the STM32). But my problem is that I need an ISR when the watchdog is triggered.
Does anyone know how to implement this (or even have an example)?
You don't want a watchdog, since the whole purpose of the watchdog is to force a reset if the software has hung.
What you're after sounds more like simply a high-priority regular timer interrupt to me.
Set it up so that you restart the timer (pushing the interrupt event generation forwards in time) at regular intervals, so that the interrupt typically doesn't happen.
There are two watchdogs (at least on the STM32F10x):
The IWDG, which is independent and resets the STM32 without an ISR.
The WWDG (window watchdog), which has an ISR (the early wakeup interrupt) that fires one tick before it resets the STM32.
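A minimal sketch of the WWDG early-wakeup interrupt on the STM32F10x (the register and bit names come from the CMSIS device header; the prescaler, window and counter values are illustrative):

    #include "stm32f10x.h"

    void wwdg_start_with_early_wakeup(void)
    {
        RCC->APB1ENR |= RCC_APB1ENR_WWDGEN;     /* clock the WWDG          */

        WWDG->CFR = WWDG_CFR_WDGTB              /* slowest prescaler       */
                  | WWDG_CFR_W                  /* window fully open       */
                  | WWDG_CFR_EWI;               /* enable early wakeup IRQ */
        WWDG->CR  = WWDG_CR_WDGA | WWDG_CR_T;   /* start, counter = 0x7F   */

        NVIC_EnableIRQ(WWDG_IRQn);
    }

    /* Fires roughly one WWDG tick before the reset would occur. */
    void WWDG_IRQHandler(void)
    {
        WWDG->SR = 0;                           /* clear the EWIF flag     */
        /* last-chance code (logging, safe-state outputs) goes here; reload
         * WWDG->CR with the counter value to actually prevent the reset   */
    }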

What is Rescheduling interrupts (RES)? What causes it? How is it handled in Linux kernel?

What is the difference between "RES: Rescheduling interrupts" and "LOC: Local timer interrupts"? What is responsible for firing the RES interrupt? Is LOC the same as the general timer interrupt that is generated by the timer hardware in the processor?
Also, please give some clarity on what part of the scheduler is invoked during the timer interrupt and the RES interrupt, and how this happens in the Linux kernel.
Thanks in advance.
Rescheduling interrupts are the Linux kernel's way of notifying another CPU core that it should schedule a thread.
On SMP systems, this is done by the scheduler to spread the load across multiple CPU cores.
The scheduler tries to spread processor activity across as many cores as possible. The general rule of thumb is that it is preferable to have processes running on all the cores at lower power (lower clock frequencies) rather than have one core really busy running at full speed while the other cores are sleeping.
Rescheduling interrupts are implemented using Inter-Processor Interrupts (IPIs). For more details, check out this article on Rescheduling Interrupts on Linux.
Local timer interrupts are raised by the local APIC for a specific CPU core. Only that CPU core receives these interrupts and handles them. For a brief description of their various advantages, check out this answer.
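Both counters can be watched from user space: they show up as the per-CPU "RES:" and "LOC:" rows of /proc/interrupts. A small sketch that just prints those rows:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/interrupts", "r");
        char line[1024];

        if (f == NULL)
            return 1;

        while (fgets(line, sizeof line, f) != NULL) {
            /* one column per CPU: rescheduling IPIs and local timer ticks */
            if (strstr(line, "RES:") != NULL || strstr(line, "LOC:") != NULL)
                fputs(line, stdout);
        }

        fclose(f);
        return 0;
    }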
