Where does finite-state machine code belong in a µC?

I asked this question on an EE forum. You guys on StackOverflow know more about coding than we do on EE, so maybe you can give more detailed information about this :)
When I learned about microcontrollers, teachers taught me to always end the code with while(1); with no code inside that loop.
This was to be sure that the software gets "stuck", leaving only the interrupts running. When I asked them if it was possible to put some code in this infinite loop, they told me it was a bad idea. Knowing that, I now try my best to keep this loop empty.
I now need to implement a finite state machine in a microcontroller. At first view, it seems that this code belongs in that loop. That would make coding easier.
Is that a good idea? What are the pros and cons?
This is what I plan to do:
void main(void)
{
    // init phase
    while(1)
    {
        switch(current_State)
        {
        case 1:
            if(...)
            {
                current_State = 2;
            }
            else if(...)
            {
                current_State = 3;
            }
            else
            {
                current_State = 4;
            }
            break;
        case 2:
            if(...)
            {
                current_State = 3;
            }
            else if(...)
            {
                current_State = 1;
            }
            else
            {
                current_State = 5;
            }
            break;
        }
    }
}
Instead of:
void main(void)
{
    // init phase
    while(1);
}
And manage the FSM with interrupts.

It is like saying all functions should return in one place, or other such habits. There is one type of design where you might want to do this, one that is purely interrupt/event based. There are products that go completely the other way, polled and not event driven. And anything in between.
What matters is doing your system engineering, that's it, end of story. Interrupts add complication and risk; they have a higher price than not using them. Automatically making any design interrupt driven is automatically a bad decision; it simply means there was no effort put into the design, the requirements, the risks, etc.
Ideally you want most of your code in the main loop; you want your interrupts lean and mean in order to keep the latency down for other time-critical tasks. Not all MCUs have a complicated interrupt priority system that would allow you to burn a lot of time or have all of your application in handlers. These are inputs into your system engineering and may help you choose the MCU, but here again you are adding risk.
You have to ask yourself what the tasks are that your MCU has to do; what latency, if any, there is for each task from when an event happens until it has to start responding and until it has to finish; and, per event/task, what portion, if any, can be deferred. Can the task be interrupted while it is being done? Can there be a gap in time? All the questions you would ask for a hardware, CPLD, or FPGA design, except that there you have real parallelism.
What you are likely to end up with in real-world solutions is some portion in interrupt handlers and some portion in the main (infinite) loop, with the main loop polling breadcrumbs left by the interrupts and/or directly polling status registers to know what to do during the loop. If/when you get to where you need to be real time, you can still use the main super loop; your real-time response comes from the possible paths through the loop and the worst-case time for any of those paths.
Most of the time you are not going to need to do this much work. Maybe some interrupts, maybe some polling, and a main loop doing some percentage of the work.
As you should know from the EE world, if a teacher/other says there is one and only one way to do something and everything else is by definition wrong... time to find a new teacher and/or pretend to drink the kool-aid, pass the class and move on with your life. Also note that the classroom experience is not the real world. There are so many things that can go wrong with MCU development that you are really in a controlled sandbox with ideally only a few variables you can play with, so that you don't have to spend years trying to get through a few-month class. Some percentage of the rules they state in class are there to get you through the class and/or to get the teacher through the class; it is easier to grade papers if you tell folks a function can't be bigger than X, or no gotos, or whatever. The first thing you should do when the class is over, or add to your lifetime bucket list, is to question all of these rules. Research and try on your own, fall into the traps and dig out.

When doing embedded programming, one commonly used idiom is to use a "super loop" - an infinite loop that begins after initialization is complete that dispatches the separate components of your program as they need to run. Under this paradigm, you could run the finite state machine within the super loop as you're suggesting, and continue to run the hardware management functions from the interrupt context as it sounds like you're already doing. One of the disadvantages to doing this is that your processor will always be in a high power draw state - since you're always running that loop, the processor can never go to sleep. This would actually also be a problem in any of the code you had written however - even an empty infinite while loop will keep the processor running. The solution to this is usually to end your while loop with a series of instructions to put the processor into a low power state (completely architecture dependent) that will wake it when an interrupt comes through to be processed. If there are things happening in the FSM that are not driven by any interrupts, a normally used approach to keep the processor waking up at periodic intervals is to initialize a timer to interrupt on a regular basis to cause your main loop to continue execution.
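As a minimal sketch of such a super loop (the helper names are placeholders; the actual low-power call, e.g. __WFI() on a Cortex-M, is entirely architecture dependent):
#include <stdint.h>

/* Hypothetical helpers for illustration only. */
extern void run_state_machine(void);    /* the switch(current_State) dispatch        */
extern void enter_low_power_wait(void); /* e.g. __WFI() on Cortex-M; device specific */

void main(void)
{
    /* init phase: clocks, peripherals, a periodic timer interrupt, ... */
    while(1)
    {
        run_state_machine();        /* one pass of the FSM and other handlers          */
        enter_low_power_wait();     /* sleep until the next interrupt resumes the loop */
    }
}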
One other thing to note, if you were previously executing all of your code from the interrupt context - interrupt service routines (ISRs) really should be as short as possible, because they literally "interrupt" the main execution of the program, which may cause unintended side effects if they take too long. A normal way to handle this is to have handlers in your super loop that are just signalled to by the ISR, so that the bulk of whatever processing that needs to be done is done in the main context when there is time, rather than interrupting a potentially time critical section of your main context.
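A common shape for that hand-off, as a sketch (the names are made up; the key point is that the ISR only records the event and the main loop does the work):
#include <stdbool.h>

static volatile bool rx_ready = false;  /* "breadcrumb" shared between ISR and main loop */

extern void handle_uart_data(void);     /* hypothetical: the heavier processing */

void UART_RX_ISR(void)                  /* keep the ISR short: just record the event */
{
    rx_ready = true;
}

void main(void)
{
    /* init phase */
    while(1)
    {
        if(rx_ready)
        {
            rx_ready = false;
            handle_uart_data();         /* bulk of the work runs in the main context */
        }
        /* ...other handlers, FSM dispatch, optional low-power wait... */
    }
}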

What you should implement is your choice, and depends on how easy you want debugging your code to be.
There are times when it will be right to use the while(1); statement at the end of the code, if your µC handles everything in interrupts (ISRs). In other applications the µC will run code inside an infinite loop (the polling method):
while(1)
{
    //code here;
}
And in some other applications, you might mix the ISR method with the polling method.
Regarding ease of debugging: using only ISR methods (putting the while(1); statement at the end) will give you a hard time debugging your code, since when an interrupt event triggers, the debugger of choice will not give you step-by-step event and register reading to follow. Also, please note that writing completely ISR-driven code is not recommended, since ISRs should do minimal work (such as incrementing a counter or raising/clearing a flag) and exit swiftly.

It belongs in one thread that executes it in response to input messages from a producer-consumer queue. All the interrupts etc. fire input to the queue and the thread processes them through its FSM serially.
It's the only way I've found to avoid undebuggable messes whilst retaining the low latency and efficient CPU use of interrupt-driven I/O.
'while(1);' UGH!
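A sketch of that pattern (the event names, queue size and dispatch function are invented for illustration; if more than one ISR can push, guard the push with a brief interrupt disable):
#include <stdint.h>
#include <stdbool.h>

#define QUEUE_SIZE 16u                      /* illustrative size */

typedef enum { EV_NONE, EV_BUTTON, EV_UART_BYTE, EV_TIMER } event_t;

static volatile event_t queue[QUEUE_SIZE];
static volatile uint8_t head = 0, tail = 0;

extern void fsm_dispatch(event_t ev);       /* hypothetical: the one FSM */

static bool queue_push(event_t ev)          /* called from the ISRs */
{
    uint8_t next = (uint8_t)((head + 1u) % QUEUE_SIZE);
    if(next == tail)
        return false;                       /* full: drop or count the overrun */
    queue[head] = ev;
    head = next;
    return true;
}

static event_t queue_pop(void)              /* called only from the FSM thread/loop */
{
    if(head == tail)
        return EV_NONE;
    event_t ev = queue[tail];
    tail = (uint8_t)((tail + 1u) % QUEUE_SIZE);
    return ev;
}

void fsm_thread(void)
{
    while(1)
    {
        event_t ev = queue_pop();
        if(ev != EV_NONE)
            fsm_dispatch(ev);               /* events are processed serially by one consumer */
        /* else: block on a semaphore / sleep until the next interrupt */
    }
}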


Making driver library for a slow module, watchdog friendly

Context
I'm making some libraries to manage internet protocol through GPRS. Some parts of this communication (done through UART) are rather slow (some can take more than 30 seconds) because the module has to connect through GPRS.
First I made a driver library to control the module and manage TCP/IP connections. This library worked with blocking functions; for example, a function like Init_GPRS_connection() could take several seconds to finish. I have been made to notice that this is bad practice, because now I have to implement a watchdog timer, and this kind of function is not friendly with the short timeouts watchdogs have (I cannot kick the timer before it expires).
What I have thought
I need to rewrite part of my libraries to be watchdog friendly. For this purpose I have thought of this scheme: I need functions that have a state machine inside; these will poll data acquired through UART interrupts to advance through the state machine, so that I can write code like:
GPRS_typef Init_GPRS_connection(){
    switch(state){ //state would be a global variable holding the current state of the state machine
        .... //here would be all the states of the state machine
        case end:
            state = 0;
            return Done;
    }
}

while(Init_GPRS_connection() != Done){
    Do_stuff(); //Like kick the Watchdog
}
But I see a few problems with this solution:
This is a less user-friendly implementation; the user has to be careful using this driver library because extra lines of code will always be necessary (kind of defeating the purpose of using functions).
If, for some reason, the module doesn't answer at some point, the code would get stuck in the state machine, because the watchdog is kicked outside this function even though the code is stuck in a loop; this kind of defeats the purpose of using watchdog timers.
My question
What kind of implementation should I use to make a user- and watchdog-friendly driver library? How do other driver libraries manage this?
Extra information
All this in the context of embedded systems
I would like to implement the watchdog kicking action outside the driver's functions
Given where you are, and assuming you do not want too much upheaval to your project to "do it properly", what you might do is add a variable watchdog timeout extension, such that you set a counter that is decremented in a timer interrupt, and if the counter is not zero, the watchdog is reset.
That way you are not allowing the timer interrupt to reset the watchdog indefinitely while your main thread is stuck, but you can extend the watchdog immediately before executing any blocking code, essentially setting a timeout for that operation.
So you might have (pseudocode):
static volatile uint8_t wdg_repeat_count = 0 ;
void extendWatchdog( uint8_t repeat ) { wdg_repeat_count = repeat ; }
void timerISR( void )
{
if( wdg_repeat_count > 0 )
{
resetWatchdog() ;
wdg_repeat_count-- ;
}
}
Then you can either:
extendWatchdog( CONNECTION_INIT_WDG_TIMEOUT ) ;
while(Init_GPRS_connection() != Done){
Do_stuff(); //Like kick the Watchdog
}
or continue to use your existing non-state-machine based solution:
extendWatchdog( CONNECTION_INIT_WDG_TIMEOUT ) ;
bool connected = Init_GPRS_connection() ;
if( connected ) ...
The idea is compatible with both what you have and what you propose, it simply allows you to extend the watchdog timeout beyond that dictated by the hardware.
I suggest a uint8_t, because it prevents a lazy developer simply setting a large value and effectively disabling the watchdog protection, and it is likely to be atomic and so shareable between the main and interrupt context.
All that said, it would clearly have been better to design in your integrity infrastructure from the outset at the architectural level, rather than trying to bolt it on after the event. For example, if you were using an RTOS, you might reset the watchdog in a low-priority task that, if starved, would cause a watchdog expiry, and that "watchdog task" could be used to monitor the other tasks to ensure they are scheduling as expected.
Without an RTOS you might have a "big-loop" architecture with each "task" implemented as a state machine. In your example you seem to have missed the point of a state machine: "initialising connection" should be a single state of a high-level state machine, and the internals of that state may themselves be a state machine (hierarchical state machines). So your entire system would be a single master state machine in the main loop, and the watchdog would be reset once at each loop iteration. Nothing in any sub-state should block, to ensure the loop time is low and deterministic. That is how, for example, the Arduino framework's loop() function should work (when done properly - unfortunately seldom the case in examples). To understand how to implement a real-time deterministic state-machine architecture you could do worse than to look at the work of Miro Samek. The framework described therein is available via his company.
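A minimal sketch of that big-loop arrangement (the function names are placeholders, not from any particular framework):
extern void gprs_task(void);        /* runs one state of the GPRS state machine, never blocks */
extern void ui_task(void);          /* other cooperative "tasks", also non-blocking           */
extern void kick_watchdog(void);

void main(void)
{
    /* init phase */
    while(1)
    {
        gprs_task();                /* "initialising connection" is just one state in here */
        ui_task();
        kick_watchdog();            /* single reset point: if any task blocks, this line is
                                       never reached and the watchdog fires                  */
    }
}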
You should make your library non-blocking, but other than that, you should not worry about the watchdog at all. The watchdog management should be left to the user.
To allow the user to do other work while your library is waiting, you can use these approaches (a minimal sketch of the resulting interface follows the list):
Provide a function to feed the data into your library (e.g. receive()). The user should call this function when the data is available, for example from the interrupt. As this function can be called from the interrupt, make sure it does not do heavy processing. Typically, you would just buffer the data and process it later (Step 2).
Provide a function that the user calls periodically, which updates the state of your library and does any other housekeeping tasks (like timeout detection). Typically, this function is called run(), process(), tick() or something along these lines. The user would call this function in their main loop or from a dedicated RTOS task.
Provide a way to tell the user the state of your library. You can do it either by some sort of getState() function or using a callback or both. Based on this information, the user can implement their own state machine to do things on connect, disconnect etc.
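A sketch of what such an interface could look like (the names are illustrative, not an existing API):
#include <stdint.h>

typedef enum { GPRS_IDLE, GPRS_CONNECTING, GPRS_CONNECTED, GPRS_ERROR } gprs_state_t;

/* 1: feed data in; safe to call from the UART ISR because it only buffers the byte */
void gprs_receive(uint8_t byte);
/* 2: called periodically from the main loop; advances the internal FSM, checks timeouts */
void gprs_process(void);
/* 3: lets the user build their own state machine on top (connect/disconnect handling) */
gprs_state_t gprs_get_state(void);

/* User side, in the main loop; the watchdog stays the user's responsibility: */
extern void kick_watchdog(void);

void user_main_loop(void)
{
    while(1)
    {
        gprs_process();
        kick_watchdog();
        if(gprs_get_state() == GPRS_CONNECTED)
        {
            /* ... */
        }
    }
}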

How do you know when a micro-controller reset?

I am learning embedded systems on the ARM9 processor (SAM9G20). I am more familiar with general-purpose procedural programming. Thus what I am doing is going through the data sheet and learning what registers there are and how to manipulate them.
My question is: how do I know when the microcontroller has reset? I know that there is a Reset Controller that manages resets. A register called the Status Register (RSTC_SR) stores the source of the reset. Do I need to keep periodically reading this register?
My solution is to store the number of resets in the FRAM (or start by setting it to 0); once a reset happens, I compare this variable with the register value in my main function. If the register value is higher, then obviously it has reset. However I am sure there is a more optimized way (perhaps using interrupts). Or is this how it's usually done?
You do not need to periodically check, since every time the machine is reset your program will re-start from the beginning.
Simply add checks to the startup code, i.e. early in main(), as needed. If you want to figure out things like how often you reset, then that is more difficult since typically (no experience with SAMs, I'm an STM32 type of guy) on-board timers etc will also reset. Best would be some kind of real-world independent clock, like an RTC that you can poll and save the value of. Please consider if you really need this, though.
A simple solution is to exploit the structure of your code.
Many code bases for embedded take this form:
int main(void)
{
// setup stuff here
while (1)
{
// handle stuff here
}
return 0;
}
You can exploit the fact that the code above the while(1) is only run once at startup. You could increment a counter there and save it in non-volatile storage; that would tell you how many times the microcontroller has reset.
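For example (sketch only: the non-volatile read/write helpers and the FRAM address are hypothetical and device specific):
#include <stdint.h>

/* Hypothetical non-volatile storage helpers; FRAM/EEPROM access is device specific. */
extern uint32_t nv_read_u32(uint32_t addr);
extern void     nv_write_u32(uint32_t addr, uint32_t value);

#define BOOT_COUNT_ADDR 0x0000u            /* illustrative FRAM address, assumed to start at 0 */

int main(void)
{
    /* setup stuff here: this code runs exactly once per reset */
    uint32_t boot_count = nv_read_u32(BOOT_COUNT_ADDR) + 1u;
    nv_write_u32(BOOT_COUNT_ADDR, boot_count);

    while (1)
    {
        /* handle stuff here */
    }
    return 0;
}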
Another example is on Arduino, where the code is structured such that a function called setup() is called once, and a function called loop() is called continuously. With this structure, you could increment the variable in the setup() function to achieve the same effect.
Whenever your processor starts up, it has by definition come out of reset. What the reset status register does is indicate the source or reason for the reset, such as power-on, watchdog-timer, brown-out, software-instruction, reset-pin etc.
It is not a matter of knowing when your processor has reset - that is implicit by the fact that your code has restarted. It is rather a matter of knowing the cause of the reset.
You need not monitor or read the reset status at all if your application has no need of it, but in some applications it is perhaps a useful diagnostic, for example to maintain a count of the various reset causes, as it may be indicative of the stability of your system software, its power supply, or the behaviour of the operators. Ideally you'd want to log the cause with a timestamp, assuming you have a suitable RTC source available early enough in your start-up. The timing of resets is often a useful diagnostic where simply counting them may not be.
Any counting of the reset cause should occur early in your code's start-up, before any interrupts are enabled (because an interrupt may itself cause a reset). This may require you to implement the counters in the start-up code before main() is invoked, in cases where the start-up code might enable interrupts - for stdio or filesystem support, for example.
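As a sketch (the enumeration, the register-reading wrapper and the non-volatile counter helper are all hypothetical; on the SAM9G20 the cause would come from RSTC_SR):
#include <stdint.h>

typedef enum { RESET_POWER_ON, RESET_WATCHDOG, RESET_BROWNOUT,
               RESET_SOFTWARE, RESET_PIN, RESET_CAUSE_COUNT } reset_cause_t;

extern reset_cause_t read_reset_cause(void);            /* decodes the reset status register */
extern void nv_increment_counter(reset_cause_t cause);  /* per-cause count in FRAM/EEPROM    */

void log_reset_cause(void)   /* call early in start-up, before interrupts are enabled */
{
    reset_cause_t cause = read_reset_cause();
    nv_increment_counter(cause);
    /* optionally store an RTC timestamp alongside the count */
}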
A way to do this is to run the code in debug mode (if you have a debugger for the SAM). After a reset, the program counter (PC) points to the address where your code starts.

Why not put task context in interrupt

Here is the story.
It's a safety-critical project that needs to run a time-critical functional routine at 20 kHz. The current design is to put the functional routine in a 20 kHz FIQ interrupt, while the safety interrupt is also a FIQ. Those are the only two FIQs in the system. (There are of course a couple of IRQs enabled in the MCU as well.)
I know that it's not good to put a task context in an ISR; the proper way of doing this is to set a flag and run it in an OS task. But it seems the current design harms nobody.
The routine takes about 10 µs (main clock 300 MHz), so basically it will not block IRQ/FIQ for an unacceptable time. It even saves the time of an extra context switch compared with using an OS task to run the functional routine. To me, the design currently feels like it goes against every principle written in the textbooks at university, but I cannot find a reason to say no to it.
How could I convince myself to move functional routine from ISR to OS? Should I?
Let's recollect your situation:
you are coding a safety critical system
the software architecture isn't specified otherwise you wouldn't ask the question at hand
the system requirements weren't processed correctly otherwise 2) wouldn't be in question
someone told you to "use minimum interrupt if possible in safety critical system"
you want to use the highest priority & non-interruptible code for "just some math work"
Sorry for being a bit harsh but I wouldn't want to use/be in your safety critical system.
For your actual problem:
you have to make sure of two things:
the code in the FIQ must be deterministic and tested for worst-case execution time (WCET)
the timer's registers must be protected and supervised. Why? An unwanted/erroneous manipulation of the timer's registers by lower-safety-level code can congest the CPU so much that effectively nothing else but the interrupt is processed.
All this under the assumption that your safe state depends entirely on an external hardware watchdog.
PS: Which are the hazards for users of your system? Annoyance? Injury? Lethal? Are you in a SIL or ASIL context?
The reason to move complex code away from ISR is precisely to avoid lengthy processing in the ISR and thus timing jitter and delayed interrupt servicing resulting from it.
You are stating that your processing is not lengthy, so do it in the ISR! Otherwise you are just adding bloat.
20 kHz = 50 µs between interrupts. With 10 µs of processing time, that gives you roughly 20% of CPU time just for this "task", and up to 10 µs of jitter in any other routine that runs on your CPU; it also adds 10 µs of processing time for each 40 µs that any other task consumes. If that is OK for your project, and you keep your total CPU load below 70% (a common maximum accepted for critical systems), IMHO it should work without any issue.

infinite While loop without statement

What does it mean? I saw the part of code below in an embedded C program. I know this is an infinite loop, but for what purpose is this part of the code used in embedded C?
while(1)
{
}
Thanks..
This construct is used for two different purposes.
When you detect an error condition or the termination of your task, you have to put the micro-controller in a defined state. The while(1) { } construct stalls further execution until the (watchdog) reset restarts the micro-controller. As krambo mentions in his comment, this can be used to attach a JTAG debugger to examine the state of the micro-controller, variables, registers, and so on.
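A typical error trap of that kind looks roughly like this (sketch; the interrupt-disable helper is hypothetical, e.g. __disable_irq() when using CMSIS):
extern void disable_interrupts(void);   /* hypothetical wrapper: e.g. __disable_irq() */

void fatal_error(void)
{
    disable_interrupts();               /* nothing should pre-empt the trap */
    while(1)
    {
        /* spin in a known state until the watchdog resets the part,
           or until a JTAG debugger is attached to inspect things     */
    }
}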
You can implement all the logic in interrupt handlers. The main function performs the initialization and goes sleeping. While the main function can "sleep", the CPU can't; it just loops forever. Some micro-controllers support low-energy modes, which would be an alternative.
All embedded systems need an endless loop, because they must continue to execute for as long as the power is on. It doesn't make any sense for an embedded program to just execute and then return, as that would leave the processor dead and idle. This is likely the sole purpose of that loop.
I would guess your code comes from a bare metal microcontroller application, so you can safely disregard all PC programmer comments about sleeping and multi-threading; for a microcontroller application it doesn't make any sense not to consume 100% of the CPU, since nobody else is using it but you.
If you sleep on an embedded system you put the actual microcontroller hardware to sleep, if it supports it. You do so to save power, not to save CPU cycles.
Some operating systems, like uC/OS, require an idle task to run when no other task is running. This would be at the lowest priority and would be preempted by a timer (scheduler) tick if it ever got a chance to run. The case you describe could be such a task.

What should C program do in idle time when running on Linux?

I've written many C programs for microcontrollers but never one that runs on an OS like linux. How does linux decide how much processing time to give my application? Is there something I need to do when I have idle time to tell the OS to go do something else and come back to me later so that other processes can get time to run as well? Or does the OS just do that automatically?
Edit: Adding More Detail
My C program has a task scheduler. Some tasks run every 100 ms, some run every 50 ms, and so on. In my main program loop I call ProcessTasks, which checks if any tasks are ready to run; if none are ready it calls an idle function. The idle function does nothing, but it's there so that I could toggle a GPIO pin and monitor idle time with an o'scope... or something, if I so desired. So maybe I should call sched_yield() in this idle function?
How does linux decide how much processing time to give my application
Each scheduler makes up its own mind. Some reward you for not using up your share, some roll dice trying to predict what you'll do, etc. In my opinion you can just consider it magic. After we enter the loop, the scheduler magically decides our time is up, and so on.
Is there something I need to do when I have idle time to tell the OS
to go do something else
You might call sched_yield. I've never called it, nor do I know of any reasons why one would want to. The manual does say it could improve performance though.
Or does the OS just do that automatically
It most certainly does. That's why they call it "preemptive" multitasking.
It depends why and how you have "idle time". Any call to a blocking I/O function, waiting on a mutex or sleeping will automatically deschedule your thread and let the OS get on with something else. Only something like a busy loop would be a problem, but that shouldn't appear in your design in any case.
Your program should really only have one central "infinite loop". If there's any chance that the loop body "runs out of work", then it would be best if you could make the loop perform one of the above system functions which would make all the niceness appear automatically. For example, if your central loop is an epoll_wait and all your I/O, timers and signals are handled by epoll, call the function with a timeout of -1 to make it sleep if there's nothing to do. (By contrast, calling it with a timeout of 0 would make it busy-loop – bad!).
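For example, a minimal epoll-based central loop might look like this (sketch; the fd registration is elided):
#include <errno.h>
#include <stdio.h>
#include <sys/epoll.h>

int main(void)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return 1; }

    /* ...register sockets, timerfds, signalfds with epoll_ctl() here... */

    struct epoll_event events[16];
    for (;;)
    {
        /* timeout of -1: the process sleeps in the kernel until something is ready */
        int n = epoll_wait(epfd, events, 16, -1);
        if (n < 0)
        {
            if (errno == EINTR) continue;   /* interrupted by a signal: just retry */
            perror("epoll_wait");
            return 1;
        }
        for (int i = 0; i < n; ++i)
        {
            /* dispatch on events[i].data.fd / events[i].events */
        }
    }
}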
The other answers IMO are going into too much detail. The simple thing to do is:
while (1){
    if (iHaveWorkToDo()){
        doWork();
    } else {
        sleep(amountOfTimeToWaitBeforeNextCheck);
    }
}
Note: this is the simple solution, useful in a single-threaded application or in a case like yours where you don't have anything to do for a specified amount of time; it's just to get something decent working. The other thing about this is that sleep will call whatever yield function the OS prefers, so in that sense it is better than an OS-specific yield call.
If you want to go for high performance, you should be waiting on events.
If you have your own events it will be something like follows:
Lock *l;
ConditionVariable *cv;
while (1){
    l->acquire();
    if (iHaveWorkToDo()){
        doWork();
    } else {
        cv->wait(l);    /* atomically releases the lock while waiting */
    }
    l->release();
}
In a networking type situation it will be more like:
while (1){
    int result = select(fd_max+1, &currentSocketSet, NULL, NULL, NULL);
    process_result();
}
