Embedded systems: can we use any function inside an ISR? - arm

I stumbled upon this embedded systems question: can we call a function inside an ISR?
I work with an ARM Cortex-M4 and have called functions from ISRs many times without any fault.
I assume the behavior will be the same for other microcontrollers as well, or am I wrong?
Note: please ignore the fact that calling a function in an ISR would increase my ISR time, in turn increasing the interrupt latency.

Generally, there is nothing stopping you from calling a function from an ISR. There are however some things to consider.
First of all, you should keep ISRs as short as possible. Even the function call overhead might be considered too much in some cases. So if you call functions from inside an ISR, it might be wise to inline those functions.
You must also ensure that the called function is either re-entrant or that it isn't called by any part of the code except the ISR. If a non-re-entrant function is called by both the main program and your ISR, you'll get severe but subtle "race condition" bugs. (Just as you will if the main program and the ISR modify the same shared variable non-atomically, without semaphore guards.)
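As a minimal sketch of the hazard (the helper function and its name are hypothetical, not from the question):

#include <stdint.h>

static char buf[9];                        /* shared static state: this is what
                                              makes the function non-re-entrant */

char *u32_to_hex(uint32_t v)               /* hypothetical non-re-entrant helper */
{
    for (int i = 0; i < 8; i++)
        buf[7 - i] = "0123456789ABCDEF"[(v >> (4 * i)) & 0xFu];
    buf[8] = '\0';
    return buf;                            /* every caller gets the same buffer */
}

/* If main() is still using the returned string when the ISR calls
   u32_to_hex() again, the buffer is silently overwritten mid-use. */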
And finally, designing a system with interrupts where you don't know if there are other interrupts in the system is completely unprofessional. You must always consider the program's interrupt situation as a whole when designing the individual interrupts. Otherwise the program will have non-existent real-time performance, and no programmer involved in the project will actually know what the program is doing. And from the point where nobody knows what they are doing, bugs are guaranteed to follow.

Some RTOSes enforce a policy about which of their functions and macros can or can't be called from an ISR context, i.e. functions that may block on some shared resource. For example:
http://www.freertos.org/a00122.html
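For instance, FreeRTOS provides special "FromISR" variants of its API that never block. A rough sketch (the queue handle and handler name are placeholders; port details vary):

#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

extern QueueHandle_t xDataQueue;           /* created elsewhere with xQueueCreate() */

void vMyInterruptHandler(void)             /* placeholder ISR name */
{
    uint32_t ulValue = 42;
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* xQueueSend() may block, so it must never be called from an ISR;
       the FromISR variant never blocks */
    xQueueSendFromISR(xDataQueue, &ulValue, &xHigherPriorityTaskWoken);

    /* request a context switch on exit if a higher-priority task was woken */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}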

Related

spin_lock_irqsave() in interrupt context

I'm maintaining a driver which shares some resource between the ISR (i.e., in interrupt context) and the read() syscall. In both cases, spin_lock_irqsave() is used, since (obviously) the resource can be acquired in the interrupt context.
However, I was wondering if using spin_lock_irqsave() is necessary in the interrupt context. Namely, the Unreliable Guide to Locking (see here: https://kernel.readthedocs.io/en/sphinx-samples/kernel-locking.html) states:
Note that the spin_lock_irqsave() will turn off interrupts if they are on, otherwise does nothing (if we are already in an interrupt handler), hence these functions are safe to call from any context.
As a result, is it common practice to use a "normal" spin_lock() in the interrupt handler (since the particular interrupt is already disabled) and spin_lock_irqsave() in the user context? Alternatively, is it better practice to just use spin_lock_irqsave() everywhere? I'm leaning towards the latter, for two reasons:
As soon as someone sees that a lock is acquired with spin_lock_irqsave(), it's obvious that the lock is intended to be shared with the interrupt context.
As someone maintaining the code, you don't have to track which context a particular function will be called from. Said differently, spin_lock_irqsave() works in any context, so you don't have to ensure that a function is only called from a certain context.
With the above said, I'm wondering what the convention/best practice is for code that resides in kernel space. Is it better to use spin_lock_irqsave() everywhere the lock is acquired, even if you can guarantee that the lock is being acquired from the interrupt context?
See the Unreliable Guide To Locking in the kernel documentation. There's a table of minimum requirements for locking to synchronize between different contexts which, roughly speaking, can be summarized as:
If one of the competitors is a process: use a primitive strong enough to disable the other. For example, if the other competitor is a softirq, then you need at least spin_lock_bh, which disables softirqs while locking.
Else if one of them is a hard irq: a hard irq can arrive at any moment unless you disable it beforehand, so use spin_lock_irq or spin_lock_irqsave, depending on whether the other competitor is also a hard irq.
Otherwise, use spin_lock.
(Of course, all of this assumes your kernel isn't configured with PREEMPT_RT.)
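To make the table concrete, here is a sketch of the pattern the question describes, for a fictional driver (names like my_irq_handler and my_read_path are made up):

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(dev_lock);          /* protects the shared resource */

/* hard-irq context */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    unsigned long flags;

    spin_lock_irqsave(&dev_lock, flags);   /* does nothing extra here, but is
                                              safe to call from any context   */
    /* ... touch the shared resource ... */
    spin_unlock_irqrestore(&dev_lock, flags);
    return IRQ_HANDLED;
}

/* process context, e.g. the read() path */
static void my_read_path(void)
{
    unsigned long flags;

    spin_lock_irqsave(&dev_lock, flags);   /* disables local interrupts so the
                                              ISR can't deadlock against us    */
    /* ... touch the shared resource ... */
    spin_unlock_irqrestore(&dev_lock, flags);
}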

What do they mean by "scope of the program" in the C language?

When a variable is declared with the volatile keyword, its value may change at any moment from outside the scope of the program. What does that mean? Does it change outside the scope of the main function, or outside the scope of a globally declared function? And what is the perspective in terms of embedded systems, where two or more events may be performed simultaneously?
volatile was originally intended for stuff like reading from a memory mapped hardware device; each time you read from something like a memory address mapped to a serial port it might have a new value, even if nothing in your program wrote to it. volatile makes it clear that the data there may change at any time, so it should be reread each time, rather than allowing the compiler to optimize it to a single read when it knows your program never changes it. Similar cases can occur even without hardware interference; asynchronous kernel callbacks may write back into user mode memory in a similar way, so reading the value afresh each time is sometimes necessary.
An optimizing compiler assumes there is only the context of a single thread of execution. Another context means anything the compiler can't see happening at the same time: hardware actions, interrupt handlers, or other threads or processes. Where your code accesses a global (program- or file-level) variable, the optimizer won't assume another context might change or read it unless you tell it to by using the volatile qualifier.
Take the case of a memory-mapped hardware register that you read in a while loop, waiting for it to change. Without volatile, the compiler only sees your while loop reading the register; if you allow it to optimize the code, it will optimize away the repeated reads and never see a change in the register. That is exactly what we normally want an optimizing compiler to do with variables that don't change in a loop.
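For example (the register address and bit are made up for illustration):

#include <stdint.h>

#define UART_STATUS (*(volatile uint32_t *)0x40001000u)   /* made-up address */
#define TX_READY    (1u << 5)                             /* made-up bit */

void wait_for_tx_ready(void)
{
    /* volatile forces a fresh load from the register on every iteration;
       without it the compiler may hoist the read out of the loop and
       spin forever on a stale value */
    while ((UART_STATUS & TX_READY) == 0)
        ;
}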
A similar thing happens to memory mapped hardware registers you write to. If your program never reads from them the compiler could optimize away the write. Again this is what you want an optimizing compiler to do when you are not dealing with a memory location that is used by hardware or another context.
Interrupt handlers and forked threads are treated the same way as hardware: the optimizer doesn't assume they run at the same time as your code, so it will optimize away a load or store to a shared memory location unless you use volatile.

Canceling a long-running function using an ISR

Is there a way of manipulating the stack from a timer ISR, so I can just throw away the highest frame of the stack by forcing a long-running function to exit? (I am aware of losing the heap-allocated memory in this case.)
The target would probably be an ARM CPU.
Best regards
Looks like you want something like setjmp/longjmp, with longjmp called after the ISR terminates.
It is possible to alter the ISR return address in such a way that, instead of returning to the long-running function, longjmp is called with the right parameters, so that the long-running function is aborted back to the place where setjmp was called.
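The mechanism itself looks roughly like this; this minimal sketch leaves out the ISR return-address manipulation (which is target-specific) and just shows where setjmp and longjmp fit:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf abort_point;

static void long_running(void)
{
    for (;;) {
        /* ... work ... */
        /* the ISR trick would redirect execution into a call like this: */
        longjmp(abort_point, 1);           /* unwind back to the setjmp() below */
    }
}

int main(void)
{
    if (setjmp(abort_point) == 0)
        long_running();                    /* normal path */
    else
        puts("long_running() was aborted");/* longjmp lands here */
    return 0;
}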
Another solution comes to mind. It might be easier to restore all the registers (stack pointer, PC, LR and others) in the ISR stack frame to the values they had before the long-running function was called (using assembly). To do that, you need to save all the required values (using assembly) before calling the long-running function.
I would recommend avoiding a long-running function. While it may work in the short term, as your code grows it could become problematic.
Instead, consider using a state machine, or a system of state machines, in your master loop, and use your ISR only to set a flag. This will reduce timing issues and allow you to manage more tasks at once.
That's possible in theory, but probably impossible to do reliably.
You could use the GCC builtins __builtin_frame_address and __builtin_return_address to correctly restore the stack and return from the previous function, but it will corrupt the program's behavior. The function you forcibly return from has probably saved some registers on the stack, and needs to restore them before returning. The problem is, there is no way I know of to locate or mimic that restore code. It is certainly located just before the function returns (and you can't even know where that is), but it could be 1, 2, or even 0 instructions. And even if you locate it or mimic it, you can't really hardcode it, because it is likely to change when you change the function.
In conclusion, you may be able to do it with some builtins and 2-3 inline assembly instructions, but you'll need to tailor and hardcode it for the specific function you want to return from, and change it whenever you change that function.
Why can't you just set a flag in your ISR that your function will periodically check to see if it needs to exit? The reason I disapprove of the way you are trying to do it is that it is extremely dangerous to "kill" a function while it is in the middle of some operation. Unless you have a way to clean up absolutely everything after it (like when killing a process), there is no way you can do it safely. It is always better to signal the function through a flag or semaphore of some kind from the ISR, and then let the function clean up after itself and exit normally.
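A sketch of that flag-based approach (the ISR and function names are illustrative):

#include <stdbool.h>

static volatile bool cancel_requested = false;

void timer_isr(void)                       /* illustrative ISR name */
{
    cancel_requested = true;               /* just signal; don't kill anything */
}

void long_running(void)
{
    while (!cancel_requested) {
        /* do one bounded chunk of work, then re-check the flag */
    }
    /* clean up (free buffers, release peripherals) and return normally */
}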

Calling convention which only allows one instance of a function at a time

Say I have multiple threads and all threads call the same function at approximately the same time.
Is there a calling convention which would only allow one instance of the function at any time? What I mean is that the function called by the second thread would only start after the function called by the first thread had returned.
Or are these calling conventions compiler specific? I don't have a whole lot of experience using them.
(Skip to the bottom if you don't care about the threading mumbo-jumbo)
As mentioned before, this is not a "calling convention" issue but a general problem of computing: concurrency. And the particular case where two or more threads can enter a shared zone at the same time, with a different outcome depending on the order, is called a race condition (the term also extends to/from electronics and other areas).
The hard thing about threading is that computing is such a deterministic affair, but threading adds a degree of uncertainty, which varies per platform/OS.
A one-thread affair guarantees that all tasks happen in the same order, always; with multiple threads, the order depends on how fast each thread can complete a task, on other applications wanting to use the CPU, and on the underlying hardware, all of which affect the results.
There's no single "sure-fire way to do threading"; there are techniques, tools and libraries to deal with individual cases.
Locking in
The most well-known technique is using semaphores (or locks), and the most well-known semaphore is the mutex, which only allows one thread at a time to access a shared space, by having a sort of "flag" that is raised once a thread has entered.
if (locked == NO)
{
    locked = YES;
    // Do ya' thing
    locked = NO;
}
The code above, although it looks like it could work, does not guarantee against cases where both threads pass the if () check and then both set the variable (which threads can easily do). That is why there is hardware support for this kind of operation, guaranteeing that only one thread can execute it: the test-and-set operation, which checks and then, if available, sets the variable in a single atomic step (on x86 this is provided by instructions such as XCHG and LOCK BTS).
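In portable C, this hardware support is exposed through atomics. A minimal sketch of a test-and-set spinlock using C11's atomic_flag:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void enter_critical(void)
{
    /* atomically sets the flag and returns its previous value: we loop
       until we are the thread that changed it from clear to set */
    while (atomic_flag_test_and_set(&lock))
        ;                                  /* spin */
}

void leave_critical(void)
{
    atomic_flag_clear(&lock);
}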
In the same vein of locks and semaphores, there's also the read-write lock, which allows multiple readers but only one writer, especially useful for data with low volatility. And there are many other variations, some that limit access to X amount of threads, and whatnot.
But overall, locks are lame, since they basically force serialisation of multi-threading: threads get stuck trying to acquire a lock (or just test it and leave). Kinda defeats the purpose of having multiple threads, doesn't it?
The best solution in terms of threading is to minimise the amount of shared space that threads need to use, possibly eliminating it completely. Maybe use rwlocks when volatility is low, try to have "try and leave" kinds of threads that test the lock and simply go away if it's taken, etc.
As my OS teacher once said (in Zen-like fashion): "The best kind of locking is the one you can avoid".
Thread Pools
Now, threading is hard, no way around it; that's why there are patterns to deal with these kinds of problems, and the Thread Pool pattern is a popular one, at least on iOS since the introduction of Grand Central Dispatch (GCD).
Instead of having a bunch of threads running amok and getting enqueued all over the place, you have a set of threads waiting for tasks in a "pool", and queues of things to do, ideally tasks that shouldn't overlap each other.
Now, the thread pool pattern doesn't solve the problems discussed before, but it changes the paradigm to make them easier to deal with, mentally. Instead of having to think about "threads that need to execute such and such", you switch the focus to "tasks that need to be executed", and the matter of which thread is doing it becomes irrelevant.
Again, pools won't solve all your problems, but it will make them easier to understand. And easier to understand may lead to better solutions.
All the theoretical things mentioned above are already implemented at the POSIX level (semaphore.h, pthread.h, etc.; pthreads has a very nice set of r/w locking functions), so try reading about them.
(Edit: I thought this thread was about Obj-C, not plain C, edited out all the Foundation and GCD stuff)
A calling convention defines how the stack and registers are used to implement function calls. Because each thread has its own stack and registers, synchronising threads and calling conventions are separate things.
To prevent multiple threads from executing the same code at the same time, you need a mutex. In your example of a function, you'd typically put the mutex lock and unlock inside the function's code, around the statements you don't want your threads to be executing at the same time.
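A minimal sketch with POSIX threads (the function name is arbitrary):

#include <pthread.h>

static pthread_mutex_t fn_mutex = PTHREAD_MUTEX_INITIALIZER;

void one_at_a_time(void)                   /* arbitrary name */
{
    pthread_mutex_lock(&fn_mutex);         /* a second caller blocks here... */
    /* ...so only one thread executes this section at any time */
    pthread_mutex_unlock(&fn_mutex);
}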
In general terms: Plain code, including function calls, does not know about threads, the operating system does. By using a mutex you tap into the system that manages the running of threads. More details are just a Google search away.
Note that C11, the new C standard revision, does include multi-threading support. But this does not change the general concept; it simply means that you can use C library functions instead of operating system specific ones.
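The same idea with the (optional) C11 threads library, assuming your implementation provides <threads.h>:

#include <threads.h>                       /* absent if __STDC_NO_THREADS__ is defined */

static mtx_t fn_mtx;                       /* initialize once: mtx_init(&fn_mtx, mtx_plain) */

void one_at_a_time(void)
{
    mtx_lock(&fn_mtx);
    /* critical section */
    mtx_unlock(&fn_mtx);
}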

Threading Implementation

I wanted to know how to implement my own threading library.
What I have is a CPU (PowerPC architecture) and the C Standard Library.
Is there an open source light-weight implementation I can look at?
At its very simplest a thread will need:
Some memory for stack space
Somewhere to store its context (i.e. register contents, program counter, stack pointer, etc.)
On top of that you will need to implement a simple "kernel" that is responsible for the thread switching. And if you're trying to implement pre-emptive threading, then you'll also need a periodic source of interrupts, e.g. a timer. In that case you can execute your thread-switching code in the timer interrupt.
Take a look at the setjmp()/longjmp() routines, and the corresponding jmp_buf structure. This will give you easy access to the stack pointer so that you can assign your own stack space, and will give you a simple way of capturing all of the register contents to provide your thread's context.
Typically the longjmp() function is a wrapper for a return from interrupt instruction, which fits very nicely with having thread scheduling functionality in the timer interrupt. You will need to check the implementation of longjmp() and jmp_buf for your platform though.
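If you'd rather not reverse-engineer jmp_buf for your platform, the POSIX ucontext routines expose the same ingredients (a private stack plus saved registers) directly. A minimal cooperative switch, as a sketch (not PowerPC-specific; names are ours):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;
static char thr_stack[16384];              /* the "thread's" own stack space */

static void thread_fn(void)
{
    puts("in thread");
    swapcontext(&thr_ctx, &main_ctx);      /* yield back to main */
    puts("thread resumed");
}                                          /* returning resumes uc_link (main) */

int main(void)
{
    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp = thr_stack;
    thr_ctx.uc_stack.ss_size = sizeof thr_stack;
    thr_ctx.uc_link = &main_ctx;
    makecontext(&thr_ctx, thread_fn, 0);

    swapcontext(&main_ctx, &thr_ctx);      /* first switch into the thread */
    puts("back in main");
    swapcontext(&main_ctx, &thr_ctx);      /* resume the thread */
    puts("done");
    return 0;
}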
Try looking at thread implementations for smaller microprocessors, which typically don't have OSes, e.g. the Atmel AVR or Microchip PIC.
For example: this discussion on AVRFreaks.
For a decent thread library you need:
atomic operations to avoid races (to implement e.g. a mutex)
some OS support to do the scheduling and to avoid busy waiting
some OS support to implement context switching
All three are beyond the scope of what C99 offers you. Atomic operations are introduced in C11, but so far C11 implementations don't seem to be ready, so these are usually implemented in assembler. For the latter two, you'd have to rely on your OS.
Maybe you could look at C++, which has threading support. I'd start by picking some of its most useful primitives (for example futures), seeing how they work, and doing a simple implementation.
