I have a small doubt regarding synchronization in the Linux kernel: what kind of locking technique is suitable for protecting a critical region shared between interrupt context and process context?
Thanks in advance.
By definition, mechanisms such as semaphores (or mutexes), test-and-set, and of course spinlocks protect critical code:
They work either by guaranteeing an atomic operation (acquiring the lock takes exactly one indivisible step, as in protocols like the bakery algorithm) or by disabling preemption of the process (and that's what you want here). Once the lock is held, no one else can enter that critical code (say, code that touches shared memory), so even if there IS a context switch and two threads run concurrently, we're promised that only one can execute that code at a time. The assumption is that every thread using that shared memory, or whatever the critical region covers, acquires the same lock.
For more info on spinlocks (which disable preemption on the CPU),
refer to this: http://www.linuxinternals.org/blog/2014/05/07/spinlock-implementation-in-linux-kernel/
Notice that a spinlock does a "busy wait": while you haven't yet acquired the lock, the CPU "wastes" computation time spinning in a useless loop.
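For the specific case in your question (data shared between an interrupt handler and process context), the usual kernel idiom is the interrupt-disabling spinlock variant. Here is a minimal sketch; the names shared_counter, my_isr and the surrounding driver are made up for illustration:

    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    static DEFINE_SPINLOCK(shared_lock);
    static int shared_counter; /* data touched by both contexts */

    /* Process context: take the lock AND disable local interrupts,
       otherwise the ISR below could spin forever on a lock that this
       very CPU already holds (deadlock). */
    void process_context_update(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&shared_lock, flags);
        shared_counter++;
        spin_unlock_irqrestore(&shared_lock, flags);
    }

    /* Interrupt context: a plain spin_lock is enough, since the
       triggering interrupt is already masked on this CPU. */
    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        spin_lock(&shared_lock);
        shared_counter--;
        spin_unlock(&shared_lock);
        return IRQ_HANDLED;
    }

The key point is that the process-context side must not be interrupted while holding the lock, which is exactly what spin_lock_irqsave() guarantees on the local CPU.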
You can also use the IRQ/preemption primitives directly, but that's pretty dangerous. E.g.:
preempt_enable(): decrement the preempt counter
preempt_disable(): increment the preempt counter
preempt_enable_no_resched(): decrement, but do not immediately preempt
preempt_check_resched(): reschedule if needed
preempt_count(): return the preempt counter
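As a hedged illustration only (the per-CPU counter is a made-up example, not from the question), a preempt_disable()/preempt_enable() pair typically brackets a short section that touches per-CPU data, like this:

    #include <linux/preempt.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, my_counter);

    void bump_local_counter(void)
    {
        preempt_disable();            /* we must not migrate to another CPU here */
        __this_cpu_inc(my_counter);   /* safe: preemption is off */
        preempt_enable();             /* may reschedule once the count drops to 0 */
    }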
Since I don't really know what you're trying to achieve, it's kind of hard to get specific and answer your exact needs, but I really like sleeping semaphores:
http://www.makelinux.net/books/lkd2/ch09lev1sec4
Unlike the other solutions I've offered, they don't busy-wait, which saves computation time.
I really hope this helped... good luck!
Testing code that uses interrupts, I added a while(1); to try to remain in the interrupt handler, but I saw that the handler exits and returns to main, so I assume there is some kind of timeout. If so, is it particular to this ISR, or is it a standard feature of interrupts?
It is not a standard feature of interrupts. You don't say what platform you are using, but in general, interrupt handlers should do only what is necessary, and return. The less time spent in the handler, the better.
It is possible your program has a watchdog timer somewhere; when it is starved of processing time (because the ISR hung), the timer fires and is designed to exit your program.
I've already used internal hardware watchdogs in several OS-less embedded applications (with static schedulers).
What I do is:
- I look for the slowest periodic, lowest-priority task
- I set the watchdog timeout to somewhat more than the period of that slowest task
- then I kick the dog at the beginning of that slowest task
I think this is a minimalist but safe approach.
Is there any best practice? (personal experience or verified sources)
I've heard of/seen people doing different things, like kicking the dog more than once in different tasks, or kicking it only if all tasks have been called within a timeout, and so on.
Your approach has the problem that you can't guarantee, by running the slowest task, that all other tasks have run.
And as an extension: in a multitasking environment you usually end up with some high-priority tasks which are needed to ensure the functionality, and other tasks (I/O, HW monitoring, etc.) which you care less about.
So your watchdog is only needed for the important tasks, but you have to observe them all. A very simple solution to ensure that is a running-state structure like this:
struct
{
    bool task1HasRun;
    bool task2HasRun;
    bool task3HasRun;
};
with a mutex around it. Each task sets its own hasRun flag and checks whether all the others are set. If all the others are set, it resets them all and kicks the watchdog. If you don't let every task check for itself, you may miss blocked tasks.
There are more elegant ways to solve this problem, but this one is portable and gives you an idea of what to do.
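To make the idea concrete, here is a minimal C sketch of that check-in scheme; flags_lock(), flags_unlock() and kick_dog() are placeholders you would map to your RTOS mutex and watchdog driver:

    #include <stdbool.h>
    #include <string.h>

    #define NUM_TASKS 3

    /* Placeholders: map these to your RTOS mutex and watchdog calls. */
    extern void flags_lock(void);
    extern void flags_unlock(void);
    extern void kick_dog(void);

    static bool has_run[NUM_TASKS];

    /* Each task calls this once per cycle with its own id. The dog is
       only kicked when every task has checked in since the last kick,
       so a single blocked task eventually lets the watchdog bite. */
    void task_checkin(int id)
    {
        bool all = true;

        flags_lock();
        has_run[id] = true;
        for (int i = 0; i < NUM_TASKS; i++)
            all = all && has_run[i];
        if (all) {
            memset(has_run, 0, sizeof has_run);
            kick_dog();
        }
        flags_unlock();
    }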
Your question is a bit subjective, but there is something of an industry de facto standard for real-time applications, which goes like this:
- Specify a maximum allowed response time of the system: the longest time period that any task is allowed to take. Take ISRs etc. into account. For example, 1 ms.
- Set the dog to a time slightly longer than the specified response time.
- Kick the dog from one single place in the whole program, preferably from the main() loop or a similar suitable location (RTOSes have no standard for where to do this, AFAIK).
This is the toughest requirement - ideally the dog doesn't know anything about your various tasks but is kept away from application logic.
In practice this might be hard to achieve for some systems; suppose you have a flash bootloader or similar which by its nature simply must take a long time. Then you might have to do dirty stuff like placing watchdog kicks inside a specific driver. But it is a best practice to strive for.
So ideally you have this at the very top level of your application:
void main (void)
{
    /* init stuff */

    for(;;)
    {
        result_t result; /* some project-specific status type */

        kick_dog();
        result = execute();
        error_handler(result);
    }
}
As a side-effect of this policy, it eliminates the risk of having "talented" people end up kicking the dog from inside an ISR.
I'm trying to understand how interrupts are handled by the system, and how it works when there is a DMA controller integrated in the system.
I will lay out what I have understood so far, and I would like some feedback on whether I'm right or not.
In order for the system to catch I/O actions performed by some device, the system uses what are called interrupts.
The system sets up interrupts for given actions (ones we're interested in, for example typing on a keyboard), and once the action is performed the system catches it.
Now I have some doubts. Once we catch an interrupt, what happens in the background? What are the overheads? What does the CPU need to set up? Is there a context switch? How does the interrupt handler work?
The CPU has to do some work in order to handle the interrupt; does it read the registers and write the "message" into memory, so the user can see it?
If we have DMA instead, once the CPU catches the interrupt it doesn't need to handle the memory access for the device, so it can perform other things until the DMA controller interrupts the CPU, telling it that the transfer is completed and that the CPU can safely finish the handling?
As you can see, there is some stuff I need to clarify, and I would really appreciate your help. I know that an answer to all these questions could fill a book, but all I need is to know how things are connected, to get an intuition of what's going on behind the scenes so I can reason about it more easily.
Interrupts are handled by something called Interrupt Service Routines (ISRs). These are functions implemented by the kernel and registered with the hardware. Each type of interrupt is registered with a separate handler.
When the hardware receives an interrupt, it halts the execution of the running process (on that processor), pushes the state of the process (registers, flags, segments) onto the stack, and executes the ISR.
Apart from saving the context, the hardware also does one more important thing: it switches the processor to privileged mode (a lower ring). This happens, of course, only if the processor is not already in ring 0 and if privileged operations are required by the ISR. A flag in the Interrupt Descriptor Table (IDT) tells the processor whether it is a user-mode or a privileged-mode handler.
Since these ISRs are written by the kernel, they are trusted. Each ISR performs whatever is required; in the case of a keyboard interrupt, for example, it moves the byte read into the input stream of the foreground process.
After the ISR is done (signaled by an iret instruction on x86), the saved state is popped off and the execution of the process continues.
Yes, this can be thought of as a context switch, but it really isn't, since no other process is loaded. Think of it as a pause until a more important job is done.
While this has some overhead, it is small for something like keyboard interrupts: the ISRs are tiny, and the interrupts themselves are relatively infrequent.
But say there is hardware that does jobs at a very regular interval, like disk reads/writes or a network card. In this case, interrupting again and again would be very costly.
So what we use is DMA (direct memory access). The processor allocates some physical memory to this hardware, which can access that part of RAM without halting the process, since the processor's intervention is not required.
The device keeps doing all the I/O it needs to, and in the end, when the job is done (or if it fails), it signals the processor with a single interrupt.
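For a rough feel of the registration step described above, this is roughly how a Linux driver would attach an ISR to an interrupt line; MY_IRQ and the device pointer are hypothetical:

    #include <linux/interrupt.h>

    #define MY_IRQ 1   /* hypothetical interrupt line */

    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        /* read the device's status/data registers here (omitted) */
        return IRQ_HANDLED;
    }

    static int my_register(void *dev)
    {
        /* ask the kernel to run my_isr whenever MY_IRQ fires */
        return request_irq(MY_IRQ, my_isr, 0, "my-device", dev);
    }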
Pseudo-code for what I want to accomplish:
// gets the id of the currently running digital signal processor
int dsp_id = get_dsp_id();
if (dsp_id == 0) {
    // code run only once
    // irq start all other dsps including dsp_id 0
} else {
    // code run multiple times
}
The problem is that when I send the start IRQ to all DSPs, including id 0, I get into the if statement over and over. I tried to flag it with a global static bool, but that did not work.
You have a race condition. I imagine that the other DSPs that you kick off hit the if statement before your global variable is set. You need to protect the flag with a mutex. In pseudo-code this would be something like:
if (dsp_id == 0) {
    get mutex lock
    if (!alreadyRun)
    {
        // code run only once
        // irq start all other dsps including dsp_id 0
        set alreadyRun to true
    }
    release mutex lock
} else {
    // code run multiple times
}
where alreadyRun is your boolean variable. You cannot, by the way, just write alreadyRun = true, because there is no guarantee that other processors will see the change if the cache of the processor setting it has not been flushed back to main memory. Your threading library will have appropriate functions to do the mutex locking and to set alreadyRun safely: for example, C11 defines atomic types and operations in stdatomic.h for your flag, and mutex functions in threads.h.
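For illustration, here is a hedged C11 sketch of that pattern; it collapses the mutex-plus-flag pair into a single atomic_flag, and it assumes the DSPs share a coherent address space and a C11 toolchain (names like dsp_entry are made up):

    #include <stdatomic.h>

    /* atomic_flag_test_and_set() is one indivisible read-modify-write,
       so only the first DSP to reach it sees the old value 'false'. */
    static atomic_flag already_run = ATOMIC_FLAG_INIT;

    void dsp_entry(int dsp_id)
    {
        if (dsp_id == 0) {
            if (!atomic_flag_test_and_set(&already_run)) {
                /* code run only once:
                   IRQ-start all other DSPs, including dsp_id 0 */
            }
        } else {
            /* code run multiple times */
        }
    }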
I want to implement something on an ARM Cortex-M3 processor (with NVIC). I have limited knowledge about embedded systems, but I know that an ISR routine should be as simple as possible.
Now I have the following problem: I have an interrupt routine which is invoked when a CAN message is received. In one case I have to perform a time-consuming calculation on the CAN message; this task cannot be interrupted by another CAN message, but other interrupts can be served during the operation. My idea is the following:
1. CAN message received, ISR started; do some simple jobs (e.g. set flags).
2. Disable CAN interrupts in the CAN ISR routine. Also save the CAN message into a global variable so it can be reached from the main loop.
3. In the main loop, do the time-consuming task.
4. After this task, enable CAN interrupt handling again.
Is it a good (or not totally bad) idea or should I do this in another way?
It is generally not a good idea to disable all (CAN) interrupts. It seems that what you want to protect yourself against is the same message arriving a second time, before you are done serving the first.
In that case you should take advantage of the ISR itself being non-interruptible. You can create a simple semaphore with a bool variable, and since the interrupt that sets it is non-interruptible, you don't even have to worry about atomic access to that boolean. C-like pseudo-code for the CAN handler:
typedef struct
{
    bool busy;
    can_msg_t msg;
} special_msg_t;

// must be volatile, to prevent optimizer-related bugs:
static volatile special_msg_t special = { false, {0} };
interrupt void can_isr (void)
{
    // non-interruptible ISR, no other interrupt can fire
    // 'id' and 'new_msg' would be read from the CAN controller's registers
    if(id == SPECIAL && !special.busy)
    {
        special.busy = true;
        // right here you can open up for more interrupts if you want
        memcpy(&special.msg, &new_msg, sizeof special.msg);
    }
}
result_t do_things_with_special (void) // called from main loop
{
    if(!special.busy)
        return error; // no new message received, no can do

    // do things with the message

    special.busy = false; // flag that processing is done
    return ok;
}
an ISR routine should be as simple as possible.
Exactly.
There is a Linux kernel concept called the "bottom half": keep the ISR as simple as possible, and defer the rest of the interrupt-handling work to a later time.
There are many ways to implement a bottom half, such as tasklets and workqueues.
I suggest reading the following links:
http://www.makelinux.net/ldd3/chp-10-sect-4
http://www.ibm.com/developerworks/library/l-tasklets/
Keeping an interrupt disabled for a long time can lead to missed interrupts (and obviously some data loss). After the interrupt flag is set, create deferred work and get out of interrupt context.
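As a sketch of that deferral (Linux-specific, and the can_work/can_isr names are illustrative): the top half just queues work, and the time-consuming part runs later in process context:

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    static struct work_struct can_work;

    /* Bottom half: runs later in process context, may sleep. */
    static void can_work_fn(struct work_struct *work)
    {
        /* do the time-consuming processing of the saved message here */
    }

    /* Top half: keep it short: save/flag the message (omitted), then defer. */
    static irqreturn_t can_isr(int irq, void *dev_id)
    {
        schedule_work(&can_work);
        return IRQ_HANDLED;
    }

    /* during driver init: INIT_WORK(&can_work, can_work_fn); */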