Closed. This question needs debugging details. It is not currently accepting answers. Closed 3 years ago.
While testing some code that uses interrupts, I put a while(1); inside the interrupt handler to try to stay in it, but I saw that the handler still exits and returns to main, so I assume there is some kind of timeout. If so, is that particular to this ISR, or is it a standard feature of interrupts?
It is not a standard feature of interrupts. You don't say what platform you are using, but in general, interrupt handlers should do only what is necessary, and return. The less time spent in the handler, the better.
It is possible your program has a watchdog timer somewhere, and when starved for processing time (because the ISR hung), the timer fires and is designed to exit your program.
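As a hedged illustration of the "do only what is necessary" advice, the usual pattern is a minimal sketch like the one below: the ISR just records that the event happened and the main loop does the real work (the names uart_rx_isr and rx_flag are made up for this example, not taken from the question):

#include <stdbool.h>

static volatile bool rx_flag = false;   /* set by the ISR, cleared by main */

void uart_rx_isr(void)                  /* hypothetical interrupt handler */
{
    /* acknowledge/clear the hardware interrupt flag here */
    rx_flag = true;                     /* just note the event and return */
}

int main(void)
{
    /* init code */
    for (;;)
    {
        if (rx_flag)
        {
            rx_flag = false;
            /* do the real work here, outside the ISR */
        }
    }
}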
Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I've already used internal hardware watchdogs in several OS-less embedded applications (with static schedulers).
What I do is:
I look for the slowest periodic and lowest priority task
I set the watchdog timeout to something more than the period of the slowest task
then I kick the dog at the beginning of the slowest task
I think this is a minimalist but safe approach.
Is there any best practice? (personal experience or verified sources)
I've heard of/seen people doing different things, like kicking the dog more than once in different tasks, or kicking it only if all tasks have been called within a timeout, and so on.
Your approach has the problem that running the slowest task does not guarantee that all the other tasks have run.
As an extension, in a multitasking environment you usually end up with some high-priority tasks that are needed to ensure the core functionality, and other tasks (I/O, hardware monitoring, etc.) that you care less about.
So the watchdog is only needed for the important tasks, but you have to observe all of them. A very simple way to ensure that is a running-state structure like this:
struct
{
    bool task1HasRun;
    bool task2HasRun;
    bool task3HasRun;
} task_state;
with a mutex around it. Each task sets its own hasRun flag and checks whether all the others are also set. If they are, it resets all the flags and kicks the watchdog. If you don't let every task do this check for itself, you may miss blocked tasks.
There are more elegant ways to solve this, but this one is portable and gives you an idea of what to do.
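A rough, hedged sketch of what each task could call is shown below; the names, the array-based flags and the POSIX mutex are assumptions for illustration, and on a bare-metal target you would use whatever critical-section primitive is available:

#include <stdbool.h>
#include <pthread.h>

#define NUM_TASKS 3

static bool has_run[NUM_TASKS];                        /* one flag per monitored task */
static pthread_mutex_t flags_lock = PTHREAD_MUTEX_INITIALIZER;

static void kick_watchdog(void)
{
    /* placeholder: write to the hardware watchdog register here */
}

/* Each monitored task calls this once per cycle with its own id. */
void task_checkin(int task_id)
{
    bool all_ran = true;

    pthread_mutex_lock(&flags_lock);
    has_run[task_id] = true;

    for (int i = 0; i < NUM_TASKS; i++)
    {
        if (!has_run[i])
        {
            all_ran = false;
            break;
        }
    }

    if (all_ran)
    {
        for (int i = 0; i < NUM_TASKS; i++)
        {
            has_run[i] = false;                        /* reset for the next round */
        }
        kick_watchdog();
    }
    pthread_mutex_unlock(&flags_lock);
}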
Your question is a bit subjective, but there is something of an industry de facto standard for real-time applications, which goes like this:
Specify a maximum allowed response time of the system, i.e. the longest time period that any task is allowed to take. Take ISRs etc. into account. For example, 1 ms.
Set the dog to a time slightly longer than the specified response time.
Kick the dog from one single place in the whole program, preferably from the main() loop or a similar suitable location (an RTOS has no standard place for this, AFAIK).
This is the toughest requirement: ideally the dog knows nothing about your various tasks and is kept away from the application logic.
In practice this might be hard to achieve for some systems. Suppose you have flash bootloaders and similar things which by their nature simply must take a long time. Then you might have to do dirty stuff like placing watchdog kicks inside a specific driver. But it is a best practice to strive for.
So ideally you have this at the very top level of your application:
void main (void)
{
    /* init stuff */

    for(;;)
    {
        result_t result;          /* application-defined result type */

        kick_dog();               /* the only watchdog kick in the program */
        result = execute();       /* run one lap of the application */
        error_handler(result);
    }
}
As a side effect, this policy eliminates the risk of having "talented" people end up kicking the dog from inside an ISR.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
I have an Arduino Nano with an ATmega328P. I'm using Atmel Studio to program it in C and I need to have the following program:
I have 5 inputs (PC3-PC7); each should have its own separate timer and each drives 2 LEDs (one red, one green).
Each HIGH-level on an input pin (PC3-PC7) triggers a separate timer, which should be 10 minutes long.
The HIGH-level on the input pins should last over the course of these 10 minutes. If it changes to a LOW-level while running, something happens (LEDs blink, buzzer on).
If the timer has reached the 10-minute mark, something happens (red LED off, green LED on, buzzer on).
I think the time.h library is needed for this, but I have no idea how I could program this. Any help is appreciated.
There is probably no pre-made library for this.
What you need is to program some manner of Hardware Abstraction Layer (HAL) on top of your hardware peripheral timers. One hardware timer/RTC should be enough. Have it trigger an interrupt every x time units, depending on what prescaler clock is available.
From this timer interrupt, count the individual ongoing tasks and see if it is time for them to execute. In pseudo code, it could look something like this:
void timer_ISR (void) // triggers once every x time units
{
    // clear the interrupt source etc. here
    // set up the timer again here, if needed

    counter++; // some internal tick counter

    // possibly re-enable maskable interrupts here (the SEI instruction on AVR)

    for(size_t i=0; i<timers; i++)
    {
        if(timer[i].enable && counter == timer[i].counter)
        {
            timer[i].execute();      // run the callback for this software timer
            timer[i].enable = false; // one-shot: disarm until re-armed
        }
    }
}
where execute is a function pointer to a callback function.
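To make the pseudo code above a bit more concrete, the software timer table could be declared along these lines. This is only a hedged sketch: the names counter, timer and timers mirror the pseudo code, while everything else (the types, sw_timer_start, the tick granularity) is assumed:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef void (*timer_callback_t)(void);     // called when a software timer expires

typedef struct
{
    bool             enable;    // is this software timer armed?
    uint32_t         counter;   // tick value at which it should fire
    timer_callback_t execute;   // callback to run on expiry
} sw_timer_t;

#define timers 5                             // e.g. one software timer per input pin

static volatile uint32_t   counter;          // incremented in timer_ISR()
static volatile sw_timer_t timer[timers];    // the table scanned by timer_ISR()

// Arm software timer 'i' to fire 'delay_ticks' ticks from now.
// (On an 8-bit AVR, guard the read of 'counter' against the ISR, e.g. with cli/sei.)
void sw_timer_start(size_t i, uint32_t delay_ticks, timer_callback_t cb)
{
    timer[i].execute = cb;
    timer[i].counter = counter + delay_ticks;
    timer[i].enable  = true;
}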
The first thing you should do is set up a timer interrupt. You can read up on that here:
timer interrupt
After setting up the timer, you should use one or two variables to reach 10 minutes.
For example: the timer interrupt should increase Varb1 every time it enters the ISR (interrupt function). Every time Varb1 overflows, Varb2 should increase; when Varb2 overflows, Varb3 should increase, and so on.
The reason you need this cascade is that a hardware timer on its own only covers very short times, in the millisecond or microsecond range.
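As a hedged sketch of that cascading idea on the ATmega328P (the 16 MHz clock, the Timer1 CTC setup and the 1 ms tick are assumptions for illustration, not taken from the answer), one way to count up to 10 minutes is:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdbool.h>
#include <stdint.h>

#define TEN_MINUTES_MS 600000UL              // 10 min = 600000 ms

static volatile uint32_t ms_ticks;           // incremented once per ISR entry

ISR(TIMER1_COMPA_vect)
{
    ms_ticks++;                              // 1 ms has passed
}

static void timer1_init_1ms(void)
{
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS11) | (1 << CS10); // CTC mode, prescaler 64
    OCR1A  = 249;                            // 16 MHz / 64 / (249 + 1) = 1 kHz
    TIMSK1 = (1 << OCIE1A);                  // enable compare-match interrupt
    sei();
}

// Call from the main loop; 'start' is the ms_ticks value captured
// when the input pin went HIGH.
static bool ten_minutes_elapsed(uint32_t start)
{
    uint32_t now;
    cli();                                   // read the 32-bit counter atomically
    now = ms_ticks;
    sei();
    return (now - start) >= TEN_MINUTES_MS;
}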
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
I have a small doubt regarding synchronization in the Linux kernel: what kind of locking technique is suitable for protecting a critical region shared between interrupt context and process context?
Thanks in advance.
By definition, solutions like semaphores (or mutexes), test-and-set and of course spinlocks provide protection of critical code.
They work either by guaranteeing an atomic operation (acquiring the lock takes exactly one indivisible step, as in test-and-set, or via protocols such as the bakery algorithm) or by disabling preemption of the process, and that's what you want here: once the lock is taken, no one else can enter that critical code (say you are using shared memory or something like that), so even if there IS a context switch and two threads run concurrently, we are promised that only one can access that code. The catch is that it's assumed all the threads which use that memory and have a critical region acquire the same type of lock.
For more info on spinlocks (which disable preemption on the CPU), refer to this:
http://www.linuxinternals.org/blog/2014/05/07/spinlock-implementation-in-linux-kernel/
Notice that a spinlock does a "busy wait": while you haven't acquired the lock yet, the CPU wastes computation time spinning in a useless loop.
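For the specific case in the question (data shared between an interrupt handler and process context), a common pattern is a spinlock taken with spin_lock_irqsave() on the process side. This is only a hedged sketch; the handler, lock and variable names are invented for illustration:

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(my_lock);
static int shared_count;

/* Interrupt context: a plain spin_lock is enough inside the handler itself. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    spin_lock(&my_lock);
    shared_count++;
    spin_unlock(&my_lock);
    return IRQ_HANDLED;
}

/* Process context: disable local interrupts while holding the lock,
 * otherwise the ISR could fire and deadlock trying to take the same lock. */
static int read_and_reset(void)
{
    unsigned long flags;
    int val;

    spin_lock_irqsave(&my_lock, flags);
    val = shared_count;
    shared_count = 0;
    spin_unlock_irqrestore(&my_lock, flags);

    return val;
}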
You can also use the irq/preempt functions directly, but that's pretty dangerous. For example:
preempt_enable() - decrement the preempt counter
preempt_disable() - increment the preempt counter
preempt_enable_no_resched() - decrement, but do not immediately preempt
preempt_check_resched() - if needed, reschedule
preempt_count() - return the preempt counter
Since I don't really know what you're trying to achieve, it's hard to get specific, but I really like sleeping semaphores:
http://www.makelinux.net/books/lkd2/ch09lev1sec4
Unlike the other solutions I've offered, they don't busy-wait, which saves computation time (though note that a sleeping lock cannot be taken from interrupt context itself).
I really hope this helped... good luck!
Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I want to implement something on an ARM Cortex-M3 processor (with NVIC). I have limited knowledge about embedded systems, but I know that an ISR routine should be as simple as possible.
Now I have the following problem: I have an interrupt routine which is invoked when a CAN message is received. In one case I have to perform a time-consuming calculation on the CAN message, and this task must not be interrupted by another CAN message, but other interrupts may be served during the operation. My idea is the following:
CAN message received, ISR started, do some simple jobs (e.g. setting flags)
Disable CAN interrupts in the CAN ISR routine. Also save the CAN message into a global variable so it can be reached in the main loop.
In the main loop, do the time consuming task.
After this task, enable the CAN interrupt handling again.
Is this a good (or at least not totally bad) idea, or should I do it another way?
It is generally not a good idea to disable all (CAN) interrupts. It seems that what you want to protect yourself against is the same message arriving a second time, before you are done serving the first.
In that case you should take advantage of the ISR itself being non-interruptible. You can create a simple semaphore with a bool variable, and since the interrupt that sets it is non-interruptible, you don't even have to worry about atomic access to that boolean. C-like pseudo code for the CAN handler:
#include <stdbool.h>
#include <string.h>

typedef struct
{
    bool      busy;
    can_msg_t msg;
} special_msg_t;

// must be volatile, to prevent optimizer-related bugs:
static volatile special_msg_t special = { false, {0} };

interrupt void can_isr (void)
{
    // non-interruptible ISR, no other interrupt can fire
    if(id == SPECIAL && !special.busy)
    {
        special.busy = true;
        // right here you can open up for more interrupts if you want
        memcpy((void *)&special.msg, &new_msg, sizeof special.msg);
    }
}
result_t do_things_with_special (void) // called from main loop
{
    if(!special.busy)
        return error; // no new message received, no can do

    // do things with the message

    special.busy = false; // flag that processing is done
    return ok;
}
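A possible main loop using the handler above might look like this rough sketch; it reuses the names from the pseudo code (result_t, do_things_with_special) and everything else is assumed:

int main (void)
{
    /* init the CAN peripheral, enable interrupts, etc. */

    for(;;)
    {
        result_t result = do_things_with_special();
        (void)result;   /* e.g. 'error' simply means no new special message yet */

        /* other main loop work goes here */
    }
}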
an ISR routine should be as simple as possible.
Exactly.
There is a Linux kernel concept called the bottom half, i.e. keeping the ISR as simple as possible and deferring the rest of the interrupt handling work to a later time.
There are many ways to implement a bottom half, such as tasklets, workqueues, etc.
I suggest reading the following links:
http://www.makelinux.net/ldd3/chp-10-sect-4
http://www.ibm.com/developerworks/library/l-tasklets/
Keeping interrupts disabled for a long time can lead to missed interrupts (and obviously some data loss). After the interrupt flag is set, schedule deferred work and leave the interrupt context.
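As a hedged sketch of the deferral idea the links describe (using the Linux workqueue API; the handler and work names are invented for illustration, and on a bare-metal Cortex-M3 the equivalent is simply setting a flag for the main loop, as in the previous answer):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

/* Bottom half: the heavy processing runs later, in process context. */
static void heavy_processing(struct work_struct *work)
{
    /* time-consuming part goes here; sleeping is allowed */
}

static DECLARE_WORK(deferred_work, heavy_processing);

/* Top half: acknowledge the hardware and defer the rest. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* acknowledge/clear the device interrupt here */
    schedule_work(&deferred_work);
    return IRQ_HANDLED;
}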
Closed. This question does not appear to be about programming within the scope defined in the help center. It is not currently accepting answers. Closed 8 years ago.
I am trying to implement bit-banged I2C to communicate between an ATmega128A's GPIO and an SHT21 (the hardware I2C bus is used for some other devices). The first task is to send a write sequence to the SHT21. Using an oscilloscope, I can see that the sequence sent out from the ATmega has the correct start condition, the correct order of bits, the correct stop condition, and the signal levels look correct. The scope's serial analyzer reads out the correct I2C message: S80W~A . Yet there is no ACK response from the SHT21. SDA and SCL are both pulled up to 3.3 V via 6.7K resistors.
I really need help finding out what is wrong.
Thank you so much.