I'm trying to start a thread as soon as an interrupt occurs. However, I have realized that I can't start a thread from within an interrupt handler (or any function that is directly or indirectly called by the interrupt handler). So what I have decided to do is have the handler assert a flag. A separate thread then continuously monitors that flag and, if it is asserted, creates (and starts) a thread. Here is some pseudocode:
volatile sig_atomic_t interrupt_flag = 0;

void interrupt_handler(void)
{
    interrupt_flag = 1;
}

void monitoring_thread(void) // this thread is started at the start of the program
{
    while (1)
    {
        if (interrupt_flag)
        {
            interrupt_flag = 0;
            // start the thread here
        }
        sleep(/*some amount of time*/); // throttle the polling loop
    }
}
I'm not really happy with having a dedicated loop constantly polling a flag: the busy-waiting steals CPU time and significantly slows down the other threads in my program. That is why I call the sleep function, to give the other threads more CPU time.
Question: Is there a way I can truly start a thread upon interrupt, without having a dedicated while loop? Is there a workaround for starting a thread from within an interrupt handler?
If it makes any difference, I'm using the POSIX library.
Thanks,
PS. This question is somewhat related to an earlier question posted here:
Sharing data between master thread and slave thread in interrupt driven environment in C
Instead of having your monitoring thread spin on a flag, it could wait until the interrupt handler provides notification that a thread should be spawned. One way to do this is with a semaphore:
#include <semaphore.h>

sem_t interrupt_sem;   /* initialize once at startup: sem_init(&interrupt_sem, 0, 0); */

void interrupt_handler(void)
{
    sem_post(&interrupt_sem);   /* sem_post() is async-signal-safe */
}

void monitoring_thread(void)
{
    while (1)
    {
        sem_wait(&interrupt_sem);   /* sleeps until the handler posts */
        //start the thread here
    }
}
Previously, I had a solution based on a condition variable, but it is unlikely your system would operate correctly if the interrupt handler makes blocking calls: that could cause a deadlock or other undefined behavior, since the variables in the system may not have consistent values at the time the interrupt takes place.
As pointed out in comments by myself and others, your operating system should provide some kind of interface to explicitly wake up a waiting task. In the code above, we are assuming the monitoring thread is always active in the background.
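Putting the pieces together, a minimal sketch of the whole flow might look like this (worker_thread and the detach policy are just placeholders; the semaphore must be initialized with sem_init(&interrupt_sem, 0, 0) before interrupts are enabled):

#include <pthread.h>
#include <semaphore.h>

sem_t interrupt_sem;

static void *worker_thread(void *arg)   /* placeholder for the real work */
{
    /* ... handle whatever the interrupt signalled ... */
    return NULL;
}

void monitoring_thread(void)
{
    pthread_t tid;

    while (1)
    {
        sem_wait(&interrupt_sem);                          /* sleeps until the handler posts */
        pthread_create(&tid, NULL, worker_thread, NULL);   /* start the thread here */
        pthread_detach(tid);                               /* or keep the id and join later  */
    }
}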
You can use a POSIX semaphore for this as well: have a thread wait on a semaphore whose initial value is 0, so the wait blocks, and post that semaphore from your signal handler function. The waiting thread is then woken up and can do whatever you want (for example, create the thread).
I want to write code that switches between threads every 10 microseconds.
But the problem is in the yield function: I get an interrupt while the timer handler is still running, so it doesn't finish properly.
This is the code I have for initializing the timer:
signal(SIGALRM, &time_handler);
struct itimerval t1;
t1.it_interval.tv_sec = INTERVAL_SEC;
t1.it_interval.tv_usec = INTERVAL_USEC;
t1.it_value.tv_sec = INTERVAL_SEC;
t1.it_value.tv_usec = INTERVAL_USEC;
setitimer(ITIMER_REAL, &t1, NULL);
And this is the code for the handler function:
void time_handler(int signo)
{
    write(STDOUT_FILENO, "interrupt\n", sizeof("interrupt\n") - 1); /* -1: don't write the '\0' */
    green_yield();
}
And this is what I do in the yield function: the running thread is put back on a ready queue, from which we get the thread to run next. The problem is that at any moment before I swap the context between threads I can get an interrupt, especially because the context switch happens at the end of this function.
int green_yield()
{
    green_t *susp = running;

    // add susp to the ready queue
    enQueue(ready_queue, susp);

    // select the next thread for execution
    green_t *next = deQueue(ready_queue);
    running = next;

    // save the current state into susp->context and switch to next->context
    swapcontext(susp->context, next->context);
    return 0;
}
What can I do to make sure that I first complete the yield function and then get the interrupt?
Foreword: depending on your system hardware, a write() system call to stdout may take longer than 10 us. So, calling it from the SIGALRM handler with a cyclic 10 us timer may be wrong.
In GLIBC, signal(SIGALRM, time_handler) is equivalent to sigaction() with the SA_RESTART flag, and SIGALRM is implicitly blocked while the handler executes and unblocked when the handler returns. So you will not receive another SIGALRM while running the handler; and since the handler calls green_yield(), you will not get the signal while running inside green_yield() either.
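For reference, a roughly equivalent explicit setup with sigaction() would look like this (a sketch with error checking omitted; time_handler is the handler from your code):

#include <signal.h>
#include <string.h>

static void install_alarm_handler(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = time_handler;
    sigemptyset(&sa.sa_mask);     /* SIGALRM itself is still blocked while the handler runs */
    sa.sa_flags = SA_RESTART;     /* what GLIBC's signal() gives you */
    sigaction(SIGALRM, &sa, NULL);
}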
Since getcontext() saves the signal mask with SIGALRM unblocked (I guess you call it at the beginning of your program, when you create the threads), when you swap contexts to go from an interrupted thread running the signal handler to the next schedulable thread, the newly running thread:
on its first scheduling, returns from its getcontext() (the thread creation point). This restores the signal mask even though the previous thread did not return from the signal handler, because the saved context contains a signal mask with SIGALRM unblocked. When the timer elapses again, SIGALRM will interrupt the newly running thread, which will yield the CPU from the signal handler by calling swapcontext(). This time the saved context contains a signal mask with SIGALRM blocked;
on subsequent schedulings, returns from swapcontext(), since it was interrupted by the signal and was therefore executing the end of the signal handler. The restored context has SIGALRM blocked, but it will be unblocked as part of the execution of the signal handler, since execution resumes at the end of the handler.
Even if the preceding works in principle, note that when a signal is raised, the system creates a stack frame on top of the current process stack to make the signal handler appear as a function called by the user program and returning at the interruption point. This frame must not be corrupted by threads running from any point on the global process stack. The use of sigaltstack() may be considered (see the notes below).
What about your thread implementation? The threads all share the same stack (the process stack). When you create them, they all save their context with getcontext() at nearly the same point on the global process stack. So, when you switch from one thread to another, the newly running thread may corrupt the stack frames of the previously running threads... I think this is the point you should focus on: arrange for your threads to run with their own zone of the global stack, or with their own stack set up with something like makecontext(). The manual page of the latter provides an example that creates several threads of execution with separate stacks, roughly as sketched below.
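Here is a minimal sketch of that idea, in the spirit of the makecontext() manual page example. I am assuming green_t holds a ucontext_t pointer named context, as your swapcontext(susp->context, next->context) call suggests; the struct, helper name and stack size below are mine:

#include <stdlib.h>
#include <ucontext.h>

#define GREEN_STACK_SIZE (64 * 1024)

typedef struct green { ucontext_t *context; } green_t;   /* minimal stand-in for your type */

static void green_init_context(green_t *t, void (*entry)(void))
{
    t->context = malloc(sizeof(ucontext_t));
    getcontext(t->context);                                   /* initialise the context     */
    t->context->uc_stack.ss_sp   = malloc(GREEN_STACK_SIZE);  /* private stack              */
    t->context->uc_stack.ss_size = GREEN_STACK_SIZE;
    t->context->uc_link          = NULL;                      /* or a scheduler context     */
    makecontext(t->context, entry, 0);                        /* entry() runs on that stack */
}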
Side notes:
swapcontext() is not in the list of functions that can safely be called from a signal handler (cf. man 7 signal-safety), so strictly speaking it is not safe to call it from there. At the same time, non-local gotos (i.e. longjmp()) can safely be called from a signal handler, and since swapcontext() looks like a non-local goto, it may be safe to call it under the same conditions as longjmp()...
The manual of sigaltstack() provides some tips on using swapcontext() from signal handlers.
There are Linux kernel threads that do some work every now and then, then either go to sleep or block on a semaphore. They can stay in this state for several seconds, which is quite a long time for a thread.
If the threads need to be stopped for some reason, at least when unloading the driver they belong to, I am looking for a way to get them out of the sleep or out of the semaphore without waiting the whole sleep time or triggering the semaphore as often as required.
I have found and read a lot about this, but the advice is conflicting and I am still not sure how things work, so I would appreciate it if you could shed some light on it.
msleep_interruptible
What is able to interrupt that?
down_interruptible
This semaphore function implies interruptibility. Same question here: what can interrupt this semaphore?
kthread_stop
It is described as setting kthread_should_stop() to return true and waking the thread... but this function blocks until the sleep time is over (even when using msleep_interruptible) or the semaphore is triggered.
What am I understanding wrong?
Use a signal to unblock - really?
My search found that a signal can interrupt the thread. Other hits say a signal is not the best way to operate on threads.
If a signal is the best choice, which signal do I use to unblock the thread without messing it up too much?
SIGINT is a termination signal; I don't intend to terminate anything, just to make it go on.
More information
The threads run a loop that checks a termination flag, does some work and then blocks in a sleep or on a semaphore. They are used in the following situations.
Situation 1.
A producer-consumer scenario that uses semaphores to synchronize producer and consumer. They work perfectly to make the threads wait for work and start running when the semaphore is posted.
Currently I'm setting the termination flag and then posting the semaphore. This unblocks the thread, which then checks the flag and terminates. This isn't my major problem; however, I'd of course like to know about a better way.
Code sample
while (keep_running) {
    do_your_work();
    down_interruptible(&mysemaphore); // Intention: break out of this
}
Situation 2.
A thread that periodically logs things. This thread sleeps for some seconds between doing its work. After setting the flag, this thread terminates on its next run, but that can take several seconds. I want to break the sleep if necessary.
Code sample
while (keep_running) {
    do_your_work();
    msleep(15000); // Intention: break out of this - msleep_interruptible?
}
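For what it is worth, a commonly seen kernel idiom for situation 2 is to replace the plain msleep() with a wait whose wake-up condition is kthread_should_stop(): kthread_stop() wakes the thread, the condition is re-checked, and the sleep ends immediately. A sketch (the wait queue name and the function are mine, not from your driver):

#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/jiffies.h>

static DECLARE_WAIT_QUEUE_HEAD(logger_wq);

static int logger_thread(void *data)
{
    while (!kthread_should_stop()) {
        do_your_work();
        /* sleep up to 15 s, but return as soon as kthread_stop() is called */
        wait_event_interruptible_timeout(logger_wq,
                                         kthread_should_stop(),
                                         msecs_to_jiffies(15000));
    }
    return 0;
}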
This could be a non-programming question, but I did read about thread synchronization objects such as events and how they are set to the signalled or non-signalled state. However, I couldn't understand the terms signalled and non-signalled; each source expresses them in a different way, and I'm a bit confused.
This link states that
A signaled state indicates a resource is available for a process or thread to use it. A not-signaled state indicates the resource is in use.
I got a PowerPoint presentation from a university site which states that
An object that is in the signaled state will not cause a thread that is waiting on the object to block, and an object that is not in the signaled state will cause any thread that waits on that object to block until the object again becomes signaled.
This third link states:
An event is in signaled state means that it has the capacity to release the threads waiting for this event to be signaled. An event is in non signaled state means that it will not release any thread that is waiting for this particular event.
A simple explanation on this concept with an example would be really helpful.
OK, your three quotes are not incompatible. But let's go a bit deeper, down to the implementation:
Every waitable object has a boolean value attached to it, named the signalled state, that is used to wait for that object; if the object is signalled, then the wait functions will not wait for it; if the object is non-signalled, then the wait functions will wait for it.
Now, how does this apply to a particular type of object? That depends on the object's nature, and specifically on the semantics associated with waiting for it. Actually, the signalled state is defined in terms of the wait condition. For example (see the docs for details):
A mutex is signalled when it is not owned.
A process/thread is signalled when it has finished.
A semaphore is signalled when its count is greater than 0.
A waitable timer is signalled when it has expired.
You might like it better if a mutex were signalled when owned, but actually it is signalled when not owned. That is necessary to make the wait functions do the right thing.
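A tiny Win32 sketch to illustrate the mutex case (error handling omitted):

#include <windows.h>

static HANDLE hMutex;   /* created once, e.g. at startup */

void init_mutex(void)
{
    hMutex = CreateMutex(NULL, FALSE, NULL);      /* not owned initially => signalled */
}

void use_resource(void)
{
    if (WaitForSingleObject(hMutex, INFINITE) == WAIT_OBJECT_0)
    {
        /* we now own the mutex; it is non-signalled for everyone else */
        /* ... critical section ... */
        ReleaseMutex(hMutex);                     /* back to signalled */
    }
}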
And what about events? Well, they are somewhat simpler objects: you can signal and de-signal them at will, so the signalled state has no additional meaning:
signalled: Threads will not wait for it.
non-signalled: Threads will wait for it.
Events also have the PulseEvent and auto-reset features, which are a bit peculiar (and in my experience practically impossible to use right).
Now, let's look at your quotes:
A signaled state indicates a resource is available for a process or thread to use it. A not-signaled state indicates the resource is in use.
Actually, that is an interpretation. Usually there is a resource you are trying to arbitrate, and usually you wait if-and-only-if that resource is in use, so it equates resource-in-use with wait-for-resource. But that's not a technical requirement, just the usual use case.
An object that is in the signaled state will not cause a thread that is waiting on the object to block, and an object that is not in the signaled state will cause any thread that waits on that object to block until the object again becomes signaled.
Correct and to the point!
An event is in signaled state means that it has the capacity to release the threads waiting for this event to be signaled. An event is in non signaled state means that it will not release any thread that is waiting for this particular event.
I find this wording a bit confusing... but it adds nothing over the previous one.
Easy way to think of it: "signalled" = "green light"
Signalled:
If you're driving and you see a green light, you don't stop (this is the thread looking at an event, finding it signalled and carrying on without blocking).
Non-Signalled:
If you see a red light, you stop, wait for it to become green and then carry on (safe in the knowledge that the other threads, whose events are still non-signalled, are waiting or will wait at their own red lights!)
Well, in fact all these explanations are congruent.
The most simplified (and hence not 100% accurate) explanation of an event is to see it as a kind of flag service provided by the operating system: a signalled event can be seen as a set flag, while an unsignalled event can be seen as an unset flag.
To implement a producer/consumer thread system based on flags, you usually do something like the following (note that for the sake of simplicity I neglect further synchronization mechanisms):
static volatile int flag = 0;
static volatile char data = 'A';

// Some code to initialize the threads

void producer()
{
    while (1)
    {
        Sleep(1000);
        data++;
        flag = 1;
    }
}

void consumer()
{
    while (1)
    {
        /* Busy wait for the occurrence of more data */
        while (!flag)
        {
            // wait for next data
        }
        flag = 0;
        // process data
    }
}
Unfortunately, this leads either to a waste of processor cycles in the busy-wait loop or to an unwanted deferral of execution due to a Sleep call introduced to lower the CPU consumption. Both are unwanted.
In order to avoid such problems with task synchronization, operating systems provide different flag-like mechanisms (e.g. events on Windows). With events, setting and resetting a flag is done by the OS calls SetEvent/ResetEvent. To check a flag you can use WaitForSingleObject; this call can put a task to sleep until the event is signalled, which is optimal in terms of CPU consumption.
This turns the above example into something like this:
static volatile char data = 'A';
static HANDLE newDataEvent = INVALID_HANDLE_VALUE;

// Some code to initialize the threads and the newDataEvent handle

void producer()
{
    while (1)
    {
        Sleep(1000);
        data++;
        SetEvent(newDataEvent);
    }
}

void consumer()
{
    while (1)
    {
        if (WaitForSingleObject(newDataEvent, INFINITE) == WAIT_OBJECT_0)
        {
            ResetEvent(newDataEvent);
            // process data
        }
    }
}
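For illustration only, the initialization that is elided above might look something like the following; note that an auto-reset event (bManualReset = FALSE) would make WaitForSingleObject reset the event automatically, so the explicit ResetEvent() call would not be needed:

static void init_sync(void)
{
    newDataEvent = CreateEvent(NULL,    /* default security attributes        */
                               TRUE,    /* manual-reset, as in the code above */
                               FALSE,   /* initially non-signalled            */
                               NULL);   /* unnamed event                      */
}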
I don't really agree with other answers. They miss the point:
if the signalled property is true => the event happened before now.
if the signalled property is false => the event did not happen until now.
Here "the signalled property is false" is the same as "the not-signalled property is true".
The three definitions all refer to threads, but they are not entirely clear, because the notion of a signal doesn't come from multi-threading but from low-level programming.
Signals come from interrupts:
"if that signal becomes high (= interrupt), move the execution pointer to this function".
That is the meaning of signal, and it comes from interrupts, not from threading. So, not-signalled means the signal did not become high until now.
In threading this becomes:
"A thread needs an event to have happened in order to continue. If it happened before now, the thread can continue; otherwise it blocks itself and waits for it."
gcc 4.4.3 c89
I have an event loop that runs in a separate thread.
My design is like the sample code below; it is just there to help explain.
I need to somehow wait for the initialization to complete before I can make a call to the get_device_params.
I did put a usleep for 3 seconds just before the call to get_device_params, but I don't really want to block.
Many thanks for any suggestions,
void* process_events(void *data)
{
    /* ... event loop: wait for the next event, then dispatch it ... */
    switch (event_type)
    {
    case EVT_INITIALIZED:
        /* Device is now initialized */
        break;
    }
}

int main(void)
{
    pthread_t event_thread;

    /* Create and start a thread to process incoming events */
    pthread_create(&event_thread, NULL, process_events, NULL);

    /* Initialize device */
    initialize_device();

    /* Get device parameters */
    /* However, I cannot run this code until initialization is complete */
    get_device_params();

    return 0;
}
If this separate thread is a POSIX thread (i.e. you're on a typical UNIX platform), then you can use pthread condition variables.
You call pthread_cond_wait() in the waiting thread, and when the init thread finishes its work, you call pthread_cond_signal(). In my opinion that's a canonical way to wait for initialization in another thread.
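A minimal sketch of that pattern (the names are mine): the condition variable is paired with a mutex and a boolean predicate, so that spurious wakeups and a signal sent before the wait are both handled correctly. signal_device_ready() would be called from the EVT_INITIALIZED case of the event loop, and wait_for_device_ready() from main before get_device_params():

#include <pthread.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  init_cond = PTHREAD_COND_INITIALIZER;
static int device_ready = 0;

void signal_device_ready(void)            /* call on EVT_INITIALIZED */
{
    pthread_mutex_lock(&init_lock);
    device_ready = 1;
    pthread_cond_signal(&init_cond);
    pthread_mutex_unlock(&init_lock);
}

void wait_for_device_ready(void)          /* call before get_device_params() */
{
    pthread_mutex_lock(&init_lock);
    while (!device_ready)                 /* loop guards against spurious wakeups */
        pthread_cond_wait(&init_cond, &init_lock);
    pthread_mutex_unlock(&init_lock);
}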
I need to somehow wait for the initialization to complete before I can make a call to the get_device_params.
Since you apparently have some sort of an FSM inside process_events(), and since, for whatever reason, it runs in a separate thread, you shouldn't do anything with the device from the main thread.
In other words, logically, the call to get_device_params() should be placed inside the FSM, in the handler for the device-initialized event (EVT_INITIALIZED), which I presume is triggered by initialize_device().
Alternatively, you can create a second FSM (possibly in another thread) and let process_events() (the first FSM), after it has finished its own processing, forward the EVT_INITIALIZED event to the second FSM. (Or initialize_device() could send the event to both FSMs simultaneously.)
To me it seems (from the scarce code you have posted) that your problem is that you are trying to mix sequential code with event-based code. Rule of thumb: in an event/FSM-based application, all code should run inside the FSM, triggered by an event; there should be no code that runs on its own outside of the FSM.
If it were me, I would probably use a barrier. In main you can call pthread_barrier_init, indicating that you have 2 threads. Then, in main, call pthread_barrier_wait to wait on the barrier you initialized, after calling your device initialization function. Finally, in the device thread, after you initialize your device, call pthread_barrier_wait on the same barrier; once both threads are waiting, the barrier is satisfied and both threads continue. I find barriers easier to use than condition variables sometimes, but I'm sure that's a matter of preference.
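A minimal sketch of the barrier idea (it assumes POSIX barriers are available; where exactly initialize_device() runs depends on your design, so treat this as an illustration rather than a drop-in answer):

#include <pthread.h>

static pthread_barrier_t init_barrier;

static void *event_thread(void *arg)
{
    /* ... run the event loop until EVT_INITIALIZED is seen ... */
    pthread_barrier_wait(&init_barrier);            /* rendezvous with main */
    /* ... keep processing events ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_barrier_init(&init_barrier, NULL, 2);   /* two participants */
    pthread_create(&tid, NULL, event_thread, NULL);

    initialize_device();
    pthread_barrier_wait(&init_barrier);            /* returns once both threads have arrived */
    get_device_params();                            /* initialization is now complete */

    pthread_join(tid, NULL);
    pthread_barrier_destroy(&init_barrier);
    return 0;
}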
I am writing a basic user level thread library. The function prototype for thread creation is
thr_create(start_func_pointer, arg)
{
    make_context(context_1, start_func);
}
start_func will be user defined and can change depending on the user/program.
Once the thread is created, if I start executing it using
swapcontext(context_1, context_2)
the function start_func starts running. Now, if a signal comes in, I need to handle it. Unfortunately, I just have a pointer to start_func, so I can't really define the signal action inside start_func.
Is there a way I can add a signal handling structure inside start_func and point it to my code? Something like this:
thr_create(start_func_pointer, arg)
{
    start_func.add_signal_handling_structure = my_signal_handler();
    make_context(context_1, start_func);
}
Does anybody know how POSIX does it?
If you are talking about catching real signals from the actual operating system you are running on, I believe you are going to have to do this application-wide and then pass the signals on down into each thread (more on this later). The problem is that it gets complicated if two (or more) of your threads are trying to use alarm, which uses SIGALRM: when the real signal happens you can catch it, but then whom do you deliver it to (one thread, or all of them)?
If you are talking about sending and catching signals just among the threads within a program using your library, then sending a signal to a thread would cause it to be marked ready to run, even if it was previously waiting on something else, and any signal handling functionality would be called from your thread-resume code. If I remember from your previous questions, you had a function called thread_yield which was called to allow the next thread to run. If that is the case, then thread_yield needs to check a list of pending signals and perform their actions before returning to wherever thread_yield was called (unless one of the signal handlers involves killing the current thread, in which case you have to do something different).
As far as registering signal handlers goes, in POSIX that is done by calls made from the thread's main function (either directly or indirectly). So you could have:
static int foo_flag = 0;

static void foo_handle(int sig)
{
    foo_flag = 1;
}

int start_func(void *arg)
{
    thread_sig_register(SIGFOO, foo_handle);
    thread_pause();
    // thread_pause() is a function that you could write that would cause the
    // current thread to mark itself as not ready to run and then call
    // thread_yield, so that thread_pause() returns only after something else
    // (a signal) causes the thread to become ready to run again.
    if (foo_flag) {
        printf("I got SIGFOO\n");
    } else {
        printf("I don't know what woke me up\n");
    }
    return 0;
}
Now, from another thread you can send this thread a SIGFOO (which is just a signal I made up for demonstration purposes).
Each of your thread control blocks (or whatever you are calling them) will have to have a signal handler table (or list, or something) and a pending-signal list, or some other way to mark signals as pending. The pending signals are examined (possibly in some priority-based order) and the handler action is performed for each pending signal before returning to that thread's normal code.
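A sketch of that bookkeeping (all names and sizes are mine, not part of any real API): each thread control block keeps a table of handlers plus a pending mask, and the resume/yield path drains the pending signals before returning into the thread's normal code.

#define THREAD_MAX_SIG 32

typedef void (*thread_sighandler_t)(int);

typedef struct thread_tcb {
    thread_sighandler_t handlers[THREAD_MAX_SIG];   /* registered per-thread handlers */
    unsigned int        pending;                    /* bit n set => signal n pending  */
    int                 ready;                      /* ready-to-run flag              */
    /* ... saved context, stack, etc. ... */
} thread_tcb;

void thread_kill(thread_tcb *t, int sig)            /* deliver a signal to a thread */
{
    t->pending |= 1u << sig;
    t->ready = 1;                                   /* wake it if it was paused */
}

static void deliver_pending_signals(thread_tcb *t)  /* called from thread_yield()/resume */
{
    int sig;

    for (sig = 0; sig < THREAD_MAX_SIG; sig++) {
        if ((t->pending & (1u << sig)) && t->handlers[sig]) {
            t->pending &= ~(1u << sig);
            t->handlers[sig](sig);                  /* run the handler in this thread */
        }
    }
}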