Can someone explain why we should not call non-async-signal-safe functions from signal handlers? What is the exact sequence of steps that corrupts a program when such functions are called?
Also, do signal handlers always run on a separate stack? If so, is that a separate context, or do they run in the context of the signaled thread?
Finally, in a multi-threaded process, what happens when a signal handler is executing and another thread is signaled and invokes the same handler?
(I am trying to develop a deep understanding of signals and their applications.)
When a process receives a signal, it is handled in the context of the process. You should only use async-signal-safe or re-entrant functions inside a signal handler. For instance, you cannot safely call malloc() or printf() within a signal handler. The reasons:
*) Let's assume your process was executing inside malloc() when it received the signal, so the global heap data structures are in an inconsistent state. If you now acquire the heap lock from inside your signal handler and make changes, you render the heap even more inconsistent.
*) Another possibility: if the heap lock was already held by your process when it received the signal, and you then call malloc() from your signal handler, it sees that the lock is held and waits forever to acquire it (forever, because the code that could release the lock will not run until the signal is completely handled). See the sketch below.
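To make this concrete, here is a minimal sketch of the hazard (the SIGALRM timer and the buffer sizes are arbitrary choices for illustration). The program may run fine, corrupt the heap, or deadlock; that unpredictability is exactly the problem:

    /* unsafe.c -- deliberately broken: calls malloc() from a handler */
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void handler(int sig)
    {
        (void)sig;
        /* UNSAFE: if the signal arrived while main() was inside
           malloc() holding the heap lock, this call can deadlock
           or operate on half-updated heap metadata. */
        void *p = malloc(64);
        free(p);
        alarm(1);               /* re-arm to keep exercising the race */
    }

    int main(void)
    {
        signal(SIGALRM, handler);
        alarm(1);               /* deliver SIGALRM asynchronously */
        for (;;) {              /* stop with Ctrl-C */
            void *p = malloc(128);  /* the handler may interrupt us here */
            free(p);
        }
    }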
2) Signals run in the context of the process. As for the signal stack, you can look at this SO answer: Do signal handlers have a separate stack?
3) As for getting multiple instances of the same signal, you can look at this link: Signal Handling in UNIX, where Rumple Stiltskin answers it well.
I know some Solaris, so I'm using that for the details. LWP == the Solaris term for a "thread", as in pthreads.
Trap signals, like SIGILL, are delivered to the thread that caused the trap. Asynchronous signals are delivered to the first active thread (LWP) in the process that is not blocking that signal. A kernel module called aslwp() traverses the process-header table (which holds the associated LWPs) looking for the first likely candidate to receive the asynchronous signal.
A signal stack lives in the kernel. I'm not sure what/how to answer your signal stack question.
One process may have several pending signals. Is that what you mean?
Each signal destined for a process is held there until the process switches context (or is forced) into the active state. This is in part because you generally cannot incur a trap when the process context has been swapped out and the process is doing nothing CPU-wise. You certainly can incur asynchronous signals. But the process cannot "do anything" with any signal if it cannot run. So, at that point the kernel swaps the context back to active, and the signal is delivered via aslwp().
Realtime signals behave differently, and I'll leave it at that.
Try reading this:
developers.sun.com/solaris/articles/signalprimer.html
Platform is Linux/POSIX.
The signal is sent to a whole process, not a specific thread.
No signals are blocked; all dispositions are the default.
The process is a multi-threaded process.
From what I've googled, a signal may be handled by a random thread.
And while a signal's handler is executing, that signal is temporarily blocked until the handler returns.
QUESTION: Multiple signals of different types arrive almost simultaneously. Do their handlers execute concurrently on multiple threads, or do all of them go to one randomly picked thread (SUB-QUESTION: in that case one handler could interrupt another handler that started earlier, so is there an interrupt stack?)? Or is it mixed? For instance, suppose 3 types of signals arrive but only 2 threads are free (this is actually the first case).
EXAMPLE: SIGHUP, SIGINT and SIGTERM arrive almost simultaneously. The program has two threads available to dispatch signal handler execution.
SIDE-QUESTION: If signal handlers run in parallel, I'll have to use a mutex to synchronize them properly. Otherwise 'volatile sig_atomic_t' would be enough, right?
Expected: all signals go to one thread (randomly picked) regardless of their signal types; I haven't seen an example of using mutexes and atomics to synchronize signal handlers.
Your understanding is correct: unless a signal was directed to a specific thread, there's no guarantee which thread will handle it.
See POSIX's Signal Generation and Delivery and pthreads(7):
POSIX.1 distinguishes the notions of signals that are directed
to the process as a whole and signals that are directed to
individual threads. According to POSIX.1, a process-directed
signal (sent using kill(2), for example) should be handled by
a single, arbitrarily selected thread within the process.
So it may be delivered and handled by the same thread that's currently handling another signal (in that case, the previous handler may be interrupted by the new signal). Or it may be delivered to another thread.
You can block other signals while one is being handled using the sa_mask field of sigaction, to avoid a signal handler being interrupted.
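For instance, a minimal sketch (the signal choices are arbitrary) of using sa_mask so the SIGINT handler cannot be interrupted by SIGTERM or SIGHUP:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int sig)
    {
        (void)sig;
        got_sigint = 1;                   /* async-signal-safe: set a flag */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sigaddset(&sa.sa_mask, SIGTERM);  /* held pending while handler runs */
        sigaddset(&sa.sa_mask, SIGHUP);
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)
            pause();                      /* wait for a signal */
        return 0;
    }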
SIDE-QUESTION: If signal handlers run in parallel, I'll have to use a mutex to synchronize them properly. Otherwise 'volatile sig_atomic_t' would be enough, right?
You almost certainly don't want to use a mutex in a signal handler. There are only a few functions that can be safely called from a signal handler (you can only call functions that are async-signal-safe).
See signal-safety(7) for more information.
If volatile sig_atomic_t works for your purpose (do you need to coordinate execution of different signal handlers?), it should be preferred.
Expected: all signals go to one thread (randomly picked) regardless of their signal types; I haven't seen an example of using mutexes and atomics to synchronize signal handlers.
This is commonly done by blocking the signals you're interested in from main and fetching/handling them in a specific thread. See pthread_sigmask, which also has an example of how to implement this.
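A condensed sketch of that pattern, loosely based on the example in pthread_sigmask(3) (the signal set and the thread body are illustrative; compile with -pthread):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static void *signal_thread(void *arg)
    {
        sigset_t *set = arg;
        int sig;
        for (;;) {
            sigwait(set, &sig);              /* blocks until a signal arrives */
            printf("got signal %d\n", sig);  /* ordinary code: not a handler */
        }
        return NULL;
    }

    int main(void)
    {
        sigset_t set;
        pthread_t tid;

        sigemptyset(&set);
        sigaddset(&set, SIGHUP);
        sigaddset(&set, SIGINT);
        sigaddset(&set, SIGTERM);
        /* block before creating threads: new threads inherit the mask */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_create(&tid, NULL, signal_thread, &set);
        pthread_join(tid, NULL);   /* a real program would do its work here */
        return 0;
    }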
I'm running a multithreaded application written in C on Linux.
To stop execution I send SIGINT and from the signal handler call a number of cleanup routines and, finally, call exit(0).
Are the other threads still running, or could they be scheduled to run (via a context switch), while the handler executes the cleanup routines?
Handling a signal does not cause the suspension of other threads during execution of the signal handler. Moreover, it's generally not safe to call most functions you would need for cleanup (including even exit!) from a signal handler unless you can ensure that it does not interrupt an async-signal-unsafe function.
What you should do is simply store the fact that SIGINT was received in some async-signal-safe manner and have the program act on that condition as part of its normal flow of execution, outside the signal handler. Then you can properly synchronize with other threads (using mutexes, condition variables, etc.) to achieve a proper, safe shutdown. The ideal method is not to even install a signal handler, but instead block all signals and have a dedicated signal-handling thread calling sigwaitinfo in a loop to accept signals.
Yes, a signal is delivered to one thread, chosen in an unspecified way. Only threads that aren't blocking the signal are considered, though; if all threads block the signal, it remains queued up until one thread unblocks it.
(So if you make all threads block the signal, you can use the signal as a deterministic, inter-process synchronization mechanism, e.g. using sigwait.)
I would like to know exactly how the execution of asynchronous signal handlers works on Linux. First, I am unclear as to which thread executes the signal handler. Second, I would like to know the steps that are followed to make the thread execute the signal handler.
On the first matter, I have read two different, seemingly conflicting, explanations:
The Linux Kernel, by Andries Brouwer, §5.2 "Receiving signals" states:
When a signal arrives, the process is interrupted, the current registers are saved, and the signal handler is invoked. When the signal handler returns, the interrupted activity is continued.
The StackOverflow question "Dealing With Asynchronous Signals In Multi Threaded Program" leads me to think that Linux's behavior is like SCO Unix's:
When a signal is delivered to a process, if it is being caught, it will be handled by one, and only one, of the threads meeting either of the following conditions:
A thread blocked in a sigwait(2) system call whose argument does include the type of the caught signal.
A thread whose signal mask does not include the type of the caught signal.
Additional considerations:
A thread blocked in sigwait(2) is given preference over a thread not blocking the signal type.
If more than one thread meets these requirements (perhaps two threads are calling sigwait(2)), then one of them will be chosen. This choice is not predictable by application programs.
If no thread is eligible, the signal will remain "pending" at the process level until some thread becomes eligible.
Also, "The Linux Signals Handling Model" by Moshe Bar states "Asynchronous signals are delivered to the first thread found not blocking the signal.", which I interpret to mean that the signal is delivered to some thread having its sigmask not including the signal.
Which one is correct?
On the second matter, what happens to the stack and register contents for the selected thread? Suppose the thread-to-run-the-signal-handler T is in the middle of executing a do_stuff() function. Is thread T's stack used directly to execute the signal handler (i.e. the address of the signal trampoline is pushed onto T's stack and control flow goes to the signal handler)? Alternatively, is a separate stack used? How does it work?
These two explanations really aren't contradictory if you take into account the fact that Linux hackers tend to be confused about the difference between a thread and a process, mainly due to the historical mistake of trying to pretend threads could be implemented as processes that share memory. :-)
With that said, explanation #2 is much more detailed, complete, and correct.
As for the stack and register contents, each thread can register its own alternate signal-handling stack, and the process can choose on a per-signal basis which signals will be delivered on alternate signal-handling stacks. The interrupted context (registers, signal mask, etc.) will be saved in a ucontext_t structure on the (possibly alternate) stack for the thread, along with the trampoline return address. Signal handlers installed with the SA_SIGINFO flag are able to examine this ucontext_t structure if they like, but the only portable thing they can do with it is examine (and possibly modify) the saved signal mask. (I'm not sure if modifying it is sanctioned by the standard, but it's very useful because it allows the signal handler to atomically replace the interrupted code's signal mask upon return, for instance to leave the signal blocked so it can't happen again.)
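A minimal sketch of such a handler (assuming, as discussed above, that modifying uc_sigmask in the saved context is honored on your platform):

    #include <signal.h>
    #include <ucontext.h>
    #include <unistd.h>

    static void handler(int sig, siginfo_t *info, void *ctx)
    {
        ucontext_t *uc = ctx;      /* interrupted context saved by the kernel */
        (void)info;
        /* leave the signal blocked after the handler returns, so it
           cannot be delivered again until explicitly unblocked */
        sigaddset(&uc->uc_sigmask, sig);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;  /* handler receives the ucontext_t */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);
        for (;;)
            pause();
    }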
Source #1 (Andries Brouwer) is correct for a single-threaded process. Source #2 (SCO Unix) is wrong for Linux, because Linux does not prefer threads in sigwait(2). Moshe Bar is correct about the first available thread.
Which thread gets the signal? Linux's manual pages are a good reference. A process uses clone(2) with CLONE_THREAD to create multiple threads. These threads belong to a "thread group" and share a single process ID. The manual for clone(2) says,
Signals may be sent to a thread group as a whole (i.e., a
TGID) using kill(2), or to a specific thread (i.e., TID) using
tgkill(2).
Signal dispositions and actions are process-wide: if an
unhandled signal is delivered to a thread, then it will affect
(terminate, stop, continue, be ignored in) all members of the
thread group.
Each thread has its own signal mask, as set by sigprocmask(2),
but signals can be pending either: for the whole process
(i.e., deliverable to any member of the thread group), when
sent with kill(2); or for an individual thread, when sent with
tgkill(2). A call to sigpending(2) returns a signal set that
is the union of the signals pending for the whole process and
the signals that are pending for the calling thread.
If kill(2) is used to send a signal to a thread group, and the
thread group has installed a handler for the signal, then the
handler will be invoked in exactly one, arbitrarily selected
member of the thread group that has not blocked the signal.
If multiple threads in a group are waiting to accept the same
signal using sigwaitinfo(2), the kernel will arbitrarily
select one of these threads to receive a signal sent using
kill(2).
Linux is not SCO Unix, because Linux might give the signal to any thread, even if some threads are waiting for a signal (with sigwaitinfo, sigtimedwait, or sigwait) and some threads are not. The manual for sigwaitinfo(2) warns,
In normal usage, the calling program blocks the signals in set via a
prior call to sigprocmask(2) (so that the default disposition for
these signals does not occur if they become pending between
successive calls to sigwaitinfo() or sigtimedwait()) and does not
establish handlers for these signals. In a multithreaded program,
the signal should be blocked in all threads, in order to prevent the
signal being treated according to its default disposition in a thread
other than the one calling sigwaitinfo() or sigtimedwait().
The code to pick a thread for the signal lives in linux/kernel/signal.c (the link points to GitHub's mirror). See the functions wants_signal() and complete_signal(). The code picks the first available thread for the signal. An available thread is one that doesn't block the signal and has no other signals in its queue. The code happens to check the main thread first, then it checks the other threads in some order unknown to me. If no thread is available, then the signal stays pending until some thread unblocks the signal or empties its queue.
What happens when a thread gets the signal? If there is a signal handler, then the kernel causes the thread to call the handler. Most handlers run on the thread's stack. A handler can run on an alternate stack if the process uses sigaltstack(2) to provide the stack, and sigaction(2) with SA_ONSTACK to set the handler. The kernel pushes some things onto the chosen stack, and sets some of the thread's registers.
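For example, a sketch of the alternate-stack setup with sigaltstack(2) and SA_ONSTACK (error checking omitted), e.g. to survive a stack overflow long enough to report it:

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        /* write(2) and _exit(2) are async-signal-safe */
        write(STDERR_FILENO, "stack overflow?\n", 16);
        _exit(1);
    }

    int main(void)
    {
        stack_t ss;
        ss.ss_sp = malloc(SIGSTKSZ);   /* the alternate stack itself */
        ss.ss_size = SIGSTKSZ;
        ss.ss_flags = 0;
        sigaltstack(&ss, NULL);

        struct sigaction sa = { 0 };
        sa.sa_handler = on_segv;
        sa.sa_flags = SA_ONSTACK;      /* deliver on the alternate stack */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        /* ... the rest of the program ... */
        return 0;
    }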
To run the handler, the thread must be running in userspace. If the thread is running in the kernel (perhaps for a system call or a page fault), then it does not run the handler until it goes to userspace. The kernel can interrupt some system calls, so the thread runs the handler now, without waiting for the system call to finish.
The signal handler is a C function, so the kernel obeys the architecture's convention for calling C functions. Each architecture, like arm, i386, powerpc, or sparc, has its own convention. For powerpc, to call handler(signum), the kernel sets the register r3 to signum. The kernel also sets the handler's return address to the signal trampoline. The return address goes on the stack or in a register by convention.
The kernel puts one signal trampoline in each process. This trampoline calls sigreturn(2) to restore the thread. In the kernel, sigreturn(2) reads some information (like saved registers) from the stack. The kernel had pushed this information on the stack before calling the handler. If there was an interrupted system call, the kernel might restart the call (only if the handler used SA_RESTART), or fail the call with EINTR, or return a short read or write.
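A small sketch of the EINTR case: without SA_RESTART, a blocking read(2) fails with EINTR when the handler runs (the 2-second alarm is arbitrary):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void handler(int sig) { (void)sig; /* just interrupt the call */ }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = handler;
        sa.sa_flags = 0;            /* no SA_RESTART: syscalls fail with EINTR */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        alarm(2);                   /* interrupt the read below */
        char buf[64];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n < 0 && errno == EINTR)
            printf("read was interrupted by the signal handler\n");
        return 0;
    }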
The manual says that setitimer timers are shared by the whole PROCESS and that SIGPROF is sent to the PROCESS, not to a particular thread.
But when I create the timer in my multithreaded PROCESS, I get some very serious errors in the signal handler unless I create an independent stack for every thread in the PROCESS to handle the signal. Through some debugging, I have confirmed that the (single, shared) stack must have been re-entered.
So now I suspect that SIGPROF may be sent to multiple threads at the same time? Thanks!
I don't follow the details of your question but the general case is:
A signal may be generated (and thus pending) for a process as a whole (e.g., when sent using kill(2)) or for a specific thread (e.g., certain signals, such as SIGSEGV and SIGFPE, generated as a consequence of executing a specific machine-language instruction are thread directed, as are signals targeted at a specific thread using pthread_kill(3)). A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
man (7) signal
You can block the signal for specific threads with pthread_sigmask and by elimination direct it to the thread you want to handle it.
According to POSIX, the alternate signal stack established with sigaltstack is per-thread, and is not inherited by new threads. However, I believe some versions of Linux and/or userspace pthread library code (at least old kernels with LinuxThreads, and maybe some versions with NPTL too?) have a bug where the alternate stack is inherited, and of course that will lead to crashes whenever you use the alternate stack. Is there a reason you need alternate stacks? Normally the only purpose is to handle stack overflows semi-gracefully (allowing yourself some stack space to catch SIGSEGV and save any unsaved data before exiting). I would just disable it.
Alternatively, use pthread_sigmask to block SIGPROF in all threads but the main one. Note that, to avoid a nasty race condition here, you need to block it in the main thread before calling pthread_create so that the new thread starts with it blocked, and unblock it after pthread_create returns.
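A sketch of that ordering (the worker body is a placeholder; compile with -pthread):

    #include <pthread.h>
    #include <signal.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* SIGPROF is blocked here: the mask was inherited at creation */
        return NULL;
    }

    int main(void)
    {
        sigset_t set, old;
        sigemptyset(&set);
        sigaddset(&set, SIGPROF);

        /* block before pthread_create so the new thread inherits the mask */
        pthread_sigmask(SIG_BLOCK, &set, &old);
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        /* unblock in the main thread only, after pthread_create returns */
        pthread_sigmask(SIG_SETMASK, &old, NULL);

        pthread_join(tid, NULL);
        return 0;
    }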
Without keeping a list of current threads, I'm trying to ensure that a realtime signal gets delivered to all threads in my process. My idea is to go about it like this:
Initially the signal handler is installed and the signal is unblocked in all threads.
When one thread wants to send the 'broadcast' signal, it acquires a mutex and sets a global flag that the broadcast is taking place.
The sender blocks the signal (using pthread_sigmask) for itself, and enters a loop repeatedly calling raise(sig) until sigpending indicates that the signal is pending (there were no threads remaining with the signal blocked).
As threads receive the signal, they act on it but wait in the signal handler for the broadcast flag to be cleared, so that the signal will remain masked.
The sender finishes the loop by unblocking the signal (in order to get its own delivery).
When the sender handles its own signal, it clears the global flag so that all the other threads can continue with their business.
The problem I'm running into is that pthread_sigmask is not being respected. Everything works right if I run the test program under strace (presumably due to different scheduling timing), but as soon as I run it alone, the sender receives its own signal (despite having blocked it..?) and none of the other threads ever get scheduled.
Any ideas what might be wrong? I've tried using sigqueue instead of raise, probing the signal mask, adding sleep all over the place to make sure the threads are patiently waiting for their signals, etc. and now I'm at a loss.
Edit: Thanks to psmears' answer, I think I understand the problem. Here's a potential solution. Feedback would be great:
At any given time, I can know the number of threads running, and I can prevent all thread creation and exiting during the broadcast signal if I need to.
The thread that wants to do the broadcast signal acquires a lock (so no other thread can do it at the same time), then blocks the signal for itself, and sends num_threads signals to the process, then unblocks the signal for itself.
The signal handler atomically increments a counter, and each instance of the signal handler waits until that counter is equal to num_threads to return.
The thread that did the broadcast also waits for the counter to reach num_threads, then it releases the lock.
One possible concern is that the signals will not get queued if the kernel is out of memory (Linux seems to have that issue). Do you know if sigqueue reliably informs the caller when it's unable to queue the signal (in which case I would loop until it succeeds), or could signals possibly be silently lost?
Edit 2: It seems to be working now. According to the documentation for sigqueue, it returns EAGAIN if it fails to queue the signal. But for robustness, I decided to just keep calling sigqueue until num_threads-1 signal handlers are running, interleaving calls to sched_yield after I've sent num_threads-1 signals.
There was a race condition at thread creation time, counting new threads, but I solved it with a strange (ab)use of read-write locks. Thread creation is "reading" and the broadcast signal is "writing", so unless there's a thread trying to broadcast, it doesn't create any contention at thread-creation.
raise() sends the signal to the current thread (only), so other threads won't receive it. I suspect that the fact that strace makes things work is a bug in strace (due to the way it works it ends up intercepting all signals sent to the process and re-raising them, so it may be re-raising them in the wrong way...).
You can probably get round that using kill(getpid(), <signal>) to send the signal to the current process as a whole.
However, another potential issue you might see is that sigpending() can indicate that the signal is pending on the process before all threads have received it - all that means is that there is at least one such signal pending for the process, and no CPU has yet become available to run a thread to deliver it...
Can you describe more details of what you're aiming to achieve? And how portable you want it to be? There's almost certainly a better way of doing it (signals are almost always a major headache, especially when mixed with threads...)
In a multithreaded program, raise(sig) is equivalent to pthread_kill(pthread_self(), sig).
Try kill(getpid(), sig) instead.
Given that you can apparently lock thread creation and destruction, could you not just have the "broadcasting" thread post the required updates to thread-local state in a per-thread queue, which each thread checks whenever it goes to use that state? If there are outstanding updates, it applies them first.
You are trying to synchronize a set of threads.
From a design-pattern point of view, the native pthreads solution to your problem would be a pthread barrier.
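For illustration, a minimal sketch of a barrier rendezvous (the thread count and bodies are arbitrary; compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    static pthread_barrier_t barrier;

    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("thread %ld: before barrier\n", id);
        pthread_barrier_wait(&barrier);   /* all threads rendezvous here */
        printf("thread %ld: after barrier\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }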