Check if pthread is still alive in Linux C

I know similar questions have been asked, but I think my situation is a little bit different. I need to check whether a child thread is alive and, if it isn't, print an error message. The child thread is supposed to run all the time. So basically I just need a non-blocking pthread_join, and in my case there are no race conditions. The child thread can be killed, so I can't have it set some kind of shared variable when it completes, because in that case the variable would never be set.
Killing the child thread could be done like this:
kill -9 child_pid
EDIT: alright, this example is wrong, but I'm still sure there is some way to kill a specific thread.
EDIT: my motivation for this is to implement another layer of security in my application, which requires this check. This check can be bypassed, but that is another story.
EDIT: let's say my application is intended as a demo for reverse-engineering students, and their task is to hack it. I placed some anti-hacking/anti-debugging obstacles in the child thread, and I wanted to be sure that this child thread stays alive. As mentioned in some comments, it's probably not that easy to kill the child thread without breaking the parent, so maybe this check is not necessary. Security checks are present in the main thread as well, but this time I needed to put them in another thread to keep the main thread responsive.

Killed by what, and why can't that thing indicate that the thread is dead? Even then, this sounds fishy.
it's almost universally a design error if you need to check if a thread/process is alive - the logic in the code should implicitly handle this.
From your edit it seems you want to do something about the possibility of a thread getting killed by something completely external.
Well, good news: there is no way to do that without bringing the whole process down. Every form of involuntary thread death kills all threads in the process, apart from cancellation, but that can only be triggered by something else in the same process.

The kill(1) command does not send a signal to some particular thread, but to an entire process. Read signal(7) and pthreads(7) carefully.
Signals and threads don't mix well together. As a rule of thumb, you don't want to use both.
BTW, using kill -KILL or kill -9 is a mistake: the receiving process doesn't get the opportunity to handle the SIGKILL signal. You should use SIGTERM ...
If you want to handle SIGTERM in a multi-threaded application, read signal-safety(7) and consider setting up some pipe(7) to self (and use poll(2) in some event loop) which the signal handler would write(2) to. That well-known trick is well explained in the Qt documentation. You could also consider the signalfd(2) Linux-specific syscall.
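A minimal sketch of that self-pipe trick, assuming a SIGTERM handler and an event loop built around poll(2) (self_pipe and on_sigterm are illustrative names, not from the answer):

/* Self-pipe trick sketch: the handler only write(2)s a byte, the loop poll(2)s. */
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static int self_pipe[2];          /* [0] = read end, [1] = write end */

static void on_sigterm(int signo)
{
    char b = (char)signo;
    write(self_pipe[1], &b, 1);   /* async-signal-safe, see signal-safety(7) */
}

int main(void)
{
    pipe(self_pipe);
    signal(SIGTERM, on_sigterm);

    struct pollfd pfd = { .fd = self_pipe[0], .events = POLLIN };
    for (;;) {
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            char b;
            read(self_pipe[0], &b, 1);
            printf("got signal %d, cleaning up\n", b);
            break;                /* leave the event loop and clean up */
        }
    }
    return 0;
}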
If you think of using pthread_kill(3), you probably should not in your case (however, using it with a 0 signal is a valid but crude way to check that the thread exists). Read some Pthread tutorial. Don't forget to pthread_join(3) or pthread_detach(3).
Child thread is supposed to run all the time.
This is the wrong approach. You should know when and how a child thread terminates, because you are coding the function passed to pthread_create(3): you should handle all error cases there and add the relevant cleanup code (and perhaps synchronization). So the child thread should run as long as you want it to run, and it should do the appropriate cleanup actions when it ends.
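For illustration, a hedged sketch of a thread function that owns its error handling and cleanup (worker_main, keep_running and cleanup are my names; the pthread_cleanup_push handler also runs if the thread gets cancelled):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static volatile int keep_running = 1;   /* illustrative stop flag */

static void cleanup(void *arg)
{
    free(arg);                          /* release resources even on cancellation */
}

static void *worker_main(void *unused)
{
    (void)unused;
    char *buf = malloc(4096);
    if (buf == NULL)
        return NULL;                    /* handle the error case explicitly */

    pthread_cleanup_push(cleanup, buf);
    while (keep_running) {
        /* ... do the periodic work here ... */
        pthread_testcancel();           /* an explicit cancellation point */
    }
    pthread_cleanup_pop(1);             /* run cleanup on normal exit too */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker_main, NULL) != 0)
        return 1;
    /* ... the main thread does its own work here ... */
    keep_running = 0;                   /* ask the worker to finish */
    pthread_join(tid, NULL);            /* never forget to join or detach */
    return 0;
}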
Consider also some other inter-process communication mechanism (like socket(7), fifo(7), ...); they are generally more suitable than signals, notably for multi-threaded applications. For example, you might design your application as some specialized web or HTTP server (using libonion or some other HTTP server library). You would then use your web browser, some HTTP client command (like curl), or an HTTP client library like libcurl to drive your multi-threaded application. Or add some RPC ability to your application, perhaps using JSONRPC.
(Your putative usage of signals smells very bad and is likely to be an XY problem; strongly consider using something better.)
my motivation for this is to implement another layer of security in my application
I don't understand that at all. How can signals and threads add security? My guess is that you are decreasing the security of your software.
I wanted to be sure that this child thread is kept alive.
You can't be sure, other than by coding well and avoiding bugs (but be aware of Rice's theorem and the halting problem: there cannot be any reliable and sound static source-code analysis that checks this). If something else (e.g. some other thread, or even bad code in your own one) is arbitrarily modifying the call stack of your thread, you've got undefined behavior and you can just be very scared.
In practice, tools like the gdb debugger, the address and thread sanitizers, other compiler instrumentation options, and valgrind can help to find most such bugs, but there is no silver bullet.
Maybe you want to take advantage of process isolation, but then you should give up your multi-threading approach and consider a multi-processing approach instead. By definition, threads share a lot of resources (notably their virtual address space) with the other threads of the same process, so the security checks mentioned in your question don't make much sense. I guess they just add more code and decrease security (since you'll have more bugs).
Reading a textbook like Operating Systems: Three Easy Pieces should be worthwhile.

You can use pthread_kill() to check if a thread exists.
SYNOPSIS
#include <signal.h>
int pthread_kill(pthread_t thread, int sig);
DESCRIPTION
The pthread_kill() function shall request that a signal be delivered
to the specified thread.
As in kill(), if sig is zero, error checking shall be performed
but no signal shall actually be sent.
Something like
int rc = pthread_kill( thread_id, 0 );
if ( rc != 0 )
{
    // thread no longer exists...
}
It's not very useful, though, as others have stated elsewhere, and it's really weak as any kind of security measure. Anything with the permissions to kill a thread will be able to stop it from running without killing it, or make it run arbitrary code so that it doesn't do what you want.

Related

Preventing thread termination from affecting parent process in C?

My goal is to create an event handling infrastructure that will allow for registration of callback functions and calls to such functions based on time. Further, I plan to make the callback handler multithreaded as there are no restrictions on the type of callbacks, so a sequential architecture could cause unwanted blocking.
From my research I found that if a thread experiences undefined behavior and is terminated (e.g. with SIGSEGV), then the entire process exits - which is obviously undesirable.
The question, then, is what options are there for ensuring thread independence? I do not think forking is a viable option in this case since the callbacks are not fully fledged programs, but rather simple routines to do various time-based tasks.
Correct me if I'm wrong... If you want time-based tasks, I highly recommend trying semaphores to control the thread.
Block the thread function like:
while (1) {
    sem_wait(&my_semaphore);   /* block until the boss posts */
    /* code that needs to be done in the thread */
}
When you need the work in the thread to be done, just post the semaphore from your code, whenever and however many times you want:
sem_post(&my_semaphore);
...
other_code;
sem_post(&my_semaphore);
...
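Putting those pieces together, a minimal self-contained sketch (my_semaphore, worker and do_task are illustrative names; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t my_semaphore;

static void do_task(void)
{
    printf("doing one unit of work\n");
}

static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        sem_wait(&my_semaphore);   /* sleep until the boss posts */
        do_task();
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    sem_init(&my_semaphore, 0, 0); /* shared between threads, initially 0 */
    pthread_create(&tid, NULL, worker, NULL);

    sem_post(&my_semaphore);       /* hand off one unit of work */
    sem_post(&my_semaphore);       /* ... and another one */

    sleep(1);                      /* give the worker time, then let the process exit */
    return 0;
}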

Where does the forked process start from if a call of fork in a thread occurs?

I'm going to write a program in which the main thread creates a new thread, and then the new thread creates a child process. Since I have a hard time keeping track of the new thread and the forked process, I'd like to get a clear answer from someone.
My questions are:
1. Does a process created in a thread start executing the code right after pthread_create?
2. If not, where does the forked process start from when fork is called in a thread?
Thank you for reading my question.
Some of this is a bit OS-dependent, as different systems have different POSIX thread implementations and this can expose internals.
POSIX offers pthread_atfork as a somewhat blunt instrument for dealing with some of the issues, but it still looks pretty messy to me.
If your system uses a one-to-one map between "user land thread" and "kernel thread" using clone or rfork to achieve proper user-space sharing of data between threads, then fork will merely duplicate the (single) thread that calls it. However, if your system has a many-to-many style mapping (so that one user process is handling multiple threads, at least before they enter into blocking syscalls), fork may internally duplicate multiple threads. POSIX says it should look like it only duplicated one thread, so that's not supposed to be visible, but I'm not sure how well all systems implement this.
There's some general advice at http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them (Linux-centric, obviously, but still useful).
Is there some particular reason you want to fork inside a thread but not exec? In general, if you just want to run more code in parallel, you just spin off yet another thread (i.e., once you choose to run any threads, you do everything in threads, except if you have to fork for exec; if the exec fails, just _exit).
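As a hedged illustration of that advice, a sketch of a thread that forks only to exec, and calls _exit if the exec fails (spawner is my name; it runs ls purely as an example):

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void *spawner(void *unused)
{
    (void)unused;
    pid_t pid = fork();            /* the child contains only this one thread */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                /* exec failed: do NOT run more code here */
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, spawner, NULL);
    pthread_join(tid, NULL);
    return 0;
}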

Recreate dead threads after a fork

As you might know, all threads in the application die in the forked process, other than the thread doing the fork. However, I plan to resurrect those threads in the forked process by calling pthread_create and using pthread_attr_setstack, so as to assign the newly created threads the same stacks as the dead threads. Something like the following:
pthread_attr_t attr;
pthread_attr_init(&attr);
// stackAddr and stacksize taken from the dead thread
pthread_attr_setstack(&attr, stackAddr, stacksize);
rc = pthread_create(&thread, &attr, threadRoutine, NULL);
However, I would still need to get the CPU register values, such as stack pointer, base pointer, instruction pointer etc, to restart threads from the same point. How can I do that? And what else do I need to do to successfully achieve my goal?
Also note that I'm using a 64-bit architecture. What additional difficulties would it have as compared to 32-bit one?
I see two possible ways to shoot yourself in the foot and lose hair^W^W^W^W^W^W^W^Wtry to do this:
Try to force each thread into calling getcontext() before the fork(), and then restore the context of each thread via setcontext(). Probably won't work, but you can try for fun.
Save ptrace(PTRACE_GETREGS), ptrace(PTRACE_GETFPREGS), and restore with ptrace(PTRACE_SETREGS), ptrace(PTRACE_SETFPREGS).
The other threads in the current process aren't killed by a fork -- they're still there and running in the parent. The problem you seem to have is that fork only forks a SINGLE thread in the current process, creating a new process running one thread with a copy of all non-thread resources of the parent.
What you apparently want is a way of duplicating an entire multithreaded task, forking all the threads in it and creating a new process/task with the same number of threads.
In order to do THAT, you would need to find and pause all the other threads in the process, dump their current state (including all locks they hold), fork a new process, and then (re)create each of those other threads in the child, rewiring the lock state to refer to the new child threads where needed.
Unfortunately, the POSIX pthread interface is hopelessly underspecified, and provides no way of doing that. In particular, it lacks any sort of reflective interface allowing you to figure out what threads are actually running.
If you want to try to do this anyway, I can see two ways of trying to approach this:
poke around in /proc/self/task to figure out what threads are running in your process, effectively getting that reflective interface in a highly non-portable way. You'll likely end up having to ptrace(2) the other threads to get their internal state. This will be very difficult.
wrap the pthreads library -- instead of using the library directly, intercept every call and keep track of all the threads/mutexes/locks that get created, so that you have that information available when you want to fork. This will work fine as long as you don't use any third-party libraries that use pthreads.
The second option is much easier (and somewhat portable), but it only works well if you have access to all the source code of your entire application and can modify it to use your wrappers properly.
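One possible (Linux-specific, illustrative) way to do such wrapping without touching application source is an LD_PRELOAD shim; this sketch only counts pthread_create calls and leaves the real bookkeeping out (thread_count and count_lock are my names). Build it as a shared object, e.g. gcc -shared -fPIC -o pthread_shim.so pthread_shim.c -ldl, and run the program with LD_PRELOAD=./pthread_shim.so:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>
#include <stdio.h>

typedef int (*create_fn_t)(pthread_t *, const pthread_attr_t *,
                           void *(*)(void *), void *);

static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static int thread_count;

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg)
{
    /* look up the real pthread_create the first time through */
    static create_fn_t real_create;
    if (real_create == NULL)
        real_create = (create_fn_t)dlsym(RTLD_NEXT, "pthread_create");

    pthread_mutex_lock(&count_lock);
    thread_count++;                 /* bookkeeping that a later "fork all" would need */
    fprintf(stderr, "pthread_create call #%d\n", thread_count);
    pthread_mutex_unlock(&count_lock);

    return real_create(thread, attr, start_routine, arg);
}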
Just googling around, I found that Solaris has a forkall() call that does exactly what you want; see the documentation here:
http://download.oracle.com/docs/cd/E19963-01/html/821-1601/gen-1.html
I assume you're running on Linux, but it is possible to run Solaris on x86 hardware, so maybe that is an option for you.

Implementing blocking between pthreads without condition variables

I'm implementing a boss/worker design pattern using pthreads on Linux. I want to have a boss thread that constantly checks for work, and if there is work, then wakes up a sleeping worker to do the work. My question is: what type of IPC synchronization/mechanism should I use to achieve the least latency between my boss thread handing off to my worker, and my worker waking up?
The easy solution is to use pthread condition variables and call pthread_cond_signal in the boss thread and pthread_cond_wait in each of the worker threads, but I'm wondering:
is there something faster that I can use to implement the blocking and signaling? For example, how would using pipes between the boss and worker threads fare?
how can I measure the performance of one type of IPC versus another? For example, I see benchmarks for pipe()s and fork()s, but nothing on using pipe()s for inter-thread communication.
Let me know if I can clarify anything in my questions!
EDIT
As an example of how I would use pipe()s to implement blocking between my worker and boss threads: the worker thread would read() from a pipe, and since the pipe is empty it would block on that read call until the boss calls write() on it.
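A minimal sketch of that pipe-based handoff, assuming one boss (the main thread) and one worker (work_pipe and worker are illustrative names; compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int work_pipe[2];                /* [0] read end (worker), [1] write end (boss) */

static void *worker(void *unused)
{
    (void)unused;
    char token;
    while (read(work_pipe[0], &token, 1) == 1) {   /* blocks while the pipe is empty */
        printf("worker: got work item '%c'\n", token);
    }
    return NULL;                        /* read returned 0: boss closed the write end */
}

int main(void)
{
    pthread_t tid;
    pipe(work_pipe);
    pthread_create(&tid, NULL, worker, NULL);

    write(work_pipe[1], "a", 1);        /* boss: hand off one work item */
    write(work_pipe[1], "b", 1);

    close(work_pipe[1]);                /* EOF makes the worker's read return 0 */
    pthread_join(tid, NULL);
    return 0;
}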
The glibc implementation of pthreads uses the low-level "futex" locks to implement pthread_cond_wait() / pthread_cond_signal(). Futexes were designed to be a fast synchronisation primitive, so these are likely to outperform pipes or similar methods (at the very least, using pipes requires copying a byte to and from kernel space that isn't needed for futexes).
If pthread_cond_wait() / pthread_cond_signal() map well onto your problem (and it sounds like they do), then the only way to outperform them is likely to be to implement something on futexes yourself (for example, you could eliminate the handling of thread cancellation if you do not use that).
It is probably worthwhile benchmarking your application - unless your work units are very small indeed, then the condition variable wakeup latency is unlikely to dominate.
What you should do first is make sure you need something faster. Since pthread signaling is implemented using futexes, where futex stands for fast userspace mutex, I don't think you can outperform them.
If you have waiting threads, by definition you will have to wake them up, and this round trip through the kernel will be the source of your unwanted latency.
But what you should really do is think about your problem:
if you constantly have work to do, then your worker thread is always busy. Work will be done when previous work is finished, and you don't care about the latency.
If what matters is the latency between the boss detecting an event and the worker starting to work, then why do you use a boss -> worker pattern ?
My advice would be to look for something faster only when you really need it; at that point you will probably have a much more detailed question to ask. Maybe I am wrong, but it looks like you are trying to optimize prematurely, which as you perhaps know is the root of all evil. Of course, bad design can lead to massive rework, but here you are dealing with a very small detail of your real design decision, which is using a boss/worker pattern.
Implement your design with pthread_cond_signal, or perhaps sem_post() / sem_wait(), and then look at where your latency really is and whether it is really a problem.
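For reference, a hedged sketch of the condition-variable version of that handoff (pending, lock, cond and worker are illustrative names; the boss side is shown inline in main, compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int pending;                     /* number of queued work items */

static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (pending == 0)            /* guard against spurious wakeups */
            pthread_cond_wait(&cond, &lock);
        pending--;
        pthread_mutex_unlock(&lock);
        printf("worker: doing one item\n");
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    pthread_mutex_lock(&lock);          /* boss: publish one item and signal */
    pending++;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);

    sleep(1);                           /* sketch only: let the worker run, then exit */
    return 0;
}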
I would guess signal-and-wait would be the best. Most OSes recognize threads and can leave them idle until the wakeup comes. With pipes, the worker would have to keep waking up and checking the pipe for output. The best way I've found to test efficiency has usually been to use the Unix time command to get the running time from start to finish (assuming the program isn't meant to keep running in the background), set up a script to do it a few times, and compare.

detect program termination (C, Windows)

I have a program that has to perform certain tasks before it finishes. The problem is that sometimes the program crashes with an exception (like the database cannot be reached, etc.).
Now, is there any way to detect an abnormal termination and execute some code before it dies?
Thanks.
code is appreciated.
1. Win32
The Win32 API contains a way to do this via the SetUnhandledExceptionFilter function, as follows:
#include <windows.h>
#include <stdio.h>

LONG WINAPI myFunc(LPEXCEPTION_POINTERS p)
{
    printf("Exception!!!\n");
    return EXCEPTION_EXECUTE_HANDLER;
}

int main(void)
{
    SetUnhandledExceptionFilter(myFunc);
    // generate an exception!
    volatile int x = 0;
    int y = 1 / x;
    return y;
}
2. POSIX/Linux
I usually do this via the signal() function and then handle the SIGSEGV signal appropriately. You can also handle SIGTERM and SIGINT, but not SIGKILL (by design). You can use backtrace(3), or run the program under a debugger or strace, to see what led up to the signal.
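A hedged sketch of such a handler using sigaction (on_fatal is my name; only async-signal-safe calls like write(2) belong inside it):

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void on_fatal(int signo)
{
    (void)signo;
    const char msg[] = "fatal signal, cleaning up\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);    /* write(2) is signal-safe */
    /* ... flush/close whatever must be saved, using only safe calls ... */
    _exit(EXIT_FAILURE);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_fatal;
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);
    sigaction(SIGINT,  &sa, NULL);

    volatile int *p = NULL;
    *p = 42;                                      /* force a SIGSEGV for the demo */
    return 0;
}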
There are sysinternals forum threads about protecting against end-process attempts by hooking NT Internals, but what you really want is either a watchdog or peer process (reasonable approach) or some method of intercepting catastrophic events (pretty dicey).
Edit: There are reasons why they make this difficult, but it's possible to intercept or block attempts to kill your process. I know you're just trying to clean up before exiting, but as soon as someone releases a process that can't be immediately killed, someone will ask for a method to kill it immediately, and so on. Anyhow, to go down this road, see the thread linked above and search for some of the keywords you find there, e.g. hook OR filter NtTerminateProcess. We're talking about kernel code, device drivers, anti-virus, security, malware, rootkit stuff here. Some books to help in this area are Windows NT/2000 Native API, Undocumented Windows 2000 Secrets: A Programmer's Cookbook, Rootkits: Subverting the Windows Kernel, and, of course, Windows Internals: Fifth Edition. This stuff is not too tough to code, but pretty touchy to get just right, and you may be introducing unexpected side effects.
Perhaps Application Recovery and Restart Functions could be of use? Supported by Vista and Server 2008 and above.
ApplicationRecoveryCallback: an application-defined callback function used to save data and application state information in the event the application encounters an unhandled exception or becomes unresponsive.
On using SetUnhandledExceptionFilter, an MSDN Social discussion advises that to make this work reliably, patching that method in memory is the only way to be sure your filter gets called, and suggests wrapping with __try/__except instead. Regardless, there is some sample code and a discussion of filtering calls to SetUnhandledExceptionFilter in the article "SetUnhandledExceptionFilter" and VC8.
Also, see Windows SEH Revisited at The Awesome Factor for some sample code of AddVectoredExceptionHandler.
It depends on what you do with your "exceptions". If you handle them properly and exit from the program, you can register a function to be called on exit using atexit().
It won't work in case of real abnormal termination, like segfault.
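A minimal atexit() sketch (cleanup is my name); as said above, it only runs on normal termination:

#include <stdio.h>
#include <stdlib.h>

static void cleanup(void)
{
    puts("performing final tasks");
}

int main(void)
{
    atexit(cleanup);     /* register the handler early */
    puts("doing work");
    return 0;            /* returning from main runs the atexit handlers */
}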
Don't know about Windows, but on a POSIX-compliant OS you can install a signal handler that will catch different signals and do something about them. Of course you cannot catch SIGKILL and SIGSTOP.
The signal API has been part of ANSI C since C89, so Windows probably supports it. See signal() for details.
If it's Windows-only, then you can use SEH (SetUnhandledExceptionFilter), or VEH (AddVectoredExceptionHandler, but it's only for XP/2003 and up)
Sorry, not a Windows programmer. But maybe
_onexit()
Registers a function to be called when the program terminates.
http://msdn.microsoft.com/en-us/library/aa298513%28VS.60%29.aspx
First, though this is fairly obvious: you can never have a completely robust solution -- someone can always just pull the power cable to terminate your process. So you need a compromise, and you need to carefully lay out the details of that compromise.
One of the more robust solutions is putting the relevant code in a wrapper program. The wrapper program invokes your "real" program, waits for its process to terminate, and then -- unless your "real" program specifically signals that it has completed normally -- runs the cleanup code. This is fairly common for things like test harnesses, where the test program is likely to crash or abort or otherwise die in unexpected ways.
That still leaves the difficulty of what happens if someone does a TerminateProcess on your wrapper process, if that's something you need to worry about. If necessary, you could get around that by setting it up as a Windows service and using the operating system's features to restart it if it dies. (This just changes things a little; someone could still just stop the service.) At that point, you probably need to signal successful completion by something persistent, like creating a file.
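A hedged Win32 sketch of that wrapper idea using CreateProcess and WaitForSingleObject ("real_program.exe", the exit-code convention and run_cleanup are placeholders of mine, not from the answer):

#include <windows.h>
#include <stdio.h>

static void run_cleanup(void)
{
    printf("real program did not exit cleanly, running cleanup\n");
}

int main(void)
{
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    ZeroMemory(&si, sizeof si);
    si.cb = sizeof si;
    ZeroMemory(&pi, sizeof pi);

    char cmd[] = "real_program.exe";          /* must be a writable buffer for CreateProcessA */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);

    DWORD code = 1;
    GetExitCodeProcess(pi.hProcess, &code);
    if (code != 0)                            /* 0 == "completed normally" by convention here */
        run_cleanup();

    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}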
I published an article at ddj.com about "post mortem debugging" some years ago.
It includes sources for Windows and Unix/Linux to detect abnormal termination. In my experience, though, a Windows handler installed using SetUnhandledExceptionFilter is not always called. In many cases it is called, but I receive quite a few log files from customers that do not include a report from the installed handlers, even where e.g. an ACCESS_VIOLATION was the cause.
http://www.ddj.com/development-tools/185300443
