Thread safety in GnuTLS - C

The version of GnuTLS is 3.5. I want to use a child thread to perform the handshake with the remote peer.
In my child thread, I just call gnutls_handshake().
In the parent thread, can I use pthread_cancel() to cancel the child thread safely, regardless of the current handshake state?
If I have registered my own pull/pull_timeout/push functions and use pthread_cleanup_push()/pthread_cleanup_pop(),
can I cancel the child thread?

I have emailed Nikos Mavrogiannopoulos (the current maintainer of GnuTLS); his answer follows:
gnutls functions were never designed as pthread cancellation points. I
have not thought that much, but I believe your main concern is memory
leaks right? It may be that if you deallocate the session in another
thread it may just work; though you may have to create a stress test
for that to verify that this is possible.
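
For what it's worth, here is a minimal sketch of the idea from the question, assuming the session (and its transport / pull / push callbacks) has been set up elsewhere; handshake_thread and cleanup_session are made-up names. The cleanup handler deinitialises the session if the thread is cancelled while blocked inside the handshake's I/O callbacks, which is roughly the deallocation Nikos suggests and would still need the stress testing he mentions:

#include <gnutls/gnutls.h>
#include <pthread.h>

/* Hypothetical cleanup handler: runs if the thread is cancelled while
 * blocked in the pull/push callbacks used by gnutls_handshake(). */
static void cleanup_session(void *arg)
{
    gnutls_session_t session = arg;
    gnutls_deinit(session);   /* free the session so nothing leaks */
}

/* Hypothetical thread entry point; the session is assumed to be
 * initialised and its transport configured by the caller. */
static void *handshake_thread(void *arg)
{
    gnutls_session_t session = arg;
    int ret;

    pthread_cleanup_push(cleanup_session, session);
    do {
        ret = gnutls_handshake(session);
    } while (ret < 0 && gnutls_error_is_fatal(ret) == 0);
    pthread_cleanup_pop(0);   /* 0: don't run the handler on normal exit */

    return NULL;
}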

Related

Creating a pthreads thread pool to handle GET requests

I find it hard to believe there isn't an answer or tutorial for this, but am struggling to find one anywhere!
I have to build (and have built) a multithreaded server to handle GET requests in C.
For full marks this needs to use a thread pool. Currently my main thread accepts connections and passes them on to a new thread.
I can find a few implementations of thread pools in C online, but coming from a Java background, understanding them is proving difficult. They also all seem to use a task queue.
This seems unnecessary considering you can tell the listen call to queue connections.
I saw somewhere that accept() is thread-safe (though I also hear that when POSIX says "safe" it's more of a "safe-ish").
Is this a sensible approach to take? Or will the overhead be higher with each thread waiting on accept() instead of stopping execution until passed a connection?
If that is the case, how in C would I go about doing this? I presume I would need to keep a thread-safe data structure storing pointers to each thread and a value indicating whether they are busy or not?
And have some method to restart the thread and pass it a connection? But I have no idea how to do this and can't find any simple tutorials on the internet.
Any advice or links to tutorials would be much appreciated!
Thanks
accept() is thread-safe.
Actually what you describe is an elegant way to implement a socket server using a thread pool - call accept() in all of them, and the operating system will take care of waking only one thread when a connection arrives. Good job, I have never really thought about this option when I had to implement such things.
As far as I see there's no real overhead in calling accept() in multiple threads at the same time - all threads will sleep until a connection can be accepted, so they won't effectively consume any CPU time.
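
To illustrate, a rough sketch of a pool where every worker blocks in accept() on the same listening socket; NUM_WORKERS, worker and handle_request are placeholder names, and the socket()/bind()/listen() setup is assumed to happen elsewhere:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

#define NUM_WORKERS 8   /* assumed pool size */

/* Hypothetical request handler; real GET parsing omitted. */
static void handle_request(int conn) { /* ... */ close(conn); }

/* Every worker blocks in accept() on the same listening socket;
 * the kernel wakes exactly one of them per incoming connection. */
static void *worker(void *arg)
{
    int listen_fd = *(int *)arg;
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0) {
            perror("accept");
            continue;
        }
        handle_request(conn);
    }
    return NULL;
}

/* Usage sketch, after listen_fd has been set up:
 *
 *   pthread_t pool[NUM_WORKERS];
 *   for (int i = 0; i < NUM_WORKERS; i++)
 *       pthread_create(&pool[i], NULL, worker, &listen_fd);
 */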

How to trigger spurious wake-up within a Linux application?

Some background:
I have an application that relies on third party hardware and a closed source driver. The driver currently has a bug in it that causes the device to stop responding after a random period of time. This is caused by an apparent deadlock within the driver and interrupts proper functioning of my application, which is in an always-on 24/7 highly visible environment.
What I have found is that attaching GDB to the process, and immediately detaching GDB from the process results in the device resuming functionality. This was my first indication that there was a thread locking issue within the driver itself. There is some kind of race condition that leads to a deadlock. Attaching GDB was obviously causing some reshuffling of threads and probably pushing them out of their wait state, causing them to re-evaluate their conditions and thus breaking the deadlock.
The question:
My question is simply this: is there a clean way for an application to trigger all threads within the program to interrupt their wait state? One thing that definitely works (at least on my implementation) is to send a SIGSTOP followed immediately by a SIGCONT from another process (i.e. from bash):
kill -19 `cat /var/run/mypidfile` ; kill -18 `cat /var/run/mypidfile`
This triggers a spurious wake-up within the process and everything comes back to life.
I'm hoping there is an intelligent method to trigger a spurious wake-up of all threads within my process. Think pthread_cond_broadcast(...) but without having access to the actual condition variable being waited on.
Is this possible, or is relying on a program like kill my only approach?
The way you're doing it right now is probably the most correct and simplest. There is no "wake all waiting futexes in a given process" operation in the kernel, which is what you would need to achieve this more directly.
Note that if the failure-to-wake "deadlock" is in pthread_cond_wait but interrupting it with a signal breaks out of the deadlock, the bug cannot be in the application; it must actually be in the implementation of pthread condition variables. glibc has known unfixed bugs in its condition variable implementation; see http://sourceware.org/bugzilla/show_bug.cgi?id=13165 and related bug reports. However, you might have found a new one, since I don't think the existing known ones can be fixed by breaking out of the futex wait with a signal. If you can report this bug to the glibc bug tracker, it would be very helpful.
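
If you prefer to drive the same SIGSTOP/SIGCONT trick from a small C helper instead of bash, a minimal sketch (same pidfile path as in the question, error handling kept to a minimum) might look like this:

#include <signal.h>
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/var/run/mypidfile", "r");
    int pid = 0;

    if (f == NULL || fscanf(f, "%d", &pid) != 1) {
        perror("pidfile");
        return 1;
    }
    fclose(f);

    kill((pid_t)pid, SIGSTOP);  /* stop the whole process (signal 19 on x86 Linux) */
    kill((pid_t)pid, SIGCONT);  /* resume it; waiting threads re-evaluate their waits */
    return 0;
}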

How to find whether a pthread has pending cancellation request

I want to find whether for a thread, pthread_cancel has been called or not.
I don't want to use some tables and maintain them myself. Is there any library function available for this? I don't want to cancel the thread using one of the cancellation-point functions, which cancel the thread if there is any pending cancellation request; I just want to know whether a cancellation request is pending or not.
Even if you call pthread_cancel(), the thread is not cancelled immediately; the request is only acted on when the thread reaches a cancellation point (or straight away, if asynchronous cancellation is enabled). But, just to mention here: using asynchronous (immediate) cancellation is not recommended. I faced an issue where my application was crashing during cancellation when using pthread_setcanceltype() with PTHREAD_CANCEL_ASYNCHRONOUS, and I really could not find a reason why.
As far as I know (and I have searched a lot for this while debugging that crash scenario), there is no method to confirm at what point the thread gets cancelled.
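
One workaround, sketched below under the assumption that finding out after the fact is good enough: there is no portable query for a pending cancellation request, but the cancelled thread itself can record that it was cancelled via a cleanup handler (the flag and function names here are made up):

#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

/* Hypothetical flag other threads can inspect *after* the cancellation
 * has taken effect; there is no standard "is a cancel pending?" query
 * while the target thread is still running. */
static atomic_int was_cancelled;

static void note_cancel(void *arg)
{
    (void)arg;
    atomic_store(&was_cancelled, 1);
}

static void *worker(void *arg)
{
    (void)arg;
    pthread_cleanup_push(note_cancel, NULL);
    for (int i = 0; i < 60; i++)
        sleep(1);               /* sleep() is a cancellation point */
    pthread_cleanup_pop(0);     /* popped without running on a normal exit */
    return NULL;
}

After pthread_join(), the joiner can also check whether the thread's return value equals PTHREAD_CANCELED, which confirms the thread was cancelled, just not at what point the request was issued.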

nptl SIGCONT and thread scheduling

I'm trying to port code that relies on SIGCONT to stop certain threads of an application. With the current Linux NPTL implementation it seems one can't rely on that on 2.6.x kernels. I'm trying to devise another method to stop those threads. Currently I can only think of mutexes and condition variables. Any hints are appreciated.
If you are relying on stopping and resuming other threads, then your application will eventually fail.
That is because you cannot guarantee that you're not going to stop a thread while it holds a mutex that protects a shared resource. This would result in deadlock, as any other threads (possibly including the one which stopped the first thread) that then need to wait for the mutex will wait forever.
I'm sure it is possible, but also, you're doing it wrong.
NB: such mutexes probably exist in parts of the C library, even if you have none in your own code. If you have none in your own code and it is nontrivial, I'd be surprised.
How are you sending the signals to the target thread? If you use pthread_kill() to send SIGSTOP / SIGCONT to a single thread, it should work.
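
As a sketch of the mutex/condition-variable alternative mentioned in the question (the names here are invented): worker threads call a checkpoint at points where it is safe to be suspended, so they can never be parked while holding an application lock:

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical pause/resume gate: workers call checkpoint() at points
 * where it is safe to be suspended (no application locks held). */
static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gate_cond = PTHREAD_COND_INITIALIZER;
static bool paused;

void checkpoint(void)                 /* called periodically by workers */
{
    pthread_mutex_lock(&gate_lock);
    while (paused)
        pthread_cond_wait(&gate_cond, &gate_lock);
    pthread_mutex_unlock(&gate_lock);
}

void pause_workers(void)
{
    pthread_mutex_lock(&gate_lock);
    paused = true;                    /* workers park at their next checkpoint */
    pthread_mutex_unlock(&gate_lock);
}

void resume_workers(void)
{
    pthread_mutex_lock(&gate_lock);
    paused = false;
    pthread_cond_broadcast(&gate_cond);
    pthread_mutex_unlock(&gate_lock);
}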

Shared POSIX objects cleanup on process end / death

Is there any way to clean up POSIX shared synchronization objects, especially on a process crash? Unblocking locked POSIX semaphores is the most desired thing, but automatically "collected" message queues / shared memory regions would be nice too. Another thing to keep an eye on: in general we can't use signal handlers, because SIGKILL cannot be caught.
I see only one alternative: an external daemon that accepts subscriptions and "keep-alive" requests and acts as a watchdog, so that when it stops receiving notifications about some object it can close / unlock that object according to a registered policy.
Does anyone have a better alternative / proposal? I have never worked seriously with POSIX shared objects before (sockets were enough for all my needs, and in my opinion are much more useful) and I did not find any applicable article. I'd gladly use sockets here, but can't for historical reasons.
Rather than using semaphores you could use file locking to coordinate your processes. The big advantage of file locks is that they are released if the process terminates. You can map each semaphore onto a lock for a byte in a shared file and know that the locks will get released on exit; in most versions of Unix the bytes you lock don't even have to exist. There is code for this in Marc Rochkind's book Advanced Unix Programming, 1st edition; I don't know if it's in the latest 2nd edition though.
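
A rough sketch of that byte-per-semaphore idea using fcntl() record locks, which the kernel drops automatically when the process exits or crashes (the file name and helper names are made up):

#include <fcntl.h>

/* Hypothetical helpers: each "semaphore" is one byte in a shared lock
 * file; fcntl() record locks are released automatically on process death. */
static int lock_byte(int fd, off_t which)
{
    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = which,    /* byte offset acting as the semaphore id */
        .l_len    = 1,
    };
    return fcntl(fd, F_SETLKW, &fl);   /* block until the byte is ours */
}

static int unlock_byte(int fd, off_t which)
{
    struct flock fl = {
        .l_type   = F_UNLCK,
        .l_whence = SEEK_SET,
        .l_start  = which,
        .l_len    = 1,
    };
    return fcntl(fd, F_SETLK, &fl);
}

/* Usage sketch:
 *   int fd = open("/var/lock/myapp.locks", O_RDWR | O_CREAT, 0666);
 *   lock_byte(fd, 3);   ... critical section ...   unlock_byte(fd, 3);
 */
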
I know this question is old, but another great solution is POSIX robust mutexes. They automatically unlock and are marked "inconsistent" when the owner dies, and the next thread that attempts to lock the mutex gets an EOWNERDEAD error but succeeds in becoming the new owner. It is then able to clean up whatever state the mutex was protecting (which could be in a very bad inconsistent state due to asynchronous termination of the previous owner!) and mark the mutex as consistent again before unlocking it.
See the documentation on robust mutexes here:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html
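
A minimal sketch of a process-shared robust mutex, assuming the mutex itself lives in shared memory mapped by all participating processes (the helper names and the repair step are placeholders):

#include <pthread.h>
#include <errno.h>

/* Assumed setup: 'm' lives in a shared memory segment mapped by every
 * participating process (e.g. via shm_open()/mmap()). */
void init_robust_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
}

int lock_robust_mutex(pthread_mutex_t *m)
{
    int rc = pthread_mutex_lock(m);
    if (rc == EOWNERDEAD) {
        /* The previous owner died while holding the lock: repair the
         * shared state it was protecting (application-specific), then
         * mark the mutex usable again. */
        pthread_mutex_consistent(m);
        rc = 0;
    }
    return rc;   /* 0 on success; we now own the mutex */
}
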
The usual way is to work with signal handlers. Just catch the signals and call the cleanup functions.
But your watchdog daemon has some merits, too. It would surely make the system simpler to understand and manage. To make it easier to administer, your application should start the daemon when it's not running, and the daemon should be able to clean up any residue from the last crash.
