NIO and visibility of ByteBuffers across threads - nio

It seems that in implementing servers that rely on Java NIO, the following practice is standard:
Use a single thread (and a single selector) for reads:
    ByteBuffer buffer = ...;
    selector.select();
    // ...
    channel.read(buffer);
    if (isRequestComplete(buffer))
        processRequest(buffer);   // will run in a separate thread

    void processRequest(ByteBuffer buffer) {
        new Thread(new Handler(buffer)).start();
    }
Yes, I'm eliding a substantial amount of code.
My question is not about the mechanics of selecting/reading from a channel, but rather about the visibility semantics of the #buffer being read in a different thread from the selector thread. I can see nothing in the javadoc stating that reading from a ByteBuffer in Thread A ('Handler' above) is guaranteed to see the write to the ByteBuffer in Thread B ('Selector' above).
Now, it seems to me that the code above simply is not multi-thread-safe, i.e. it's wrong.
But I've seen the same pattern in numerous tutorials and even some codebases (always without even referencing the visibility concern); so I'm wondering if I'm missing something obvious.
Note: I'm specifically focussing on the situation where the 'Handler' Thread only ever reads from #buffer - it never writes new bytes into it

This actually seems OK. What you need is to show that the write in the selector thread "happens-before" the read in the handler thread. Under the Java memory model, actions in the same thread are ordered by program order, and a call to Thread.start() happens-before every action in the started thread. So the write happens-before the thread creation, which happens-before the read.
This would also be OK if you hand the buffer off to, e.g., an ExecutorService: actions taken before submitting a task happen-before the task's execution.

Related

Are we allowed to call pthread_key_create and pthread_key_delete from different threads? Should lmdb make this clear?

Details about the issue leading to the question
We're facing a SIGSEGV under extremely rare, hard-to-reproduce circumstances when using the lmdb database library. The best we've got out of it is a stack trace from the core dump that looks like this:
#0 in mdb_env_reader_dest (ptr=...) at .../mdb.c: 4935
#1 in __nptl_deallocate_tsd () at pthread_create.c:301
...
The line the stack trace points to is this (it's not exactly the same line, because we've attempted some changes to fix this).
Having tracked the issue for a while, the only explanation I have is that, when closing the lmdb environment from a thread other than the main thread, there's some kind of race between this line and this line, where the latter deletes the key and calls the custom destructor mdb_env_reader_dest, which, according to the lmdb documentation, causes a SIGSEGV when resources are used after they have been freed.
The question
The documentation of pthread_key_create and pthread_key_delete is ambiguous to me, personally, as to whether it is talking about the calls to pthread_key_create and pthread_key_delete or the data underneath the pointers. This is what the docs say:
The associated destructor functions used to free thread-specific data at thread exit time are only guaranteed to work correctly when called in the thread that allocated the thread-specific data.
So the question is, can we call mdb_env_open and mdb_env_close from two different threads, leading to pthread_key_create and pthread_key_delete from different threads, and expect correct behavior?
I couldn't find such a requirement in the lmdb documentation. The closest I could find is this, which doesn't seem to reference the mdb_env_open function.
Are we allowed to call pthread_key_create and pthread_key_delete from different threads?
Yes.
However, a key must be created via pthread_key_create before it can be used by any thread, and that is not, itself, inherently thread safe. The key creation is often synchronized by performing it before starting any (other) threads that will use the key, but there are other alternatives.
Similarly, a key must not be deleted before all threads are finished with it, and the deletion is not, itself, inherently thread safe. TSD keys often are not deleted at all, and when they are deleted, that is often synchronized by first joining all (other) threads that may use the key. But again, there are other alternatives.
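For instance, here is a minimal sketch of that ordering (illustrative only, not LMDB's code; buf_key, buf_destructor and worker are made-up names): the key is created before any worker starts, each worker's destructor runs when that worker exits, and the key is deleted only after every worker has been joined.

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t buf_key;

    /* Runs in each exiting thread that has non-NULL data for the key,
     * not in the thread that later calls pthread_key_delete(). */
    static void buf_destructor(void *p)
    {
        free(p);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_setspecific(buf_key, malloc(64)); /* per-thread data */
        /* ... use pthread_getspecific(buf_key) ... */
        return NULL;                    /* destructor fires here, in this thread */
    }

    int main(void)
    {
        pthread_t tid[4];

        /* Create the key before starting any thread that uses it. */
        pthread_key_create(&buf_key, buf_destructor);

        for (int i = 0; i < 4; i++)
            pthread_create(&tid[i], NULL, worker, NULL);

        /* Join first: no thread may still be using the key... */
        for (int i = 0; i < 4; i++)
            pthread_join(tid[i], NULL);

        /* ...and only then delete it (this does NOT run the destructors). */
        pthread_key_delete(buf_key);
        return 0;
    }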
The documentation of pthread_key_create and pthread_key_delete is ambiguous to me, personally, as to whether it is talking about the calls to pthread_key_create and pthread_key_delete or the data underneath the pointers. This is what the docs say:
The associated destructor functions used to free thread-specific data at thread exit time are only guaranteed to work correctly when called in the thread that allocated the thread-specific data.
The destructor functions those docs are talking about are the ones passed as the second argument to pthread_key_create().
And note well that that text is drawn from the Rationale section of the docs, not the Description section. It is talking about why the TSD destructor functions are not called by pthread_key_delete(), not trying to explain what the function does. That particular point is that TSD destructor functions must run in each thread carrying non-NULL TSD, as opposed to in the thread that calls pthread_key_delete().
So the question is, can we call mdb_env_open and mdb_env_close from two different threads, leading to pthread_key_create and pthread_key_delete from different threads, and expect correct behavior?
As far as pthreads is concerned, yes: the library's use of thread-specific data does not imply otherwise. However, you seem to be suggesting that there is a race between two different lines in the same function, mdb_env_close0, which can be the case only if that function is called in parallel by two different threads. The MDB docs say of mdb_env_close() that "Only a single thread may call this function." I would guess that they mean that to be scoped to a particular MDB environment. In any case, if you really have the race you think you have, then it seems to me that your program must be calling mdb_env_close() from multiple threads, contrary to specification.
So, as far as I know or can tell, the thread that calls mdb_env_close() does not need to be the same one that called mdb_env_open() (or mdb_env_create()), but it does need to be the only one that calls it.

How to reuse threads - pthreads c

I am programming using pthreads in C.
I have a parent thread which needs to create 4 child threads with id 0, 1, 2, 3.
When the parent thread gets data, it will split the data and assign it to 4 separate context variables - one for each sub-thread.
The sub-threads have to process this data and in the meantime the parent thread should wait on these threads.
Once these sub-threads have finished executing, they will set the output in their corresponding context variables and wait (for reuse).
Once the parent thread knows that all these sub-threads have completed this round, it computes the global output and prints it out.
Now it waits for new data (the sub-threads are not killed yet, they are just waiting).
If the parent thread gets more data the above process is repeated - albeit with the already created 4 threads.
If the parent thread receives a kill command (assume a specific kind of data), it indicates to all the sub-threads and they terminate themselves. Now the parent thread can terminate.
I am a Masters research student and I am encountering the need for the above scenario. I know that this can be done using pthread_cond_wait and pthread_cond_signal. I have written the code but it is just running indefinitely and I cannot figure out why.
My guess is that, the way I have coded it, I have over-complicated the scenario. It will be very helpful to know how this can be implemented. If there is a need, I can post a simplified version of my code to show what I am trying to do(even though I think that my approach is flawed!)...
Can you please give me any insights into how this scenario can be implemented using pthreads?
As far as can be seen from your description, there seems to be nothing wrong with the principle.
What you are trying to implement is a worker pool, I guess; there should be a lot of implementations out there. If the work that your threads are doing is a substantial computation (say at least a CPU second or so), such a scheme is complete overkill. Modern implementations of POSIX threads are efficient enough that they support the creation of a lot of threads, really a lot, and the overhead is not prohibitive.
The only thing that would be important, if you have your workers communicate through shared variables, mutexes etc. (and not via the return value of the thread), is that you start your threads detached, by using the attribute parameter to pthread_create.
Once you have such an implementation for your task, measure. Only then, if your profiler tells you that you spend a substantial amount of time in the pthread routines, start thinking of implementing (or using) a worker pool to recycle your threads.
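For the detached start suggested above, a minimal sketch (worker and start_detached_worker are placeholder names; error handling trimmed):

    #include <pthread.h>

    void *worker(void *arg);   /* communicates results via shared state, not its return value */

    int start_detached_worker(void *arg)
    {
        pthread_t      tid;
        pthread_attr_t attr;

        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        int rc = pthread_create(&tid, &attr, worker, arg);   /* no pthread_join() later */
        pthread_attr_destroy(&attr);
        return rc;
    }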
One producer-consumer queue with 4 threads hanging off it. The thread that wants to queue the four tasks assembles the four context structs containing, as well as all the other data stuff, a function pointer to an 'OnComplete' func. Then it submits all four contexts to the queue, atomically incrementing a taskCount up to 4 as it does so, and waits on an event/condvar/semaphore.
The four threads get a context from the P-C queue and work away.
When done, the threads call the 'OnComplete' function pointer.
In OnComplete, the threads atomically count down taskCount. If a thread decrements it to zero, it signals the event/condvar/semaphore and the originating thread runs on, knowing that all the tasks are done.
It's not that difficult to arrange it so that the assembly of the contexts and the synchro waiting is done in a task as well, allowing the pool to process multiple 'ForkAndWait' operations at once for multiple requesting threads.
I have to add that operations like this are a huge pile easier in an OO language. The latest Java, for example, has a fork/join thread pool (ForkJoinPool) that does exactly this kind of stuff, but C++ (or even C#, if you're into serfdom) is better than plain C.
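A rough sketch of the count-down part of that scheme, using a plain mutex and condition variable for the event (the queue itself and the context contents are elided; all names here are made up):

    #include <pthread.h>

    struct fork_join {
        pthread_mutex_t lock;
        pthread_cond_t  done;
        int             remaining;    /* tasks still outstanding */
    };

    /* Called by the submitting thread before queueing n contexts. */
    void fj_init(struct fork_join *fj, int n)
    {
        pthread_mutex_init(&fj->lock, NULL);
        pthread_cond_init(&fj->done, NULL);
        fj->remaining = n;
    }

    /* The workers' 'OnComplete': decrement, and wake the submitter at zero. */
    void fj_on_complete(struct fork_join *fj)
    {
        pthread_mutex_lock(&fj->lock);
        if (--fj->remaining == 0)
            pthread_cond_signal(&fj->done);
        pthread_mutex_unlock(&fj->lock);
    }

    /* The submitting thread blocks here after queueing the contexts. */
    void fj_wait(struct fork_join *fj)
    {
        pthread_mutex_lock(&fj->lock);
        while (fj->remaining > 0)             /* guard against spurious wakeups */
            pthread_cond_wait(&fj->done, &fj->lock);
        pthread_mutex_unlock(&fj->lock);
    }

Each context struct would carry a pointer to the shared fork_join so a worker can call fj_on_complete() from its OnComplete step.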

GLib hash table issues with signal handling code

I've got some system-level code that fires timers every once in a while and has a signal handler that manages these signals when they arrive. This works fine and seems completely reasonable. There are also two separate threads running alongside the main program; they do not share any variables, and use GLib's async queues to pass messages in one direction only.
The same code uses GLib's GHashTable to store, well, key/value pairs. When the signal code is commented out of the system, the hash table appears to operate fine. When it is enabled, however, there is a strange race condition where the call to g_hash_table_lookup actually returns NULL (meaning that there is no entry with the key used to look it up), when the entry is actually there (yes, I made sure by printing the whole list of key/value pairs with g_hash_table_foreach). Why would this occur most of the time? Is GLib's hash table implementation buggy? Sometimes the lookup call is successful.
It's a very particular situation, and I can clarify further if it didn't make sense, but I'm hoping I am doing something wrong so that this can actually be fixed.
More info: the code segments that are outside the signal handler but access the g_hash_table variable are surrounded by signal-blocking calls, so that the signal handler does not access these variables while the main code is in the middle of accessing them.
Generally, signal handlers can only set flags and make system calls
As it happens, there are severe restrictions in ISO C regarding what signal handlers can do. Most library entry points and most APIs are not even remotely 100% multi-thread-safe, and approximately 0.0% of them are signal-handler-safe; that is, there is an absolute prohibition against calling almost anything from a signal handler.
In particular, for GHashTable, g_hash_table_ref() and g_hash_table_unref() are the only API elements that are even thread-safe, and none of them are signal-handler-safe. ISO C only allows signal handlers to modify objects declared volatile sig_atomic_t, and only a couple of library routines may be called.
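The usual way out is to have the handler do nothing but set a flag, and to do all GHashTable work from the program's normal flow. A minimal sketch (on_timer and poll_timer are made-up names; handler registration via sigaction and the table's key type are assumptions):

    #include <signal.h>
    #include <glib.h>

    static volatile sig_atomic_t timer_fired = 0;

    /* Signal handler (registered elsewhere, e.g. for SIGALRM):
     * only sets a flag; touches no GLib data. */
    static void on_timer(int signo)
    {
        (void)signo;
        timer_fired = 1;
    }

    /* Main loop: all GHashTable access happens here, outside the handler. */
    static void poll_timer(GHashTable *table)
    {
        if (timer_fired) {
            timer_fired = 0;
            gpointer value = g_hash_table_lookup(table, "some-key"); /* string-keyed table assumed */
            /* ... handle the tick using value ... */
            (void)value;
        }
    }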
Some of us consider threaded systems to be intrinsically dangerous, practically radioactive sources of subtle bugs. A good place to start worrying is The Problem with Threads. (And note that signal handlers themselves are much worse. No one thinks an API is safe there...)

Is there a way to ‘join’ (block) in POSIX threads, without exiting the joinee?

I’m buried in multithreading / parallelism documents, trying to figure out how to implement a threading implementation in a programming language I’ve been designing.
I’m trying to map a mental model to the pthreads.h library, but I’m having trouble with one thing: I need my interpreter instances to continue to exist after they complete interpretation of a routine (the language’s closure/function data type), because I want to later assign other routines to them for interpretation, thus saving me the thread and interpreter setup/teardown time.
This would be fine, except that pthread_join(3) requires that I call pthread_exit(3) to ‘unblock’ the original thread. How can I block the original thread (when it needs the result of executing the routine), and then unblock it when interpretation of the child routine is complete?
Use a pthread_cond_t; wait on it on one thread and signal or broadcast it in the other.
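A minimal sketch of that, for the 'block until the routine finishes, then reuse the thread' case (names are made up; error checking omitted):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
    static int routine_done = 0;

    /* Interpreter thread: run the routine, then announce completion and
     * loop back for more work instead of calling pthread_exit(). */
    void announce_done(void)
    {
        pthread_mutex_lock(&lock);
        routine_done = 1;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
    }

    /* Original thread: block until the routine's result is ready. */
    void wait_for_done(void)
    {
        pthread_mutex_lock(&lock);
        while (!routine_done)             /* re-check: wakeups can be spurious */
            pthread_cond_wait(&cv, &lock);
        routine_done = 0;                 /* reset for the next routine */
        pthread_mutex_unlock(&lock);
    }

The interpreter thread would then block on another condition (or a work queue) waiting for its next routine, which is essentially the thread-pool idea suggested in the next answer.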
Sounds like you actually want an implementation of the thread pool pattern. It makes for a fairly simple conceptual model, without repeated thread creation and teardown costs. Some OSes support it directly; on others it should be reasonably simple to implement using a queue and a semaphore.

How to signal a buffer-full state between POSIX threads

I have two threads. The main thread, 'A', is responsible for message handling between a number of processes. When thread A gets a buffer-full message, it should inform thread B and pass a pointer to the buffer, which thread B will then process.
When thread B has finished it should inform thread A that it has finished.
How do I go about implementing this with POSIX threads in C on Linux? I have looked at condition variables - is this the way to go? I'm not experienced in multithreaded programming and would like some advice on the best avenue to take.
Thanks
If you relax the conditions that the buffer must be full before B starts processing it and that the buffer must be empty before A starts filling it again, then this is the classic producer-consumer problem.
If you cannot relax those conditions, then I do not see the benefit of separating the functionality between two threads. Since thread A cannot add to the buffer while thread B is processing, and thread B cannot do any processing while thread A is adding to the buffer, then all the work can be done in a single thread.
Yes, condition variables and mutexes are two things you have to use when implementing your solution.
You can take a look at the section "A few ways to use threads" for an explanation of how to do it.
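For the strict hand-off described in the question (A fills the buffer, B processes it, B tells A when it is done), one mutex, one condition variable and a state flag are enough. A rough sketch, assuming exactly one A and one B (names are made up):

    #include <pthread.h>

    enum buf_state { BUF_EMPTY, BUF_FULL };

    static pthread_mutex_t lock          = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  state_changed = PTHREAD_COND_INITIALIZER;
    static enum buf_state  state         = BUF_EMPTY;
    static void           *buf;           /* pointer handed from A to B */

    /* Thread A: hand a full buffer to B, then wait until B is done with it. */
    void a_submit_and_wait(void *full_buffer)
    {
        pthread_mutex_lock(&lock);
        buf = full_buffer;
        state = BUF_FULL;
        pthread_cond_signal(&state_changed);
        while (state != BUF_EMPTY)
            pthread_cond_wait(&state_changed, &lock);
        pthread_mutex_unlock(&lock);
    }

    /* Thread B: wait for a full buffer, process it, then tell A. */
    void b_process_loop(void (*process)(void *))
    {
        for (;;) {
            pthread_mutex_lock(&lock);
            while (state != BUF_FULL)
                pthread_cond_wait(&state_changed, &lock);
            void *p = buf;
            pthread_mutex_unlock(&lock);

            process(p);                    /* do the real work outside the lock */

            pthread_mutex_lock(&lock);
            state = BUF_EMPTY;
            pthread_cond_signal(&state_changed);
            pthread_mutex_unlock(&lock);
        }
    }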
How about using a POSIX semaphore to represent the 'number of filled buffers'? The pointers could be passed over a shared ring buffer. Depending on how you want to handle overflows, you may need to protect it with a mutex.
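A sketch of that semaphore-counting variant, assuming a fixed-size ring with one producer and one consumer (the names and the ring size are made up):

    #include <pthread.h>
    #include <semaphore.h>

    #define RING_SIZE 16

    static void           *ring[RING_SIZE];
    static unsigned         head, tail;                 /* producer / consumer indices */
    static pthread_mutex_t  ring_lock = PTHREAD_MUTEX_INITIALIZER;
    static sem_t            filled;                     /* number of filled buffers */
    static sem_t            space;                      /* free slots, bounds the ring */

    void ring_init(void)
    {
        sem_init(&filled, 0, 0);
        sem_init(&space, 0, RING_SIZE);
    }

    /* Thread A: publish a pointer to a filled buffer. */
    void ring_put(void *buffer)
    {
        sem_wait(&space);                  /* block if the ring itself is full */
        pthread_mutex_lock(&ring_lock);
        ring[head] = buffer;
        head = (head + 1) % RING_SIZE;
        pthread_mutex_unlock(&ring_lock);
        sem_post(&filled);                 /* one more filled buffer available */
    }

    /* Thread B: take the next filled buffer to process. */
    void *ring_get(void)
    {
        sem_wait(&filled);                 /* block until a buffer is available */
        pthread_mutex_lock(&ring_lock);
        void *buffer = ring[tail];
        tail = (tail + 1) % RING_SIZE;
        pthread_mutex_unlock(&ring_lock);
        sem_post(&space);
        return buffer;
    }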
