OpenSSL read/write thread safety - C

We have a service using OpenSSL version 1.0.2h in a multi-threaded environment.
One thread runs a blocking read, the other does periodic writes.
It crashes from time to time somewhere inside libssl.so, in the SSL_write function. The code calling SSL_write looks perfectly legal: it operates on a buffer allocated on the stack of the calling function. The crash is also very rare, which suggests it might be a race condition.
I found the following article, which says that using a single SSL object in two threads, one for reading and one for writing, is not safe even though CRYPTO_set_locking_callback is set. Is that correct? If so, what is the suggested way to resolve this? If I hold a mutex across the blocking read, I will not be able to write.
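For reference, the CRYPTO_set_locking_callback setup mentioned above typically looks something like the sketch below under OpenSSL 1.0.x (an assumed illustration, not the actual service code). Note that these callbacks only protect libcrypto's shared internal state; they do not serialize two threads calling SSL_read and SSL_write on the same SSL object.

    /* Sketch: installing OpenSSL 1.0.x locking callbacks (assumed setup).
     * This protects libcrypto's shared structures across threads, but it
     * does NOT make concurrent SSL_read/SSL_write on one SSL* safe. */
    #include <openssl/crypto.h>
    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t *ossl_locks;

    static void ossl_locking_cb(int mode, int n, const char *file, int line)
    {
        (void)file; (void)line;
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&ossl_locks[n]);
        else
            pthread_mutex_unlock(&ossl_locks[n]);
    }

    static unsigned long ossl_thread_id_cb(void)
    {
        return (unsigned long)pthread_self();
    }

    void init_openssl_locking(void)
    {
        int i, n = CRYPTO_num_locks();
        ossl_locks = malloc((size_t)n * sizeof(*ossl_locks));
        for (i = 0; i < n; i++)
            pthread_mutex_init(&ossl_locks[i], NULL);
        CRYPTO_set_id_callback(ossl_thread_id_cb);
        CRYPTO_set_locking_callback(ossl_locking_cb);
    }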

We suggest modifying the timeout thresholds.
Tracing and debugging a race condition is difficult, and eventually you will have to change the timeout and/or buffer parameters anyway, so it is better to study these parameters now.

Related

libaio and syncing file output

I asked a similar question to this previously, but the comments I got made me think I didn't express myself well, so I deleted it and will try again. I have C code that uses libaio asynchronous I/O threads to write to a file. Later in the code, the memory locations being written are re-populated. Needless to say, I have to make sure the writing is complete before the re-populating starts. If I call fsync() before the re-population starts, will this cause the main thread to block until the writing from all threads is complete? The fsync man page seems to imply this, but I can't find a clear statement of it. There is also the aio_fsync function, but its man page says "Note that this is a request only; it does not wait for I/O completion." But waiting for I/O completion is exactly what I need.
I know that I can check on all threads writing to said file, one by one, and wait until they are done. But I was hoping for a one-liner like just calling fsync(). Is there such a thing?
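One approach, assuming the writes were submitted through libaio's io_submit(), is to keep reaping completions with io_getevents() until every submitted request has been accounted for. A minimal sketch (the function name and the count of submitted requests are hypothetical; link with -laio):

    /* Block until 'nr_submitted' libaio completions have been reaped from
     * 'ctx'; only after this returns is it safe to reuse the write buffers.
     * Assumes 'ctx' came from io_setup() and the requests from io_submit(). */
    #include <libaio.h>

    int wait_for_aio_writes(io_context_t ctx, long nr_submitted)
    {
        struct io_event events[64];
        long done = 0;

        while (done < nr_submitted) {
            long want = nr_submitted - done;
            if (want > 64)
                want = 64;
            /* With no timeout, io_getevents blocks until at least 'want'
             * completions are available. */
            int n = io_getevents(ctx, want, 64, events, NULL);
            if (n < 0)
                return n;   /* negative errno on failure */
            done += n;
        }
        return 0;
    }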

Better replacement for exit(), atexit() in C

I am new to C programming. I used to think using exit() was the cleanest way to terminate a process (as it is capable of removing temporary files, closing open files, and performing normal process termination), but when I ran the man exit command in a terminal (Ubuntu 16.04.5, gcc 5.4.0) I saw the following line:
The exit() function uses a global variable that is not protected, so
it is not thread-safe.
After that I tried to do some research on better replacements for exit() (to change my programming habits from the beginning). While doing that I came across this question, in which the side effects of exit() are mentioned and it is suggested to use atexit() properly to solve the problem (at least partially).
There were some cases in which using abort() was preferred over exit(). On top of that, this question suggests that atexit() might also be harmful.
So here are my questions:
Is there any general, better way of terminating a process (one that is guaranteed to clean up like exit() and is not harmful to the system in any case)?
If the answer to the first question is no, what are the best available ways of terminating a process (including the cases in which each is most useful)?
what is the best possible way of process terminating
1. If your code is single-threaded, just use exit().
2. Otherwise, make sure all but one thread have ended before the last one, and then that last thread can safely call exit() because of 1. above.
Given that power/hardware failures can happen at any time, the impossibility, or at least extreme difficulty, of reliably terminating threads with user code, and the chaotic nature of the use of memory pools etc. in many non-trivial multithreaded apps, it is better to design apps and systems that can clean up temp files etc. on start-up, rather than trying to micro-manage shutdown.
'Clean up all the resources you allocate before you exit' sounds like good advice in a classroom or lecture, but quickly becomes a whole chain of albatrosses around your neck when faced with a dozen threads, queues and pools in a continually changing dynamic system.
If you can, and you are running under a non-trivial OS, let it do its job and clean up for you. It's much better at it than your user code will ever be.
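For the single-threaded case described in point 1 above, a minimal sketch of pairing exit() with an atexit() handler (the temp-file path and names are made up for illustration):

    /* Sketch: register a cleanup handler, then rely on exit() to run it
     * and flush stdio. Only safe advice for single-threaded programs. */
    #include <stdio.h>
    #include <stdlib.h>

    static void remove_temp_files(void)
    {
        remove("/tmp/myapp.tmp");   /* hypothetical temp file */
    }

    int main(void)
    {
        if (atexit(remove_temp_files) != 0) {
            fprintf(stderr, "atexit registration failed\n");
            return EXIT_FAILURE;
        }

        /* ... do the real work ... */

        exit(EXIT_SUCCESS);         /* runs registered handlers, flushes stdio */
    }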

C read and thread safety (linux)

What would happen if you call read (or write, or both) in two different threads, on the same file descriptor (let's say we are interested both in a local file and in a socket file descriptor), without explicitly using any synchronization mechanism?
read and write are syscalls, so on a single-core CPU it is probably unlikely that two reads would be executed "at the same time". But with multiple cores...
What will the Linux kernel do?
And to be a bit more general: is the behavior always the same for other kernels (like the BSDs)?
Edit: According to the close documentation, we should be sure that the file descriptor isn't being used by a syscall in another thread. So it seems that explicit synchronization would be required before closing a file descriptor (and therefore also around read/write, if threads that may call them are still running).
Any system level (syscall) file descriptor access is thread safe in all mainstream UNIX-like OSes.
Though, depending on their age, they are not necessarily signal-safe.
If you call read, write, accept or similar on a file descriptor from two different tasks then the kernel's internal locking mechanism will resolve contention.
For reads, each byte will only be read once, though, and writes may land in an undefined order.
The stdio library functions fread, fwrite and co. also lock their control structures internally by default, though by using flags it is possible to disable that.
The comment about close is because it doesn't make a lot of sense to close a file descriptor in any situation in which some other thread might be trying to use it. So while it is 'safe' as far as the kernel is concerned, it can lead to odd, hard to diagnose corner cases.
If a thread closes a file descriptor while a second thread is trying to read from it, the second thread may get an unexpected EBADF error. Worse, if a third thread is simultaneously opening a new file, that might reallocate the same fd, and the second thread might accidentally read from the new file rather than the one it was expecting...
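As a side note on the stdio locking mentioned above: if you need several stdio calls to appear as one atomic unit with respect to other threads, POSIX provides flockfile()/funlockfile() to hold the stream's internal lock across them. A minimal sketch (the function and names are made up):

    /* Sketch: keep a multi-call stdio write atomic across threads by
     * holding the FILE's internal lock explicitly. */
    #include <stdio.h>

    void log_record(FILE *fp, const char *tag, const char *msg)
    {
        flockfile(fp);          /* take the stream's internal lock */
        fputs(tag, fp);
        fputs(": ", fp);
        fputs(msg, fp);
        fputc('\n', fp);
        funlockfile(fp);        /* release it as one unit */
    }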
Have a care for those who follow in your footsteps
It's perfectly normal to protect the file descriptor with a mutex or semaphore. It removes any dependence on kernel behaviour, so your message boundaries are now certain. You then don't have to cite the last paragraph at the bottom of a 15,489-line man page which explains why the mutex isn't necessary (I exaggerated, but you get my meaning).
It also makes it clear to anyone reading your code that the file descriptor is being used by more than one thread.
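A minimal sketch of that approach, assuming POSIX threads (the wrapper name is hypothetical):

    /* Sketch: serialize writes to a shared file descriptor with a mutex so
     * that messages from different threads cannot interleave. */
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t fd_lock = PTHREAD_MUTEX_INITIALIZER;

    ssize_t write_message(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t left = len;

        pthread_mutex_lock(&fd_lock);
        while (left > 0) {              /* handle partial writes */
            ssize_t n = write(fd, p, left);
            if (n < 0) {
                pthread_mutex_unlock(&fd_lock);
                return -1;              /* caller inspects errno */
            }
            p += n;
            left -= (size_t)n;
        }
        pthread_mutex_unlock(&fd_lock);
        return (ssize_t)len;
    }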
Fringe Benefit
There is a fringe benefit to using a mutex that way. Suppose you've got different messages coming from the different threads and some of those messages are more important than others. All you need to do is set the thread priorities to reflect their messages' importance. That way the OS will ensure that your messages will be sent in order of importance for minimal effort on your part.
The result would depend on how the threads are scheduled to run at that particular instant in time.
One way to avoid potential undefined behavior with multi-threading is to treat these calls as you would any other shared-memory operation, e.g. updating a linked list or changing a shared variable.
If you use a mutex, semaphore, lock, or some other synchronization mechanism, it should work as intended.

Serial communication C/C++ Linux thread safe?

My question is quite simple. Is reading and writing from and to a serial port under Linux thread-safe? Can I read and write at the same time from different threads? Is it even possible to do 2 writes simultaneously? I'm not planning on doing so but this might be interesting for others. I just have one thread that reads and another one that writes.
There is little to find about this topic.
In more detail: I am using write() and read() on a file descriptor that I obtained with open(), and I am doing so simultaneously.
Thanks all!
Roel
There are two aspects to this:
1. What the C implementation does.
2. What the kernel does.
Concerning the kernel, I'm pretty sure it will either support this or return a suitable error; otherwise it would be too easy to exploit. The C implementation of read() is just a syscall wrapper (see what happens after read is called for a Linux socket), so this doesn't change anything. However, I still don't see any guarantees documented there, so this is not reliable.
If you really want two threads, I'd suggest you stick with the stdio functions (fopen/fread/fwrite/fclose), because there you can leverage the fact that glibc synchronizes these calls with an internal mutex.
However, if you are doing a blocking read in one thread, the other thread could be blocked waiting to write something; this could be a deadlock. A solution is to use select() to detect when there is data ready to be read or buffer space available to write. This is done in a single thread, but while the initial code is a bit larger, in the end this approach is easier and cleaner, even more so if multiple streams are involved.
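A minimal sketch of that single-threaded select() approach (the loop body and buffers are simplified, and the fd is assumed to have been opened non-blocking):

    /* Sketch: multiplex reads and writes on one serial fd from one thread. */
    #include <sys/select.h>
    #include <unistd.h>

    void serial_loop(int fd)
    {
        char inbuf[256];
        const char outbuf[] = "ping\n";
        int have_data_to_send = 1;      /* assume we have something to write */

        for (;;) {
            fd_set rfds, wfds;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            FD_SET(fd, &rfds);
            if (have_data_to_send)
                FD_SET(fd, &wfds);

            if (select(fd + 1, &rfds, &wfds, NULL, NULL) < 0)
                break;                  /* error handling (EINTR etc.) omitted */

            if (FD_ISSET(fd, &rfds)) {
                ssize_t n = read(fd, inbuf, sizeof inbuf);
                if (n <= 0)
                    break;              /* EOF or error */
                /* ... process n bytes ... */
            }
            if (FD_ISSET(fd, &wfds)) {
                if (write(fd, outbuf, sizeof outbuf - 1) > 0)
                    have_data_to_send = 0;
            }
        }
    }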

Glib hash table issues with signal handling code

I've got some system-level code that fires timers every once in a while and has a signal handler that manages these signals when they arrive. This works fine and seems completely reasonable. There are also two separate threads running alongside the main program; they do not share any variables, and use GLib's async queues to pass messages in one direction only.
The same code uses GLib's GHashTable to store, well, key/value pairs. When the signal code is commented out of the system, the hash table appears to operate fine. When it is enabled, however, there is a strange race condition where the call to g_hash_table_lookup actually returns NULL (meaning that there is no entry with the key used to look it up) when the entry is actually there (yes, I made sure by printing the whole list of key/value pairs with g_hash_table_foreach). Why would this occur most of the time? Is GLib's hash table implementation buggy? Sometimes the lookup call is successful.
It's a very particular situation, and I can clarify further if it didn't make sense, but I'm hoping I am doing something wrong so that this can actually be fixed.
More info: the code segments that access the g_hash_table variable outside the signal handler are surrounded by signal-blocking calls, so that the signal handler cannot run while the main code is accessing the table.
Generally, signal handlers can only set flags and make system calls
As it happens, there are severe restrictions in ISO C regarding what signal handlers can do. Most library entry points and most APIs are not even remotely 100% multi-thread-safe, and approximately 0.0% of them are signal-handler-safe. That is, there is an absolute prohibition against calling almost anything from a signal handler.
In particular, for GHashTable, g_hash_table_ref() and g_hash_table_unref() are the only API elements that are even thread-safe, and neither of them is signal-handler-safe. ISO C only allows signal handlers to modify objects declared volatile sig_atomic_t, and only a couple of library routines may be called.
Some of us consider threaded systems to be intrinsically dangerous, practically radioactive sources of subtle bugs. A good place to start worrying is The Problem with Threads. (And note that signal handlers themselves are much worse. No one thinks an API is safe there...)
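The usual pattern that follows from this, sketched below, is to have the signal handler do nothing but set a volatile sig_atomic_t flag, and to do all GHashTable work from the main loop outside signal context (the signal choice and loop structure here are illustrative, not taken from the question):

    /* Sketch: handler only sets a flag; the main loop does the real work. */
    #include <signal.h>

    static volatile sig_atomic_t timer_fired = 0;

    static void on_timer(int signo)
    {
        (void)signo;
        timer_fired = 1;            /* the only thing the handler does */
    }

    int main(void)
    {
        struct sigaction sa;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sa.sa_handler = on_timer;
        sigaction(SIGALRM, &sa, NULL);

        for (;;) {
            if (timer_fired) {
                timer_fired = 0;
                /* safe place to call g_hash_table_lookup() etc. */
            }
            /* ... rest of the event loop (a real program would block here
             *     rather than spin) ... */
        }
    }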
