Synchronize two processes using two different states - c

I am trying to work out a way to synchronize two processes which share data.
Basically I have two processes linked using shared memory. I need process A to set some data in the shared memory area, then process B to read that data and act on it.
The sequence of events I am looking to have is:
B blocks waiting for data available signal
A writes data
A signals data available
B reads data
B blocks waiting for data not available signal
A signals data not available
All goes back to the beginning.
In other terms, B would block until it got a "1" signal, get the data, then block again until that signal went to "0".
I have managed to emulate it OK using purely shared memory, but either I block using a while loop which consumes 100% of CPU time, or I use a while loop with a nanosleep in it which sometimes misses some of the signals.
I have tried using semaphores, but I can only find a way to wait for a zero, not for a one, and trying to use two semaphores just didn't work. I don't think semaphores are the way to go.
There will be numerous processes all accessing the same shared memory area, and all processes need to be notified when that shared memory has been modified.
It's basically trying to emulate a hardware data and control bus, where events are edge rather than level triggered. It's the transitions between states I am interested in, rather than the states themselves.
So, any ideas or thoughts?

Linux has its own eventfd(2) facility that you can incorporate into your normal poll/select loop. You can pass eventfd file descriptor from process to process through a UNIX socket the usual way, or just inherit it with fork(2).
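For example, here is a minimal sketch of that approach, with the eventfd inherited across fork(2); error handling and the poll timeout are pared down:

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int efd = eventfd(0, 0);                /* counter starts at 0 */
    if (efd == -1) { perror("eventfd"); return 1; }

    if (fork() == 0) {                      /* child plays the role of B */
        struct pollfd pfd = { .fd = efd, .events = POLLIN };
        poll(&pfd, 1, -1);                  /* blocks until A writes */
        uint64_t val;
        read(efd, &val, sizeof val);        /* consumes the event, resets the counter */
        printf("B woke up, counter was %llu\n", (unsigned long long)val);
        _exit(0);
    }

    /* parent plays the role of A: write to shared memory here, then signal */
    uint64_t one = 1;
    write(efd, &one, sizeof one);           /* adds 1 to the counter, wakes the poller */
    wait(NULL);
    return 0;
}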
Edit 0:
After re-reading the question I think one of your options is signals and process groups: start your "listening" processes under the same process group (setpgid(2)), then signal them all with negative pid argument to kill(2) or sigqueue(2). Again, Linux provides signalfd(2) for polling and avoiding slow signal trampolines.
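A sketch of the listening side under that scheme; it assumes the writer signals the whole group with something like kill(-pgid, SIGUSR1):

#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGUSR1);
    sigprocmask(SIG_BLOCK, &mask, NULL);    /* the signal must be blocked first */

    int sfd = signalfd(-1, &mask, 0);       /* pollable fd that delivers SIGUSR1 */
    if (sfd == -1) { perror("signalfd"); return 1; }

    struct signalfd_siginfo si;
    /* read() blocks until the writer signals the process group */
    while (read(sfd, &si, sizeof si) == sizeof si)
        printf("got signal %u from pid %u\n", si.ssi_signo, si.ssi_pid);
    return 0;
}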

If only two processes are involved, you can use a file, shared memory, or even networking to pass the flag or signal. If there are more processes, there may be more suitable solutions that involve modifying the kernel. There is a single shared memory region in your question, right? How are the signals passed now?

In Linux, all POSIX control structures (mutexes, conditions, read-write locks, semaphores) have an option such that they can also be used between processes if they reside in shared memory. For the sequence that you describe, a classic mutex/condition pair seems to fit the job well. Look into the man pages of the ..._init functions for these structures.
Linux has other, lower-level utilities such as futex to handle this even more efficiently, but these are probably not the right tools to start with.
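A minimal sketch of such a pair living in shared memory; the struct layout and field names are invented for the example, and the struct itself would sit in a segment obtained with shm_open/mmap:

#include <pthread.h>

/* Lives inside the shared-memory segment (e.g. from shm_open + mmap). */
struct shared_area {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             data_available;   /* the "1"/"0" state from the question */
    /* ... payload ... */
};

/* Run once, in the process that creates the segment. */
void init_shared(struct shared_area *s)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;

    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &ma);

    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->cond, &ca);

    s->data_available = 0;
}

/* Process B: block until the 0 -> 1 transition, then read. */
void wait_for_data(struct shared_area *s)
{
    pthread_mutex_lock(&s->lock);
    while (!s->data_available)
        pthread_cond_wait(&s->cond, &s->lock);
    /* ... read the data ... */
    pthread_mutex_unlock(&s->lock);
}

/* Process A: publish data, flip the state, wake every waiting reader. */
void publish_data(struct shared_area *s)
{
    pthread_mutex_lock(&s->lock);
    /* ... write the data ... */
    s->data_available = 1;
    pthread_cond_broadcast(&s->cond);
    pthread_mutex_unlock(&s->lock);
}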

Single Reader & Single Writer
This can be implemented using semaphores.
In the POSIX semaphore API, sem_wait() blocks while the semaphore's count is zero; once the count is incremented by a sem_post() from the other process, the wait finishes (decrementing the count again).
In this case you have to use 2 semaphores for synchronization.
process 1 (reader):
sem_wait(sem1);   /* block until the writer signals "data ready" */
/* ... read the shared data ... */
sem_post(sem2);   /* signal "buffer consumed" back to the writer */
process 2 (writer):
sem_wait(sem2);   /* block until the reader signals "buffer consumed" */
/* ... write the shared data ... */
sem_post(sem1);   /* signal "data ready" to the reader */
In this way you can achieve synchronization in shared memory.
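For instance, a fuller sketch with named POSIX semaphores; the semaphore names are arbitrary, and the initial values (sem1/data_ready = 0, sem2/buffer_free = 1) are what make the writer run first:

#include <fcntl.h>
#include <semaphore.h>
#include <unistd.h>

/* Both processes open the same named semaphores.  data_ready starts at 0,
   buffer_free starts at 1, so the writer gets to run first. */
static sem_t *data_ready, *buffer_free;

static void open_sems(void)
{
    data_ready  = sem_open("/sem_data_ready",  O_CREAT, 0600, 0);
    buffer_free = sem_open("/sem_buffer_free", O_CREAT, 0600, 1);
}

static void reader_loop(void)          /* process 1 */
{
    for (;;) {
        sem_wait(data_ready);          /* block until the writer posts */
        /* ... read from the shared memory segment ... */
        sem_post(buffer_free);         /* signal "data consumed" */
    }
}

static void writer_loop(void)          /* process 2 */
{
    for (;;) {
        sem_wait(buffer_free);         /* wait until the reader is done */
        /* ... write into the shared memory segment ... */
        sem_post(data_ready);          /* signal "data available" */
    }
}

int main(void)
{
    open_sems();
    if (fork() == 0)
        reader_loop();
    writer_loop();
}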

Related

Is fork Thread Safe?

Let me explain: I have already been developing an application on Linux which forks and execs an external binary and waits for it to finish. Results are communicated by shm files that are unique to the fork + process. The entire code is encapsulated within a class.
Now I am considering threading the process in order to speed things up. Having many different instances of class functions fork and execute the binary concurrently (with different parameters) and communicate results with their own unique shm files.
Is this thread safe? If I fork within a thread, apart from being safe, is there something I have to watch for? Any advice or help is much appreciated!
The problem is that fork() copies only the calling thread, and any mutexes held by other threads stay locked forever in the forked child. The pthreads solution is the pthread_atfork() handlers. You can register three handlers: a prefork handler, a parent handler, and a child handler. When fork() happens, the prefork handler is called before the fork and is expected to acquire all application mutexes; the parent and child handlers must then release all mutexes in the parent and child processes respectively.
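A minimal sketch of registering such handlers for one application mutex (the lock name is made up):

#include <pthread.h>

/* A lock the application itself owns (name is hypothetical). */
static pthread_mutex_t app_lock = PTHREAD_MUTEX_INITIALIZER;

static void prefork(void)       { pthread_mutex_lock(&app_lock); }
static void parent_after(void)  { pthread_mutex_unlock(&app_lock); }
static void child_after(void)   { pthread_mutex_unlock(&app_lock); }

/* Call once during start-up, before any fork() can happen. */
void install_fork_handlers(void)
{
    pthread_atfork(prefork, parent_after, child_after);
}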
This isn't the end of the story though! Libraries call pthread_atfork to register handlers for library-specific mutexes; libc does this, for example. This is a good thing: the application can't possibly know about the mutexes held by 3rd party libraries, so each library must call pthread_atfork to ensure its own mutexes are cleaned up in the event of a fork().
The problem is that the order that pthread_atfork handlers are called for unrelated libraries is undefined (it depends on the order that the libraries are loaded by the program). So this means that technically a deadlock can happen inside of a prefork handler because of a race condition.
For example, consider this sequence:
Thread T1 calls fork()
libc prefork handlers are called in T1 (e.g. T1 now holds all libc locks)
Next, in Thread T2, a 3rd party library A acquires its own mutex AM, and then makes a libc call which requires a mutex. This blocks, because libc mutexes are held by T1.
Thread T1 runs prefork handler for library A, which blocks waiting to obtain AM, which is held by T2.
There's your deadlock, and it's unrelated to your own mutexes or code.
This actually happened on a project I once worked on. The advice I had found at that time was to choose fork or threads but not both. But for some applications that's probably not practical.
It's safe to fork in a multithreaded program as long as you are very careful about the code between fork and exec. You can make only reentrant (aka async-signal-safe) system calls in that span. In theory, you are not allowed to malloc or free there, although in practice the default Linux allocator is safe, and Linux libraries have come to rely on this. The end result is that you must use the default allocator.
Back at the Dawn of Time, we called threads "lightweight processes" because while they act a lot like processes, they're not identical. The biggest distinction is that threads by definition live in the same address space of one process. This has advantages: switching from thread to thread is fast, they inherently share memory so inter-thread communications are fast, and creating and disposing of threads is fast.
The distinction here is with "heavyweight processes", which are complete address spaces. A new heavyweight process is created by fork(2). As virtual memory came into the UNIX world, that was augmented with vfork(2) and some others.
A fork(2) copies the entire address space of the process, including all the registers, and puts that process under the control of the operating system scheduler; the next time the scheduler comes around, the instruction counter picks up at the next instruction -- the forked child process is a clone of the parent. (If you want to run another program, say because you're writing a shell, you follow the fork with an exec(2) call, which loads that new address space with a new program, replacing the one that was cloned.)
Basically, your answer is buried in that explanation: when you have a process with many LWPs (threads) and you fork the process, you will have two independent processes with many threads, running concurrently.
This trick is even useful: in many programs, you have a parent process that may have many threads, some of which fork new child processes. (For example, an HTTP server might do that: each connection to port 80 is handled by a thread, and then a child process for something like a CGI program could be forked; exec(2) would then be called to run the CGI program in place of the cloned parent.)
While you can use Linux's NPTL pthreads(7) support for your program, threads are an awkward fit on Unix systems, as you've discovered with your fork(2) question.
Since fork(2) is a very cheap operation on modern systems, you might do better to just fork(2) your process when you have more handling to perform. It depends upon how much data you intend to move back and forth: the share-nothing philosophy of forked processes is good for reducing shared-data bugs, but it does mean you either need to create pipes to move data between processes or use shared memory (shmget(2) or shm_open(3)).
But if you choose to use threading, you can fork(2) a new process, with the following hints from the fork(2) manpage:
* The child process is created with a single thread — the one that called fork(). The entire virtual address space of the parent is replicated in the child, including the states of mutexes, condition variables, and other pthreads objects; the use of pthread_atfork(3) may be helpful for dealing with problems that this can cause.
Provided you quickly either call exec() or _exit() in the forked child process, you're ok in practice.
You might want to use posix_spawn() instead which will probably do the Right Thing.
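A sketch of what that looks like; the spawned command and its arguments are just placeholders:

#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };   /* placeholder command */

    /* NULL file actions and attributes: default behaviour. */
    int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawnp: error %d\n", err);
        return 1;
    }

    int status;
    waitpid(pid, &status, 0);
    return 0;
}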
My experience of fork()'ing within threads is really bad. The software generally fails pretty quickly.
I've found several solutions to the matter, and although you may not like them much, I think these are generally the best way to avoid close-to-undebuggable errors.
Fork first
Assuming you know the number of external processes you need at the start, you can create them upfront and just have them sit there waiting for an event (i.e. read from a blocking pipe, wait on a semaphore, etc.)
Once you forked enough children you are free to use threads and communicate with those forked processes via your pipes, semaphores, etc. From the time you create a first thread, you cannot call fork anymore. Keep in mind that if you're using 3rd party libraries which may create threads, those have to be used/initialized after the fork() calls happened.
Note that you can then start using threads within the main and fork()'ed processes.
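A rough sketch of this pattern, assuming the number of workers is known up front and work is handed over pipes (the worker count and message format are arbitrary):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

int main(void)
{
    int to_worker[NWORKERS][2];

    /* Fork every worker before any pthread_create() call in the program. */
    for (int i = 0; i < NWORKERS; i++) {
        pipe(to_worker[i]);
        if (fork() == 0) {
            /* Child: keep only the read end of its own pipe. */
            for (int j = 0; j <= i; j++) {
                close(to_worker[j][1]);
                if (j != i)
                    close(to_worker[j][0]);
            }
            char cmd[64];
            ssize_t n;
            /* The worker just sits on a blocking read until work arrives. */
            while ((n = read(to_worker[i][0], cmd, sizeof cmd - 1)) > 0) {
                cmd[n] = '\0';
                printf("worker %d got: %s\n", i, cmd);
            }
            _exit(0);
        }
        close(to_worker[i][0]);            /* parent keeps the write ends */
    }

    /* From here on, threads may be created; they hand work to the
       already-forked workers through the write ends of the pipes. */
    for (int i = 0; i < NWORKERS; i++) {
        dprintf(to_worker[i][1], "job-%d", i);
        close(to_worker[i][1]);
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}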
Know your state
In some circumstances, it may be possible for you to stop all of your threads to start a process and then restart your threads. This is somewhat similar to point (1) in the sense that you do not want threads running at the time you call fork(), although it requires a way for you to know about all the threads currently running in your software (something not always possible with 3rd party libraries).
Remember that "stopping a thread" using a wait is not going to work. You have to join with the thread so it has fully exited, because a wait requires a mutex, and mutexes need to be unlocked when you call fork(). You just cannot know when the wait is going to unlock/re-lock the mutex, and that's usually where you get stuck.
Choose one or the other
The other obvious possibility is to choose one or the other and not bother with whether you're going to interfere with one or the other. This is by far the simplest method if at all possible in your software.
Create Threads only when Necessary
In some software, one creates one or more threads in a function, uses said threads, then joins all of them when exiting the function. This is somewhat equivalent to point (2) above, only you (micro-)manage threads as required instead of creating threads that sit around and get used when necessary. This will work too; just keep in mind that creating a thread is a costly call. It has to allocate a new task with a stack and its own set of registers... it is a complex function. However, this makes it easy to know when you have threads running, and except from within those functions, you are free to call fork().
In my programming, I have used all of these solutions. I used point (2) because I was using the threaded version of log4cplus and needed to use fork() for some parts of my software.
As mentioned by others, if you are using a fork() to then call execve(), the idea is to do as little as possible between the two calls. That is likely to work 99.999% of the time (many people use system() or popen() with fairly good success too, and these do similar things). The fact is that if you do not hit any of the mutexes held by the other threads, then this will work without issue.
On the other hand, if, like me, you want to do a fork() and never call execve(), then it's not likely to work right while any thread is running.
What is actually happening?
The issue is that fork() creates a separate copy of only the current task (a process under Linux is called a task in the kernel).
Each time you create a new thread (pthread_create()), you also create a new task, but within the same process (i.e. the new task shares the process space: memory, file descriptors, ownership, etc.). However, a fork() ignores those extra tasks when duplicating the currently running task.
+-----------------------------------------------+
| Process A                                     |
|                                               |
|  +----------+   +----------+   +----------+   |
|  | thread 1 |   | thread 2 |   | thread 3 |   |
|  +----------+   +----+-----+   +----------+   |
|                      |                        |
+----------------------|------------------------+
                       |  fork()
                       |
+----------------------|------------------------+
|                      v          Process B     |
|                 +----------+                  |
|                 | thread 1 |                  |
|                 +----------+                  |
|                                               |
+-----------------------------------------------+
So in Process B, we lose thread 1 & thread 3 from Process A. This means that if either or both have a lock on mutexes or something similar, then Process B is going to lock up quickly. The locks are the worst, but any resources that either thread still has at the time the fork() happens are lost (socket connection, memory allocations, device handle, etc.) This is where point (2) above comes in. You need to know your state before the fork(). If you have a very small number of threads or worker threads defined in one place and can easily stop all of them, then it will be easy enough.
If you are using the Unix fork() system call, then you are not technically using threads, you are using processes: they will have their own memory space and therefore cannot interfere with each other.
As long as each process uses different files, there should not be any issue.

Are locked pages inherited by pthreads?

I have a little paging problem on my realtime system, and wanted to know how exactly linux should behave in my particular case.
Among various other things, my application spawns 2 threads using pthread_create(), which operate on a set of shared buffers.
The first thread, let's call it A, reads data from a device, performs some calculations on it, and writes the results into one of the buffers.
Once that buffer is full, thread B will read all the results and send them to a PC via ethernet, while thread A writes into the next buffer.
I have noticed that each time thread A starts writing into a previously unused buffer, I miss some interrupts and lose data (there is an ID in the header of each packet, and if it increments by more than one, I have missed interrupts).
So if I use n buffers, I get exactly n bursts of missed interrupts at the start of my data acquisition (therefore the problem is definitely caused by paging).
To fix this, I used mlock() and memset() on all of the buffers to make sure they are actually paged in.
This fixed my problem, but I was wondering where in my code would be the correct place to do this: in my main application, or in one/both of the threads? (Currently I do it in both threads.)
According to the libc documentation (section 3.4.2 "Locked Memory Details"), memory locks are not inherited by child processes created using fork().
So what about pthreads? Do they behave the same way, or would they inherit those locks from my main process?
Some background information about my system, even though I don't think it matters in this particular case:
It is an embedded system powered by a SoC with a dual-core Cortex-A9 running Linux 4.1.22 with PREEMPT_RT.
The interrupt frequency is 4kHz
The thread priorities (as shown in htop) are -99 for the interrupt, -98 for thread A (both of which are higher than the standard priority of -51 for all other interrupts) and -2 for thread B
EDIT:
I have done some additional tests, calling my page locking function from different threads (and in main).
If I lock the pages in main(), and then try to lock them again in one of the threads, I would expect to see a large number of page faults for main() but no page faults for the thread itself (because the pages should already be locked). However, htop tells a different story: I see a large number of page faults (MINFLT column) for each and every thread that locks those pages.
To me, that would suggest that pthreads actually do have the same limitation as child processes spawned using fork(). And if this is the case, locking them in both threads (but not in main) would be the correct procedure.
Threads share the same memory management context. If a page is resident for one thread, it's resident for all threads in the same process.
The implication of this is that memory locking is per-process, not per-thread.
You are probably still seeing minor faults on the first write because a fault is used to mark the page dirty. You can avoid this by also writing to each page after locking.
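A sketch of doing both steps once, from main(), before the threads are created (the buffer count and size are placeholders):

#include <string.h>
#include <sys/mman.h>

#define NBUF     8
#define BUF_SIZE (1 << 20)

static char buffers[NBUF][BUF_SIZE];

/* Call once from main(), before pthread_create(). */
int prepare_buffers(void)
{
    /* Lock every page of every buffer into RAM; the lock is per-process,
       so both threads inherit it. */
    if (mlock(buffers, sizeof buffers) != 0)
        return -1;

    /* Touch (write) each page so it is already dirty; otherwise the first
       write from thread A still takes a minor fault to mark the page dirty. */
    memset(buffers, 0, sizeof buffers);
    return 0;
}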

c/c++ joining processes?

I am new to threads and processes.
I have code that works fine right now with forking the code into multiple processes. However, each process needs to add to a global variable, but from what I read, each time the process forks it takes a copy of the global and updates it independently. Is there a way to join them, like you can with threads?
Different processes can communicate and exchange data via shared memory.
On linux, you can look:
man shm_overview
for attaching a memory segment on several processes
and
man sem_overview
for the semaphore library for controlling parallel access.
You should define a struct with two fields, one for your global and one for a semaphore. Then, before any forking occurs, create some shared memory in the parent process big enough to hold this struct and initialize one there. In the children, map in the shared memory so they can access the global. All processes, parent and children, should obey the rules of the semaphore when accessing the global.
To avoid unnecessary blocking which can hurt performance, try not to hold the semaphore too long. When reading the global, make a quick copy of it in a process and use that, rather than holding the semaphore for the entire time you are using its value. Likewise, when changing the global, prepare your changes ahead of time (before you grab the semaphore) and, once you have the semaphore, copy them in all at once. Sometimes your work depends on reading and writing the global without it changing in between being read and written. In this case, some blocking may be inevitable.
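A sketch putting those pieces together with an anonymous shared mapping and an unnamed, process-shared semaphore (the struct and field names are invented):

#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared {
    sem_t lock;
    long  counter;                      /* the "global" every child adds to */
};

int main(void)
{
    /* MAP_SHARED | MAP_ANONYMOUS memory stays shared across fork(). */
    struct shared *sh = mmap(NULL, sizeof *sh, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(&sh->lock, 1, 1);          /* pshared = 1, initial value 1 */
    sh->counter = 0;

    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {
            sem_wait(&sh->lock);        /* enter the critical section */
            sh->counter += i + 1;
            sem_post(&sh->lock);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    printf("total: %ld\n", sh->counter);   /* 1+2+3+4 = 10 */
    return 0;
}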
It is not clear what platform you are on, but all major PC and server platforms (Windows, Linux/Unix/Mac OS) have support for shared memory and semaphores. The APIs may be different, but the functionality you need is there.

Restarting threads from saved contexts

I am trying to implement a checkpointing scheme for multithreaded applications by using fork. I will take the checkpoint at a safe location such as a barrier. One thread will call fork to replicate the address space and signals will be sent to all other threads so that they can save their contexts and write it to a file.
The forked process will not run initially. Only when restart from checkpoint is required, a signal would be sent to it so it can start running. At that point, the threads who were not forked but whose contexts were saved, will be recreated from the saved contexts.
My first question is whether it is enough to recreate the threads from their saved contexts and run them from there, assuming no locks were held and no signals were pending during the checkpoint, etc. Lastly, how can a thread be created so that it runs from a known context?
What you want is not possible without major integration with the pthreads implementation. Internal thread structures will likely contain their own kernel-space thread ids, which will be different in the restored contexts.
It sounds to me like what you really want is forkall, which is non-trivial to implement. I don't think barriers are useful at all for what you're trying to accomplish. Asynchronous interruption and checkpointing is just as good as synchronized.
If you want to try hacking forkall into glibc, you should start out by looking at the setxid code NPTL uses for synchronizing setuid() calls between threads using signals. The same principle is what's needed to implement forkall, but you'd basically call setjmp instead of setuid in the signal handlers, and then longjmp back into them after making new threads in the child. After that you'd have to patch up the thread structures to have the right pid/tid values, free the excess new stacks that were created, etc.
Edit: Since the setxid code in glibc/NPTL is rather dense reading for someone not familiar with the codebase, you might instead look at the corresponding code I have in musl, called __synccall:
http://git.etalabs.net/cgi-bin/gitweb.cgi?p=musl;a=blob;f=src/thread/synccall.c;h=91ac5eb77322da7393f778da29d35fb3c2def15d;hb=HEAD
It uses a signal to synchronize all threads, then runs a callback sequentially in each thread one-by-one. To implement forkall, you'd want to do something like this prior to the fork, but instead of a callback, simply save jump buffers for each thread except the calling thread (you can't use a callback for this because the return would invalidate the jump buffer you just saved), then perform the fork from the calling thread. After that, you would make N new threads, and have them jump back to the old threads' saved jump buffers, and destroy their new (unneeded) stacks. You'd also need to make the right syscall to update their thread register (e.g. %gs on x86) and tid address.
Then you need to take these ideas and integrate them with glibc's thread allocation and thread stack cache framework. :-)

Why are threads called lightweight processes?

A thread is "lightweight" because most of the overhead has already been accomplished through the creation of its process.
I found this in one of the tutorials.
Can somebody elaborate what it exactly means?
The claim that threads are "lightweight" is - depending on the platform - not necessarily reliable.
An operating system thread has to support the execution of native code, e.g. written in C. So it has to provide a decent-sized stack, usually measured in megabytes. So if you started 1000 threads (perhaps in an attempt to support 1000 simultaneous connections to your server) you would have a memory requirement of 1 GB in your process before you even start to do any real work.
This is a real problem in highly scalable servers, so they don't use threads as if they were lightweight at all. They treat them as heavyweight resources. They might instead create a limited number of threads in a pool, and let them take work items from a queue.
As this means that the threads are long-lived and small in number, it might be better to use processes instead. That way you get address space isolation and there isn't really an issue with running out of resources.
In summary: be wary of "marketing" claims made on behalf of threads. Parallel processing is great (increasingly it's going to be essential), but threads are only one way of achieving it.
Process creation is "expensive", because it has to set up a completely new virtual memory space for the process, with its own address space. "Expensive" here means it takes a lot of CPU time.
Threads don't need to do this, just change a few pointers around, so creating one is much "cheaper" than creating a process. The reason threads don't need this is that they run in the address space and virtual memory of the parent process.
Every process must have at least one thread, so if you think about it, creating a process means creating the process AND creating a thread. Obviously, creating only a thread takes less time and work for the computer.
In addition, threads are "lightweight" because they can interact without the need for inter-process communication. Switching between threads is "cheaper" than switching between processes (again, just moving some pointers around), and communication between threads is cheaper than inter-process communication.
Threads within a process share the same virtual memory space, but each has a separate stack, and possibly "thread-local storage" if implemented. They are lightweight because a context switch is simply a case of switching the stack pointer and program counter and restoring other registers, whereas a process context switch involves switching the MMU context as well.
Moreover, communication between threads within a process is lightweight because they share an address space.
process:
process id
environment
working directory
registers
stack
heap
file descriptors
shared libraries
inter-process communication facilities (pipes, semaphores, queues, shared memory, etc.)
OS-specific resources
thread:
stack
registers
attributes (for the scheduler: priority, policy, etc.)
thread-specific data
OS-specific resources
A process contains one or more threads, and a thread can do anything a process can do. Threads within a process share the same address space, so the cost of communication between them is low: they use the same code section, data section, and OS resources. All of these features make a thread a "lightweight process".
Simply because the threads share a common memory space: memory allocated in the main thread is visible to all the other threads.
In the case of processes, on the other hand, each child process has to be given its own separate memory space.
