Message passing between multiple processes each with many threads in C

I have a setup with multiple (roughly 32) processes, each with 2 threads. I would like to send a message from thread 0 of process A to thread 1 of process B. So, should the message be sent specifically to the thread id or to the process id? If the message is sent to the process, which thread will service the message by default?

There are many possible approaches; search for IPC (inter-process communication). For example, you could use shared memory synchronized by a set of semaphores.

Related

Is it possible to call shmget() in two threads inside one process?

I have one process with two threads, a main thread and an event thread. Can I use shared memory between two different threads in a single process for sending data from one thread to the other?
If I call shmget() in two different threads in the same process, one shmget() call gets the shmid and the other returns -1 and fails, so I need to know whether this is a valid scenario or not.

C - using pthread and waiting for a return value

I am currently working on a multi-client server that uses select() to handle multiple clients. However, when a client sends a message that needs heavy calculations, I have to create a new thread using pthread_create() so that my server can remain responsive to other messages from clients. Once the calculation is done for that client, I need to be able to return a message to the client. But I am not sure how I can know when that thread is finished and how to get its final result. Obviously I can't use pthread_join(), as that blocks my server program while the new thread runs. So does C offer a function that I can use to get the end result of that child thread? I would like to avoid using global variables as well.
You can just check whether the thread has finished before joining it from the main thread (which will be non-blocking).
You can see how to do that here: How do you query a pthread to see if it is still running?
Otherwise, you can probably just send the answer back from the child thread itself; you can pass the connection information as a parameter of the thread function.
If you want the child thread to wake up the thread that is waiting in select() when it has finished processing, you can use pipe() to create a pipe. The thread calling select() adds the read side of the pipe to its file descriptor set, and the child thread writes to the write side of the pipe when it has finished its work.
You can even have it send the result over the pipe, if the result isn't too large.
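As a rough sketch of that technique (the worker function, the result struct, and the fixed result value are placeholders invented for the example, and error handling plus the real client sockets are omitted):

    /* Wake a select() loop from a worker thread via a pipe. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/select.h>

    static int wake_pipe[2];            /* [0] = read end, [1] = write end */

    struct worker_arg {
        int client_fd;                  /* which client the result belongs to */
        int result;
    };

    static void *worker(void *p)
    {
        struct worker_arg *a = p;
        a->result = 42;                 /* stand-in for the heavy calculation */
        /* Write the pointer itself: small and atomic (well below PIPE_BUF). */
        write(wake_pipe[1], &a, sizeof a);
        return NULL;
    }

    int main(void)
    {
        pipe(wake_pipe);

        struct worker_arg *job = malloc(sizeof *job);
        job->client_fd = -1;            /* would be the real client socket */
        pthread_t tid;
        pthread_create(&tid, NULL, worker, job);
        pthread_detach(tid);

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(wake_pipe[0], &rfds);
            /* The real server would also FD_SET its listening/client sockets. */
            select(wake_pipe[0] + 1, &rfds, NULL, NULL, NULL);

            if (FD_ISSET(wake_pipe[0], &rfds)) {
                struct worker_arg *done;
                read(wake_pipe[0], &done, sizeof done);
                printf("worker finished, result = %d\n", done->result);
                free(done);
                break;
            }
        }
        return 0;
    }

The main thread never blocks on the worker; it just sees the pipe become readable and picks up the finished job, so pthread_join() is not needed at all (hence the pthread_detach()).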

Can many threads send over a single ØMQ socket when mutexes are used?

The documentation of ØMQ mentions:
Individual ØMQ sockets are not thread safe except in the case where full memory barriers are issued when migrating a socket from one thread to another.
What exactly is meant by "full memory barriers?" Can I have multiple threads send over the same ØMQ socket if I synchronize this with mutexes?
As Ulrich has said, yes, you can synchronise access to a single socket from multiple threads using mutexes, but really, why would you want to do that?
It's normally considered good practice to only access a socket from a single thread, and synchronise between threads using messages. Something like this:
Worker thread 1
                \
Worker thread 2 -> Control thread -> msg out
                /
Worker thread 3
where only the control thread can send messages directly over the socket. Messages from the worker threads would be sent to the control thread over an inproc zmq socket that you would create. The control thread would process just one message at a time which avoids the need for the mutexes, provided the workers have no shared state.
Message based designs are easier to implement and debug, and much easier to maintain than designs using mutexes. If you can change the design to do that, I'd advise doing so.
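A minimal sketch of that layout using the libzmq C API might look as follows; the inproc endpoint name, the PUSH/PULL and PUB socket types, the TCP port, and the fixed worker count are all choices invented for the example, and error checking is omitted:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <zmq.h>

    static void *ctx;                       /* one context shared by all threads */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        void *push = zmq_socket(ctx, ZMQ_PUSH);
        zmq_connect(push, "inproc://to_control");

        char msg[64];
        snprintf(msg, sizeof msg, "result from worker %ld", id);
        zmq_send(push, msg, strlen(msg), 0);

        zmq_close(push);
        return NULL;
    }

    int main(void)
    {
        ctx = zmq_ctx_new();

        /* The control thread (main, here) owns both sockets.  Bind the
         * inproc endpoint before starting the workers that connect to it. */
        void *from_workers = zmq_socket(ctx, ZMQ_PULL);
        zmq_bind(from_workers, "inproc://to_control");

        /* The outward-facing socket that only this thread ever touches. */
        void *out = zmq_socket(ctx, ZMQ_PUB);
        zmq_bind(out, "tcp://*:5556");

        pthread_t t[3];
        for (long i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);

        for (int n = 0; n < 3; n++) {
            char buf[64];
            int len = zmq_recv(from_workers, buf, sizeof buf - 1, 0);
            buf[len] = '\0';
            zmq_send(out, buf, len, 0);     /* forward one message at a time */
        }

        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        zmq_close(out);
        zmq_close(from_workers);
        zmq_ctx_term(ctx);
        return 0;
    }

No mutexes are needed anywhere: each socket is only ever used from the thread that created it, and the inproc transport does the cross-thread hand-off.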
Acquiring a mutex implies a memory barrier. This basically means that memory operations must not be reordered in a way that would cross this operation. Summary: yes, use a mutex to protect access to the ZMQ socket and you're fine.
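If you do take the mutex route, the whole trick is just to take the same lock around every use of the shared socket. A tiny sketch (socket setup and error handling elided):

    #include <pthread.h>
    #include <string.h>
    #include <zmq.h>

    static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Call this from any thread; the lock/unlock pair provides the barriers. */
    void send_locked(void *shared_socket, const char *msg)
    {
        pthread_mutex_lock(&sock_lock);
        zmq_send(shared_socket, msg, strlen(msg), 0);
        pthread_mutex_unlock(&sock_lock);
    }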

TASK_UNINTERRUPTIBLE and process threads in linux kernel development using C

I have a running process which has created multiple user mode threads. If the kernel changes the state of the process to TASK_UNINTERRUPTIBLE (or TASK_INTERRUPTIBLE) do the threads created by the process automatically get suspended?
This is not a homework question. I'm reading an operating systems book which describes how a semaphore is implemented. In their implementation, the semaphore struct maintains a linked list of processes currently waiting for the semaphore. From what I've learned so far, such a semaphore could only be used to synchronize processes, not threads. Correct? The processes in the linked list are put into a TASK_INTERRUPTIBLE state until the semaphore is available, at which point one of them is woken up by setting its state to TASK_RUNNING.
In Linux each thread is a separate task running within a process scope. See /proc/self/task/. They are even created with the same kernel function as a new process. Threads in Linux originated as "lightweight processes".
Each task has a unique task id (tid), similar to the process id (pid), and indeed the master thread (the one executing main()) has the same tid as the process's pid.
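A quick way to see this yourself (assuming Linux; gettid() is called through syscall() here because older glibc versions have no wrapper for it):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *show(void *arg)
    {
        (void)arg;
        printf("thread: pid=%ld tid=%ld\n",
               (long)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        /* For the main thread the two numbers are equal. */
        printf("main:   pid=%ld tid=%ld\n",
               (long)getpid(), (long)syscall(SYS_gettid));
        pthread_t t;
        pthread_create(&t, NULL, show, NULL);
        pthread_join(t, NULL);
        return 0;
    }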
The only functional difference in Linux between threads and processes is that all threads (tasks) of a process share all process resources apart from:
scheduling parameters, including the task state (TASK_UNINTERRUPTIBLE, TASK_INTERRUPTIBLE)
the stack
the task id (the main() thread's tid is the process pid, which is what identifies the process)
So TASK_INTERRUPTIBLE can be applied to each thread individually.
As such, semaphores are perfectly valid to use for synchronising threads. In that case, if one thread blocks on a semaphore, it is just that one thread that blocks, not the whole process.

Synchronize two processes using two different states

I am trying to work out a way to synchronize two processes which share data.
Basically I have two processes linked using shared memory. I need process A to set some data in the shared memory area, then process B to read that data and act on it.
The sequence of events I am looking to have is:
B blocks waiting for data available signal
A writes data
A signals data available
B reads data
B blocks waiting for data not available signal
A signals data not available
All goes back to the beginning.
In other terms, B would block until it got a "1" signal, get the data, then block again until that signal went to "0".
I have managed to emulate it OK using purely shared memory, but either I busy-wait in a while loop, which consumes 100% of CPU time, or I use a while loop with a nanosleep in it, which sometimes misses some of the signals.
I have tried using semaphores, but I can only find a way to wait for a zero, not for a one, and trying to use two semaphores just didn't work. I don't think semaphores are the way to go.
There will be numerous processes all accessing the same shared memory area, and all processes need to be notified when that shared memory has been modified.
It's basically trying to emulate a hardware data and control bus, where events are edge rather than level triggered. It's the transitions between states I am interested in, rather than the states themselves.
So, any ideas or thoughts?
Linux has its own eventfd(2) facility that you can incorporate into your normal poll/select loop. You can pass the eventfd file descriptor from process to process through a UNIX socket in the usual way, or just inherit it with fork(2).
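A minimal sketch of the eventfd idea, using fork() so the child simply inherits the descriptor; the single poll() call stands in for your normal poll/select loop, and error handling is omitted:

    #include <poll.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int main(void)
    {
        int efd = eventfd(0, 0);              /* counter starts at 0 */

        if (fork() == 0) {                    /* child: the "writer" process */
            uint64_t one = 1;
            write(efd, &one, sizeof one);     /* signal "data available" */
            _exit(0);
        }

        struct pollfd pfd = { .fd = efd, .events = POLLIN };
        poll(&pfd, 1, -1);                    /* would sit next to your other fds */

        uint64_t count;
        read(efd, &count, sizeof count);      /* the read resets the counter */
        printf("woken up, counter was %llu\n", (unsigned long long)count);
        return 0;
    }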
Edit 0:
After re-reading the question, I think one of your options is signals and process groups: start your "listening" processes under the same process group (setpgid(2)), then signal them all with a negative pid argument to kill(2) or sigqueue(2). Again, Linux provides signalfd(2) for polling and avoiding slow signal trampolines.
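For the receiving side, a signalfd-based sketch could look like this; SIGUSR1 is just an example signal, the sender would be another process calling kill() with the negative process-group id, and error handling is omitted:

    #include <poll.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGUSR1);
        sigprocmask(SIG_BLOCK, &mask, NULL);  /* block normal delivery first */

        int sfd = signalfd(-1, &mask, 0);

        struct pollfd pfd = { .fd = sfd, .events = POLLIN };
        printf("pid %ld waiting for SIGUSR1...\n", (long)getpid());
        poll(&pfd, 1, -1);                    /* part of the normal poll loop */

        struct signalfd_siginfo si;
        read(sfd, &si, sizeof si);
        printf("got signal %u from pid %u\n", si.ssi_signo, si.ssi_pid);
        return 0;
    }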
If only 2 processes are involved, you can use a file, shared memory, or even networking to pass the flag or signal. But if there are more processes, there may be suitable solutions that involve modifying the kernel. There is one shared memory area in your question, right? How are the signals passed now?
In Linux, all POSIX synchronization structures (mutexes, condition variables, read-write locks, semaphores) have an option that lets them also be used between processes if they reside in shared memory. For the procedure that you describe, a classic mutex/condition pair seems to fit the job well. Look into the man pages of the ..._init functions for these structures.
Linux also has lower-level facilities such as futex to handle this even more efficiently, but these are probably not the right tools to start with.
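A sketch of the mutex/condition approach with both objects placed in POSIX shared memory; the shared-memory name, the struct layout, and the use of fork() to obtain the second process are invented for the example, error handling and cleanup are mostly omitted, and it needs -pthread (plus -lrt on older glibc):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct shared_area {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int             data_available;   /* the "1"/"0" level from the question */
        int             data;
    };

    int main(void)
    {
        int fd = shm_open("/demo_area", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct shared_area));
        struct shared_area *a = mmap(NULL, sizeof *a, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);

        /* One process initialises both objects as process-shared. */
        pthread_mutexattr_t ma;
        pthread_mutexattr_init(&ma);
        pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&a->lock, &ma);

        pthread_condattr_t ca;
        pthread_condattr_init(&ca);
        pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
        pthread_cond_init(&a->cond, &ca);

        if (fork() == 0) {                    /* process B: waits for the 0 -> 1 edge */
            pthread_mutex_lock(&a->lock);
            while (!a->data_available)
                pthread_cond_wait(&a->cond, &a->lock);
            printf("B read %d\n", a->data);
            a->data_available = 0;            /* acknowledge: "data not available" */
            pthread_cond_signal(&a->cond);
            pthread_mutex_unlock(&a->lock);
            _exit(0);
        }

        /* process A: writes the data and signals availability */
        pthread_mutex_lock(&a->lock);
        a->data = 123;
        a->data_available = 1;
        pthread_cond_signal(&a->cond);
        while (a->data_available)             /* wait for B's 1 -> 0 acknowledgement */
            pthread_cond_wait(&a->cond, &a->lock);
        pthread_mutex_unlock(&a->lock);

        shm_unlink("/demo_area");
        return 0;
    }

Because B sleeps inside pthread_cond_wait() there is no busy loop, and since the flag is only changed under the mutex, the 0/1 transitions cannot be missed.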
1 Single Reader & Single Writer
This can be implemented using semaphores.
In the POSIX semaphore API, sem_wait() blocks while the semaphore's value is zero; once it is incremented with sem_post() from the other process, the wait completes (and the value is decremented again).
In this case you have to use two semaphores for synchronization.
process 1 (reader):
    sem_wait(sem1);    /* block until the writer has posted "data available" */
    .......            /* read the shared data */
    sem_post(sem2);    /* tell the writer it may write again */

process 2 (writer):
    sem_wait(sem2);    /* block until the reader has consumed the data */
    .......            /* write the shared data */
    sem_post(sem1);    /* announce "data available" */
In this way you can achieve synchronization in shared memory.
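A runnable version of that skeleton using named POSIX semaphores and fork(); sem1 and sem2 from the skeleton appear here as sem_full (initial value 0, "data available") and sem_empty (initial value 1, "writer may go"), the semaphore names are made up for the example, and error handling is omitted:

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        sem_t *sem_full  = sem_open("/demo_full",  O_CREAT, 0600, 0);
        sem_t *sem_empty = sem_open("/demo_empty", O_CREAT, 0600, 1);

        if (fork() == 0) {                 /* reader process */
            for (int i = 0; i < 3; i++) {
                sem_wait(sem_full);        /* block until data is available */
                printf("reader: got item %d\n", i);
                sem_post(sem_empty);       /* tell the writer the slot is free */
            }
            _exit(0);
        }

        for (int i = 0; i < 3; i++) {      /* writer process */
            sem_wait(sem_empty);           /* wait until the reader has consumed */
            printf("writer: produced item %d\n", i);
            sem_post(sem_full);            /* announce "data available" */
        }

        wait(NULL);
        sem_unlink("/demo_full");
        sem_unlink("/demo_empty");
        return 0;
    }

The initial values are what make the ping-pong work: the writer may enter its critical section once before the reader has ever posted, and from then on the two processes strictly alternate.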
