I have a resource manager that handles multiple TCP connections. Each connection runs in its own pthread. How can I send data from the resource manager to all of these threads? Or, even better: how can I figure out which thread I have to send a given command to?
For example: I have two threads, one with pid 3333 and one with pid 4444. The user sends a task to program a board (it is a resource manager that manages FPGA boards). The resource manager picks a board from a list in which this pid is also saved. The program command should then be sent to the thread with this pid or, as I was thinking at first, to all of the threads, which then decide themselves whether to act on it. The protocol looks like this: <pid>#<board-id>#<file>
I open two pipes in main.c (one for writing to the threads and one for reading from them) and pass them as an argument to the listening thread (the forthread struct).
main.c
// open Pipes to SSL
int rmsslpipe[2];
int sslrmpipe[2];
if (pipe(rmsslpipe) == -1) {
    writelog(LOGERROR, "main: could not create RM-SSL reading pipe");
    exit(1);
}
if (pipe(sslrmpipe) == -1) {
    writelog(LOGERROR, "main: could not create SSL-RM reading pipe");
    exit(1);
}
int rmtosslserver = rmsslpipe[1];
int sslservertorm = sslrmpipe[0];
// start SSL-Server as a pthread
pthread_t thread;
forthread* ft = malloc(sizeof(forthread));
ft->rmtosslserver = rmsslpipe[0];
ft->sslservertorm = sslrmpipe[1];
pthread_mutex_t ftmutex;
pthread_mutex_init(&ftmutex, NULL);
ft->mutex = ftmutex;
pthread_create(&thread, NULL, startProgramserver, (void*) ft);
This thread now listens for new connections, and whenever one arrives it creates a new thread with the forthread struct as its argument. That new thread is where the action happens :)
void* startProgramserver(void* ft) {
    int sock, s;
    forthread* f = (forthread*) ft;
    // open TCP-Socket
    sock = tcp_listen();
    while(1) {
        if ((s = accept(sock, 0, 0)) < 0) {
            printf("Problem accepting");
            // try again
            sleep(60);
            continue;
        }
        writelog(LOGNOTE, "New SSL-Connection accepted");
        f->socket = s;
        pthread_t thread;
        pthread_create(&thread, NULL, serveClient, (void*) f);
    }
    exit(0);
}
This thread then initializes the connection, gets some information from the client, and waits for new commands from the resource manager.
n=read(f->rmtosslserver, bufw, BUFSIZZ);
But this fails as soon as there is more than one thread. How can I handle that?
If you have one thread per board, the "pid" shouldn't be needed in the command -- you just need a way to find the right thread (or queue, or whatever) for the specified board.
You could keep a list of your forthread structures, and include the board ID in the structure. Also include a way of passing commands; this could be a pipe, but you may as well use some sort of queue or list instead. That way you use one pipe (or other mechanism) per thread instead of a single shared one, and can find the right one for each board by searching the forthread list for the one with the right board ID. Just be sure to protect any parts of the structure that may be modified while the thread runs with a mutex.
The problem with using a single pipe as you've suggested is that only one thread will get each command -- if it's the wrong one, too bad; the command is gone.
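A minimal sketch of that idea (the field names, list layout, and the send_command() helper are assumptions for illustration, not the OP's actual definitions): each connection thread owns its own pipe, and the resource manager searches the list for the matching board ID before writing a command.
#include <pthread.h>
#include <string.h>
#include <unistd.h>

typedef struct forthread {
    int socket;               /* client socket served by this thread                 */
    int board_id;             /* board handled by this connection                    */
    int cmdpipe[2];           /* [0] read end (thread), [1] write end (manager)      */
    struct forthread *next;   /* next entry in the global list                       */
} forthread;

static forthread *threads = NULL;   /* head of the list of connection threads        */
static pthread_mutex_t threads_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by the resource manager: write a command to the thread that
   owns the given board. Returns 0 on success, -1 if no such board.    */
int send_command(int board_id, const char *cmd)
{
    int fd = -1;
    pthread_mutex_lock(&threads_lock);
    for (forthread *t = threads; t != NULL; t = t->next) {
        if (t->board_id == board_id) {
            fd = t->cmdpipe[1];
            break;
        }
    }
    pthread_mutex_unlock(&threads_lock);
    if (fd < 0)
        return -1;
    return write(fd, cmd, strlen(cmd)) < 0 ? -1 : 0;
}
Each connection thread would then block on read(t->cmdpipe[0], ...) and only ever sees commands meant for its own board.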
The answer is yes, I would use a list of them. However, can I open more than one pipe even when the PC is very slow? Two pipes for two connections.
Related
I'm building a project in C (using OpenWrt as the OS) to upload files to an FTP server, and I'm using MQTT for the incoming data. For each topic I subscribe to, I save the data and then upload it to the FTP server, and to keep things running smoothly, each time I need to upload a file I just spawn a thread to do the job.
To make sure the program doesn't run too many threads, each topic is allowed to create only one thread, and I'm using a flag variable (like a mutex, but not a pthread_mutex_t, because I don't need to block a thread; I want to skip that upload and move on to the next file). I thought this technique made me safe, but after running the program for about 15 minutes I get error 11, "Resource temporarily unavailable", when the program tries to create a thread with pthread_create.
Here is one of my attempts to figure out what the problem could be:
I used pthread_join(), which is not an option in my situation, just to make sure that each thread finishes and isn't stuck in a permanent loop. The program ran for more than an hour and the error didn't show up again, and of course each thread finished as intended.
I'm also 90% sure that each topic creates only one thread, and the next one is created only after the previous one has finished (I was tracking the status of the flag before and after thread creation).
I set the maximum number of threads in /proc/sys/kernel/threads-max to 2000 (2000 is more than enough, since I don't have that many topics).
upload function (this will create the thread):
void uploadFile(<args...>, bool* locker_p){
    *locker_p = true;
    args->uploadLocker_p = locker_p;
    <do something here>
    pthread_t tid;
    int error = pthread_create(&tid, NULL, uploadFileThread, (void*)args);
    if(0 != error){
        printf("Couldn't run thread,(%d) => %s\n", error, strerror(error));
    }
    else{
        printf("Thread %d\n", tid);
    }
}
upload thread:
void *uploadFileThread(void *arg){
    typeArgs* args = (typeArgs*)arg;
    <do something like upload the file>
    *(args->uploadLocker_p) = false;
    free(args);
    return NULL;
    //pthread_exit(0);
}
The default stack size for the created threads is eating too much virtual memory.
Essentially, the kernel is telling your process that it has so much virtual memory already in use that it doesn't dare give it any more, because there isn't enough RAM and swap to back it up if the process were to suddenly use it all.
To fix this, create an attribute that limits the per-thread stack to something sensible. If your threads do not use arrays as local variables and do not recurse deeply, then 2*PTHREAD_STACK_MIN (from <limits.h>) is a good size.
The attribute is not consumed by the pthread_create() call, it is just a configuration block, and you can use the same one for any number of threads you create, or create a new one for each thread.
Example:
pthread_attr_t attrs;
pthread_t tid;
int err;
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 2 * PTHREAD_STACK_MIN);
err = pthread_create(&tid, &attrs, uploadFileThread, (void *)args);
pthread_attr_destroy(&attrs);
if (err) {
    /* Failed, errno in err; use strerror(err) */
} else {
    /* Succeeded */
}
Also remember that if your uploadFileThread() allocates memory, it will not be freed automatically when the thread exits. It looks like OP already knows this (as they have the function free the argument structure when it's ready to exit), but I thought it a good idea to point it out.
Personally, I like to use a thread pool instead. The idea is that the upload workers are created beforehand, and they'll wait for a new job. Here is an example:
pthread_mutex_t workers_lock;
pthread_mutex_t workers_wait;
volatile struct work *workers_work;
volatile int workers_idle;
volatile sig_atomic_t workers_exit = 0;
where struct work is a singly-linked list protected by workers_lock, workers_idle is initialized to zero and incremented when waiting for new work, workers_wait is a condition variable signaled when new work arrives under workers_lock, and workers_exit is a counter that when nonzero, tells that many workers to exit.
A worker would basically be something along the lines of
void worker_do(struct work *job)
{
    /* Whatever handling a struct work needs ... */
}

void *worker_function(void *payload __attribute__((unused)))
{
    /* Grab the lock. */
    pthread_mutex_lock(&workers_lock);

    /* Job loop. */
    while (!workers_exit) {
        if (workers_work) {
            /* Detach first work in chain. */
            struct work *job = workers_work;
            workers_work = job->next;
            job->next = NULL;

            /* Work is done without holding the mutex. */
            pthread_mutex_unlock(&workers_lock);
            worker_do(job);
            pthread_mutex_lock(&workers_lock);
            continue;
        }

        /* We're idle, holding the lock. Wait for new work. */
        ++workers_idle;
        pthread_cond_wait(&workers_wait, &workers_lock);
        --workers_idle;
    }

    /* This worker exits. */
    --workers_exit;
    pthread_mutex_unlock(&workers_lock);
    return NULL;
}
The connection handling process can use idle_workers() to check the number of idle workers, and either grow the worker thread pool, or reject the connection as being too busy. The idle_workers() is something like
static inline int idle_workers(void)
{
    int result;
    pthread_mutex_lock(&workers_lock);
    result = workers_idle;
    pthread_mutex_unlock(&workers_lock);
    return result;
}
Note that each worker only holds the lock for very short durations, so the idle_workers() call won't block for long. (pthread_cond_wait() atomically releases the lock when it starts waiting for a signal, and only returns after it re-acquires the lock.)
When waiting for a new connection in accept(), set the socket nonblocking and use poll() to wait for new connections. If the timeout passes, examine the number of workers, and reduce them if necessary by calling reduce_workers(1) or similar:
void reduce_workers(int number)
{
    pthread_mutex_lock(&workers_lock);
    if (workers_exit < number) {
        workers_exit = number;
        pthread_cond_broadcast(&workers_wait);
    }
    pthread_mutex_unlock(&workers_lock);
}
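In sketch form, the accept loop described above might look roughly like this (the idle threshold, the 5-second timeout and the listening-socket setup are arbitrary assumptions; idle_workers() and reduce_workers() are the helpers shown earlier):
#include <poll.h>
#include <sys/socket.h>

/* Prototypes for the pool helpers defined above. */
int idle_workers(void);
void reduce_workers(int number);

void accept_loop(int listen_fd)
{
    struct pollfd pfd = { .fd = listen_fd, .events = POLLIN };

    for (;;) {
        int ready = poll(&pfd, 1, 5000);        /* wait up to 5 seconds      */
        if (ready == 0) {
            /* Timeout: no new connection. Shrink the pool if many are idle. */
            if (idle_workers() > 4)             /* "4" is an arbitrary limit */
                reduce_workers(1);
            continue;
        }
        if (ready < 0)
            continue;                           /* interrupted; just retry   */

        int client = accept(listen_fd, NULL, NULL);
        if (client >= 0) {
            /* Hand the connection (or the resulting work) to the pool here. */
        }
    }
}
Setting the listening socket non-blocking before this loop, as suggested above, avoids accept() hanging in the rare case where the pending connection disappears between poll() and accept().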
To avoid having to call pthread_join() for each thread – and we really don't even know which threads have exited here! – to reap/free the kernel and C library metadata related to the thread, the worker threads need to be detached. After creating a worker thread tid successfully, just call pthread_detach(tid);.
When a new connection arrives and it is one that should be delegated to the worker threads, you can (but do not have to) check the number of idle threads, create new worker threads, or reject the upload; or you can simply append the work to the queue so that it will "eventually" be handled.
I would need some help with some C code.
Basically I have n processes that execute some code. Once they're almost done, I'd like the "Manager Process" (which is the main function) to send each of the n processes an int variable, which may be different for every process.
My idea was to register signal(SIGALRM, handler_function) once all the processes have started. When a process is almost done, it calls kill(getpid(), SIGSTOP) in order to wait for the Manager Process.
After SIM_TIME seconds have passed, handler_function sends the int variable on a message queue and then uses kill(process_pid, SIGCONT) to wake up the waiting processes. After being woken up, those processes should receive the int variable from the message queue, print it and simply terminate, letting the Manager Process take control again.
Here's some code:
/**
* Child Process creation using fork() system call
* Parent Process allocates and initializes necessary variables in shared memory
* Child Process executes Student Process code defined in childProcess function
*/
pid_t runChild(int index, int (*func)(int index))
{
    pid_t pid;
    pid = fork();
    if (pid == -1)
    {
        printf(RED "Fork ERROR!\n" RESET);
        exit(EXIT_FAILURE);
    }
    else if (pid == 0)
    {
        int res = func(index);
        return getpid();
    }
    else
    {
        /* INSIGNIFICANT CODE */
        currentStudent = createStudent(pid);
        currentStudent->status = FREE;
        students[index] = *currentStudent;
        currentGroup = createGroup(index);
        addMember(currentStudent, currentGroup);
        currentGroup->closed = FALSE;
        groups[index] = *currentGroup;
        return pid;
    }
}
Code executed by each Process
/**
* Student Process Code
* Each Student executes this code
*/
int childProcess(int index)
{
    /* NOTICE: showing only the relevant part of the code */
    printf("Process Index %d has almost done, waiting for manager!\n", index);
    /* PROGRAM GETS STUCK HERE! */
    kill(getpid(), SIGSTOP);
    /* mex is already defined; it's a struct implementing the message queue message */
    receiveMessage(mexId, mex, getpid());
    printf(GREEN "Student %d has received variable %d\n" RESET, getpid(), mex->variable);
}
Handler Function:
/**
 * Handler function
 * Will be launched when SIM_TIME is reached
 */
void end_handler(int sig)
{
    if (sig == SIGALRM)
    {
        usleep(150000);
        printf(RED "Time's UP!\n" RESET);
        printGroups();
        for (int i = 0; i < POP_SIZE; i++) {
            mex->mtype = childPids[i];
            mex->variable = generateInt(18, 30);
            sendMessage(mexId, mex);
            // childPids is an array containing PIDs of all previously launched processes
            kill(childPids[i], SIGCONT);
        }
    }
}
I hope my code is understandable.
I have an issue though: with the code provided, the entire program gets stuck at the kill(getpid(), SIGSTOP) system call.
I also tried running ps in a terminal, and no active processes are detected.
I think handler_function doesn't issue the kill(childPids[i], SIGCONT) call for some reason.
Any idea how to solve this problem?
Thank you
You might want to start by reading the manual page for mq_overview (man mq_overview). It describes a portable and flexible communication mechanism between processes that supports both synchronous and asynchronous use.
In your approach, there is a general problem of “how does one process know if another is waiting”. If the process hasn’t stopped itself, the SIGCONT is ignored, and when it subsequently suspends itself, nobody will continue it.
In contrast, message-based communication between the two can be viewed as a little language. For simple exchanges (such as yours), the completeness of the grammar can be readily hand checked. For more complex ones, state machines or even nested state machines can be constructed to analyze their behaviour.
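For reference, a compressed POSIX message queue exchange looks roughly like this (the queue name, message size and the single-program layout are illustrative assumptions; the manager and children would normally be separate processes, and this does not mirror the OP's System V-style sendMessage/receiveMessage helpers; link with -lrt on glibc):
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* Manager creates the queue; children would open it by name. */
    mqd_t mq = mq_open("/manager_q", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    /* Manager side: send the per-child integer as text. */
    char msg[64];
    snprintf(msg, sizeof msg, "%d", 25);
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* Child side: block until a message arrives (buffer must be >= mq_msgsize). */
    char buf[64];
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
        printf("received %s\n", buf);

    mq_close(mq);
    mq_unlink("/manager_q");
    return 0;
}
Because mq_receive() simply blocks until a message is available, the child no longer needs the SIGSTOP/SIGCONT dance, which removes the race described above.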
I am working on a project in which I need to read from 80 or more clients, continuously write their output to a file, and then read this new data for another task. My question is: what should I use, select or multithreading?
I also tried multithreading with read/fgets and write/fputs calls, but since these are blocking calls and only one operation can be performed at a time, that approach did not seem feasible. Any idea is much appreciated.
Update 1: I have tried to implement this using a condition variable. I got it working, but it reads and writes only one item at a time: when another client tries to write, it cannot do so until I quit the first thread. I do not understand this; it should work now. What mistake am I making?
Update 2: Thanks all. I managed to get this model implemented using a mutex and condition variable.
The updated code is below:
header file:
char *mailbox ;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER ;
pthread_cond_t writer = PTHREAD_COND_INITIALIZER;
int main(int argc, char *argv[])
{
    pthread_t t1, t2;
    pthread_attr_t attr;
    int fd, sock, *newfd;
    struct sockaddr_in cliaddr;
    socklen_t clilen;
    void *read_file();
    void *update_file();

    // making a server socket
    if ((fd = make_server(atoi(argv[1]))) == -1)
        oops("Unable to make server", 1)

    // detaching threads
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    // opening thread for reading
    pthread_create(&t2, &attr, read_file, NULL);

    while (1)
    {
        clilen = sizeof(cliaddr);
        // accepting request
        sock = accept(fd, (struct sockaddr *)&cliaddr, &clilen);
        // error comparison against failure of request and INT
        if (sock == -1 && errno != EINTR)
            oops("accept", 2)
        else if (sock == -1 && errno == EINTR)
            oops("Pressed INT", 3)
        newfd = (int *)malloc(sizeof(int));
        *newfd = sock;
        // creating thread per request
        pthread_create(&t1, &attr, update_file, (void *)newfd);
    }
    free(newfd);
    return 0;
}
void *read_file(void *m)
{
    pthread_mutex_lock(&lock);
    while (1)
    {
        printf("Waiting for lock.\n");
        pthread_cond_wait(&writer, &lock);
        printf("I am reading here.\n");
        printf("%s", mailbox);
        mailbox = NULL;
        pthread_cond_signal(&writer);
    }
}
void *update_file(int *m)
{
    int sock = *m;
    int fs;
    int nread;
    char buffer[BUFSIZ];

    if ((fs = open("database.txt", O_RDWR)) == -1)
        oops("Unable to open file", 4)

    while (1)
    {
        pthread_mutex_lock(&lock);
        write(1, "Waiting to get writer lock.\n", 29);
        if (mailbox != NULL)
            pthread_cond_wait(&writer, &lock);
        lseek(fs, 0, SEEK_END);
        printf("Reading from socket.\n");
        nread = read(sock, buffer, BUFSIZ);
        printf("Writing in file.\n");
        write(fs, buffer, nread);
        mailbox = buffer;
        pthread_cond_signal(&writer);
        pthread_mutex_unlock(&lock);
    }
    close(fs);
}
I think that for the networking portion of things, either thread-per-client or multiplexed single-threaded would work fine.
As for the disk I/O, you are right that disk I/O operations are blocking operations, and if your data throughput is high enough (and/or your hard drive is slow enough), they can slow down your network operations if the disk I/O is done synchronously.
If that is an actual problem for you (and you should measure first to verify that it really is a problem; no point complicating things if you don't need to), the first thing I would try to ameliorate the problem would be to make your file's output-buffer larger by calling setbuffer. With a large enough buffer, it may be possible for the C runtime library to hide any latency caused by disk access.
If larger buffers aren't sufficient, the next thing I'd try is creating one or more threads dedicated to reading and/or writing data. That is, when your network thread wants to save data to disk, rather than calling fputs()/write() directly, it allocates a buffer containing the data it wants written, and passes that buffer to the IO-write thread via a (mutex-protected or lockless) FIFO queue. The I/O thread then pops that buffer out of the queue, writes the data to the disk, and frees the buffer. The I/O thread can afford to be occasionally slow in writing because no other threads are blocked waiting for the writes to complete. Threaded reading from disk is a little more complex, but basically the IO-read thread would fill up one or more buffers of in-memory data for the network thread to drain; and whenever the network thread drained some of the data out of the buffer, it would signal the IO-read thread to refill the buffer up to the top again. That way (ideally) there is always plenty of input-data already present in RAM whenever the network thread needs to send some to a client.
Note that the multithreaded method above is a bit tricky to get right, since it involves inter-thread synchronization and communication; so don't do it unless there isn't any simpler alternative that will suffice.
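As a rough illustration of the write side of that scheme (the names and queue layout are assumptions, not a drop-in design): the network thread hands a heap-allocated buffer to a dedicated writer thread through a mutex-protected FIFO, so the slow write() never blocks network handling.
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct wbuf {                         /* one pending disk write                */
    char  *data;
    size_t len;
    struct wbuf *next;
};

static struct wbuf *wq_head = NULL, *wq_tail = NULL;
static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wq_cond = PTHREAD_COND_INITIALIZER;

/* Network thread: hand a buffer to the disk-writer thread and return at once. */
void enqueue_write(char *data, size_t len)
{
    struct wbuf *b = malloc(sizeof *b);
    b->data = data;
    b->len = len;
    b->next = NULL;

    pthread_mutex_lock(&wq_lock);
    if (wq_tail)
        wq_tail->next = b;
    else
        wq_head = b;
    wq_tail = b;
    pthread_cond_signal(&wq_cond);
    pthread_mutex_unlock(&wq_lock);
}

/* Disk-writer thread: drain the queue forever; arg is the output descriptor. */
void *writer_thread(void *arg)
{
    int fd = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&wq_lock);
        while (!wq_head)
            pthread_cond_wait(&wq_cond, &wq_lock);
        struct wbuf *b = wq_head;
        wq_head = b->next;
        if (!wq_head)
            wq_tail = NULL;
        pthread_mutex_unlock(&wq_lock);

        write(fd, b->data, b->len);   /* slow disk I/O done outside the lock   */
        free(b->data);
        free(b);
    }
    return NULL;
}
The read side would work the same way in reverse, with the I/O thread keeping the buffer topped up and the network thread signalling when it has drained part of it.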
Either select/poll or multithreading is OK, as long as your program solves the problem.
I'd guess your program will become I/O-bound as the number of clients grows, since you read from and write to disk frequently. So having multiple threads doing the I/O would not speed things up; polling may be the better choice then.
You can set a socket that you get from accept to be non-blocking. Then it is easy to use select to find out when there is data, read the number of bytes that are available and process them.
With (only) 80 clients, I see no reason to expect any significant difference from using threads unless you get very different amounts of data from different clients.
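A small sketch of that non-blocking-socket approach (the helper names and the one-second timeout are hypothetical, not tied to the OP's code):
#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

/* Make an accepted socket non-blocking. */
int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Wait up to one second for data on fd; returns >0 if it is readable. */
int wait_readable(int fd)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
After accept(), you would call set_nonblocking() on the new descriptor and then use wait_readable() (or one select over all client sockets) to decide when to read and how many bytes are available to process.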
I have a TCP chat program: server.c and client.c.
The server sits in a while(1) loop and uses select to detect clients wanting to connect on its socket. A new thread is then created for the accepted client, and its socket descriptor is passed as the thread argument: pthread_create(&thread, NULL, do_something, (void *) &socket_descriptor);
When receiving a message from a client, the server should send this message to all connected clients (not implemented yet).
Now I'm wondering how to do this. I absolutely need each accepted connection to be in a thread.
I was thinking of using a select inside the do_something as well; will select detect if data is incoming on the socket descriptor? Or would you do it another way?
edit: added code
my code:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include "tcp_comm.h"
#include <sys/time.h>
#include <sys/types.h>
#define BUFSIZE 1024
#define PORT 1234
void *do_something(void *a);
int main (void){
    Socket server = tcp_passive_open( PORT );
    MySocket *s = (MySocket *)server;
    printf("Server socked_id (main): %i", s->sd);
    pthread_t thread;

    fd_set active_socketDescriptors, read_socketDescriptors;
    FD_ZERO(&active_socketDescriptors);
    FD_SET(s->sd, &active_socketDescriptors);

    while (1){
        read_socketDescriptors = active_socketDescriptors;
        if (select(FD_SETSIZE, &read_socketDescriptors, NULL, NULL, NULL) < 0){
            perror("select");
            exit(EXIT_FAILURE);
        }
        int i;
        for (i = 0; i < FD_SETSIZE; ++i){
            if (FD_ISSET(i, &read_socketDescriptors)){
                if (i == s->sd){
                    Socket client = tcp_wait_for_connection( server );
                    pthread_create(&thread, NULL, do_something, (void *)client);
                    FD_SET(s->sd, &active_socketDescriptors);
                } else {
                }
            }
        }
    }
    tcp_close( server );
    return 0;
}
void *do_something(void *client){
    unsigned char input[BUFSIZE];
    pthread_detach(pthread_self());
    MySocket *s = (MySocket *)client;
    printf("Client socked_id (thread): %i", s->sd);
    int j;

    while (1){
        int nbytes = tcp_receive(client, input, BUFSIZE);
        if (nbytes <= 0){
            if (nbytes == 0){
                /* connection closed by client */
                printf("Client closed connection");
            } else {
                /* other error */
                perror("tcp_receive");
            }
            tcp_close(client);
            /* remove the socket descriptor from the set in main -- BRAINSTORM ABOUT THIS */
        } else {
            /* data incoming */
            printf("\nMessage from client: %s", input);
        }
    }
    return 0;
}
edit 2: reformulation of problem
I have to use threads (not because of the system, which is Linux) because it's mandatory in the assignment to have a thread for each client.
The specific problem I have is that only the main thread can send the data received from each client to all the clients, because only the main thread has access to the set that contains the socket descriptors.
edit 3: this is what I would need to add in each thread, but I can't, because s.thread and s.main live in different places and the thread doesn't know about the main thread's fd set.
for (j = 0; j <= FD_SETSIZE; j++){
    if (FD_ISSET(j, &active_socketDescriptors)){
        if (j != s.thread && j != s.main){
            tcp_send(j, (void*)input, nbytes);
        }
    }
}
edit 4: I solved it this way:
I have a dynamic array list in which I keep the connected clients with their socket descriptors. Inside the server's per-client thread (do_something), the receive blocks until it gets input; this input is then sent to all connected clients by looping through the list and using their socket descriptors. Inside the clients there is one thread listening and one thread sending.
If the client connection sockets are non-blocking, then using e.g. select to wait for a socket to receive data is a possible approach. However, since you already handle the connected sockets in threads, you can keep them blocking and just do a read call on them. The call to read will block until you receive data, which can then be spread to the other threads.
Edit
After better understanding your requirements: you should probably make the sockets non-blocking and use a loop with select and a short timeout. When select times out (i.e. returns 0), check whether there is data to send. If there is, send it, then go back to the select call.
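In outline, that loop might look like this (pending_outgoing(), send_pending() and handle_incoming() are hypothetical placeholders for the chat program's own logic, and the 100 ms timeout is an arbitrary choice):
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical helpers for this client's thread. */
extern int  pending_outgoing(void);              /* anything queued to send? */
extern void send_pending(int sock);
extern void handle_incoming(int sock);

void client_loop(int sock)
{
    for (;;) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 }; /* 100 ms */

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);

        int ready = select(sock + 1, &rfds, NULL, NULL, &tv);
        if (ready == 0) {
            /* Timeout: nothing received, so check the outbound queue. */
            if (pending_outgoing())
                send_pending(sock);
        } else if (ready > 0 && FD_ISSET(sock, &rfds)) {
            handle_incoming(sock);               /* data from this client */
        }
    }
}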
Given your description it might be worth rethinking the architecture of your application. (Unless this has been dictated by limitations on your system). Let me explain this a little more...
By your description, if I understood you correctly, after a client has connected to the server any messages it (the client) sends will be relayed (by the server) to all other clients. So, rather than creating a new thread, why not simply add the newly connected socket to the fd_set of the select? Then, when a message comes in, you can simply relay it to the others.
If you expect a large number of clients for a single server you should see if the poll system call is available on your system (it's just like select but supports monitoring more clients). A good poll/select version ought to out-perform your threaded version.
If you really want to continue with your threaded version, here's one way to accomplish what you are trying to do. When you create the thread for each accepted client, you also create a pipe back to the server thread (adding its read end to the server's select/poll set) and pass the write end to the client thread. Your server thread then not only accepts new connections but relays the messages too.
Although you said that you absolutely must deal with each client in a separate thread, unless you are using a real time operating system, you will probably find that the thread context-switch/synchronization you need to do will soon dominate over the multiplexing overhead of the first solution I suggested. (But since you did not mention an OS I am guessing!)
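A rough sketch of that pipe-per-client idea (struct client, spawn_client() and the buffer size are illustrative assumptions): the client thread forwards whatever it reads from its socket into the pipe, and the server thread adds the pipe's read end to its select/poll set so it can relay the message to everyone else.
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct client {
    int sock;        /* TCP socket for this client                      */
    int to_server;   /* write end of the pipe back to the server thread */
};

/* Client thread: forward everything received on the socket to the server. */
void *client_thread(void *arg)
{
    struct client *c = arg;
    char buf[1024];
    ssize_t n;

    while ((n = read(c->sock, buf, sizeof buf)) > 0)
        write(c->to_server, buf, (size_t)n);

    close(c->to_server);   /* lets the server thread notice the disconnect */
    free(c);
    return NULL;
}

/* Server side, once per accepted connection.
   *server_read_end is the descriptor to add to the server's select set. */
int spawn_client(int sock, int *server_read_end)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;
    *server_read_end = fds[0];

    struct client *c = malloc(sizeof *c);
    c->sock = sock;
    c->to_server = fds[1];

    pthread_t tid;
    if (pthread_create(&tid, NULL, client_thread, c) != 0) {
        free(c);
        return -1;
    }
    pthread_detach(tid);
    return 0;
}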
This is related to your design.
If you only need to implement one or two features for each connected client, then I suggest using a single thread for your server.
If you have to do a lot of work for each connected client, then a multi-threaded design is fine.
However, the question to ask is how to pass the data from the receiving thread to all the others. My suggestion is either:
a) use a message queue to pass data between threads: each thread has one message queue, and each thread listens to both its own socket and this message queue. When a thread receives data from its socket, it sends the data to all the other threads' message queues.
b) use a single global buffer: when incoming data arrives from a socket, put the data into this global buffer and add a tag indicating which connection it came from.
my 2 cents.
gcc (GCC) 4.6.3
c89
Hello,
I am just wondering if this is the best way to handle worker/background threads created by main?
Am I doing this right? This is the first time I have written any multithreaded programs. I just want to make sure I am on the right track, as this will have to be extended to add more threads.
I have one thread for sending a message and another for receiving the message.
Many thanks for any suggestions,
int main(void)
{
    pthread_t thread_send;
    pthread_t thread_recv;
    int status = TRUE;

    /* Start thread that will send a message */
    if(pthread_create(&thread_send, NULL, thread_send_fd, NULL) == -1) {
        fprintf(stderr, "Failed to create thread, reason [ %s ]",
                strerror(errno));
        status = FALSE;
    }

    if(status != FALSE) {
        /* Thread send started ok - join with the main thread when its work is done */
        pthread_join(thread_send, NULL);

        /* Start thread to receive messages */
        if(pthread_create(&thread_recv, NULL, thread_receive_fd, NULL) == -1) {
            fprintf(stderr, "Failed to create thread for receiving, reason [ %s ]",
                    strerror(errno));
            status = FALSE;
            /* Cancel the thread send if it is still running as the thread receive failed to start */
            if(pthread_cancel(thread_send) != 0) {
                fprintf(stderr, "Failed to cancel thread for sending, reason [ %s ]",
                        strerror(errno));
            }
        }
    }

    if(status != FALSE) {
        /* Thread receive started ok - join with the main thread when its work is done */
        pthread_join(thread_recv, NULL);
    }

    return 0;
}
Example of a worker/background thread to send a message, example only
void *thread_send_fd()
{
    /* Send the messages; when done, exit */
    pthread_exit(NULL);
}
The only time when this kind of construct might be justified is if there is only ever one message exchanged and, even then, there may be some problems.
If messages are to be exchanged continually while the app runs, it's more usual to write both threads as loops and never terminate them. This means no continual create/terminate/destroy overhead and no deadlock generator (a.k.a. join). It does have a downside: it means you have to get involved with signals, queues and the like for inter-thread comms, but that is going to happen anyway if you write many multithreaded apps.
Either way, it's usual to start the rx thread first. If you start the tx thread first, there is a possibility that rx data will be returned and discarded before the rx thread starts.
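In skeleton form (the loop bodies are placeholders, not a working sender/receiver): both threads run for the lifetime of the program, and the receiver is started before the sender.
#include <pthread.h>
#include <unistd.h>

void *receive_loop(void *arg)
{
    (void)arg;
    for (;;) {
        /* blocking read()/recv() and message dispatch would go here */
        sleep(1);   /* placeholder so this sketch does not spin */
    }
    return NULL;
}

void *send_loop(void *arg)
{
    (void)arg;
    for (;;) {
        /* wait on an outgoing queue and send would go here */
        sleep(1);   /* placeholder */
    }
    return NULL;
}

int main(void)
{
    pthread_t rx, tx;

    /* Start the receiver first so nothing coming back is lost. */
    pthread_create(&rx, NULL, receive_loop, NULL);
    pthread_create(&tx, NULL, send_loop, NULL);

    /* Threads run for the life of the program; join only at shutdown. */
    pthread_join(tx, NULL);
    pthread_join(rx, NULL);
    return 0;
}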
Is this done once per message? It seems like the call creates a thread to send 1 message and another thread to wait for 1 response. Then the call, and I'm assuming the entire program, just waits for the whole thing to finish. Assuming the receiver cannot do anything until the sender finishes sending, this does absolutely nothing to improve the real or perceived performance of your program. Now to be precise, we would need to know what the sender and the receiver are really doing before we can tell for sure if there is any benefit from this. For any benefit at all, the sender thread and the receiver thread would have to have work that they can do simultaneously....not serially. If the intent is to not make the program wait for the send and the receive transaction, then this does not do that at all.