So I'm doing the barbershop problem, but now I want to make it 'visual' and network-friendly. When the server starts up it waits for clients to connect; each time a client connects it draws a 'P' that travels across the screen up to the barber position. I do this with NCurses and I have no problem with that.
However, I have only managed to draw one client (one 'P') at a time. I'd like to see many clients ('P's) on the screen. The way I have it now, I only use one chair at a time: the client gets attended and exits, then the next client in the accept() queue enters the screen, and so on. It gives the impression that there is no actual concurrency, though there is.
The code is quite extensive, but the fork/socket part is here:
pid_t client;
int p;
RandSeed=8;
listen(connection,90);
while(1){
    remote_dir_size = sizeof(remote_dir);
    //"Awaiting connection..."
    if((connection_client=accept(connection,(struct sockaddr *)&remote_dir,&remote_dir_size))<0){
        console_write("CONNECTION REJECTED!");
        exit(-1);
    }
    //"Connection accepted!"
    client=fork();
    switch(client)
    {
        case -1:
            console_write("Error Forking!!");
            exit(1);
        case 0:
            close(connection); //So that another client can come.
            recvs = recv(connection_client,petition,sizeof(petition),0);
            //console_write(petition);
            //console_write(" moving.");
            move_client();
            //Check for available chairs
            //waiting_client_count++;
            sem_wait(waitingRoom); //wait until a chair is available
            move_client_to_chairs();
            sitting_client_count++;
            redraw_chairs(); //redraw chairs <-- useless since only 1 chair is used at a time :/
            //waiting for barber
            sem_wait(barberChair);
            //barber available, the occupied chair is now free
            sem_post(waitingRoom);
            sitting_client_count--;
            redraw_chairs();
            move_client_to_barber(); //Move 'P' towards barber chair
            sit_client_barber();
            //Wake barber
            sem_post(barberPillow);
            //Wait until barber ends
            sem_wait(seatBelt);
            //release chair
            sem_post(barberChair);
            exit_client();
            exit(0);
        default:
            //barber sleeps until someone wakes him
            sem_wait(barberPillow);
            randwait(5);
            //barber cutting hair
            randwait(5);
            //barber finished
            //free client
            sem_post(seatBelt);
            wait(&p);
    }
}
The complete version of the code is here.
My problem is:
The server starts up fine. When I run a ./client, the server screen starts drawing the 'P' and moves it correctly; the client gets its haircut and exits. But when I run 2 or more ./clients, the server draws the processes one at a time; there is only one client at a time inside the barbershop, as if it were waiting for that one client to exit() before starting the next forked process.
I find this very odd. What am I missing here? Is it a problem with the accept() queue? Should I try a different approach?
The problem is that your parent process is trying to do two different things that need waiting:
"barber sleeps until someone wakes him"
accept()
Without some kind of joint waiting, this architecture does not work. Each piece of code sleeps waiting for a single kind of event, disregarding the other type. For example, when a client comes in, the program will not accept another client until the barber has finished.
There are two directions to solve the problem:
Join the two kinds of waiting: for accept(), you can turn the socket into non-blocking mode and use select() or poll() to wait until an event comes in. For the semaphores this is not so trivial, but if you open a socket between the parent and each child and turn the semaphore handling into network communication, you can have a single select() or poll() waiting on all the sockets (one for receiving new client connections, and one per client for communicating with them) in a single place. You would also have to redesign the semaphore handling and get rid of the sleep there: keep a list of active timers, say, and go back to select() or poll() until the next timer expires (their last parameter can put a limit on the maximum wait time). A minimal sketch of this direction follows the list.
Separate the accept() and the semaphore handling into two processes, i.e. the parent handles the new incoming connections and a child process "simulates" the barber. Actually, the way you tried to implement the "barber", you'd need one child process per barber, because the process is sleeping while simulating the job done by the barber. Or you redesign the "barber" simulation as mentioned in the first part.
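Here is a minimal sketch of the first direction, with all names (listen_fd, child_fd, serve) hypothetical and error handling trimmed: the parent keeps one socketpair() descriptor per forked child, and a single select() covers the listening socket and every child channel at once, so neither kind of event can starve the other.

#include <sys/select.h>

#define MAX_CLIENTS 64

int listen_fd;                 /* the listening socket                  */
int child_fd[MAX_CLIENTS];     /* parent's end of each child socketpair */
int nchildren;

void serve(void)
{
    for (;;) {
        fd_set rfds;
        int maxfd = listen_fd;

        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        for (int i = 0; i < nchildren; i++) {
            FD_SET(child_fd[i], &rfds);
            if (child_fd[i] > maxfd)
                maxfd = child_fd[i];
        }

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;          /* real code: handle EINTR and errors */

        if (FD_ISSET(listen_fd, &rfds)) {
            /* accept() the new client, socketpair() + fork() a child,
               and remember the parent's end in child_fd[] */
        }
        for (int i = 0; i < nchildren; i++) {
            if (FD_ISSET(child_fd[i], &rfds)) {
                /* read the child's "semaphore" message and update the
                   barbershop state; nothing here blocks for long */
            }
        }
    }
}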
Related
I am working on improving the performance of a network application written in C running on Linux systems.
As written, the program reads a packet from a socket interface, does some processing on it, and then adds it to a send queue.
I am pretty new to multi threading programming but I am familiar with the basic concepts (mutex, conditional signals etc).
I am trying to implement a solution where a set of worker threads are passed what is read from the interface and they do the work that follows.
My question is: if the first thread reads the first packet and the second thread reads the second packet, how can I ensure that the packets are added to the send queue in the same order as they were read?
There are lots of ways to solve this, with different trade-offs. Things to consider are whether you want a static number of worker threads, how many worker threads, and how perfect you want the solution to be.
If all worker threads are to receive their data packets directly via a call to read or recv then:
pthread_mutex_lock(&the_mutex);
do
{
    read_size = read(sock, buf, buf_size);
    if (read_size > 0)
    {
        my_count = ++packet_counter;
        break;
    } else
    {
        // figure out how to handle different failures here
    }
} while (1);
pthread_mutex_unlock(&the_mutex);
results = do_work(buf, read_size);
enqueue_results(my_count, results);
This would work, where enqueue_results() puts the results into a priority queue whose key can wrap around (which isn't that difficult: you order by last_sent_count - this_count rather than using this_count directly for the queue ordering).
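For the wrap-around ordering, a one-liner along these lines is enough (seq_before is a hypothetical helper name illustrating the idea above): unsigned subtraction makes the comparison immune to the counter wrapping.

#include <stdint.h>

/* Returns nonzero if packet a should be sent before packet b, given the
 * last sequence number actually sent; unsigned subtraction handles the
 * counter wrapping around. (Hypothetical helper for the idea above.) */
static int seq_before(uint32_t last_sent, uint32_t a, uint32_t b)
{
    return (uint32_t)(a - last_sent) < (uint32_t)(b - last_sent);
}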
Another thread then waits for the next reply in sequence to become ready and sends it.
You could get a lot fancier, but you should give this a try.
Just code what you want. Outbound packets can be inserted into a queue in order, and the sender can wait if the packet it needs to send next isn't at the head of the queue.
When you say you add it to the send queue: does another thread that you control remove it from this send queue to send later, or is that outside your control? If the former, you can use a priority queue for the second queue instead of a traditional first-in-first-out queue. If the key to the priority queue is a global counter, the sender will always pull the smallest/next value. If that smallest value isn't the next value to send, the sending thread can wait until it is. Depending on the priority queue implementation, you can also just peek at the queue to see the next value and then conditionally wait for another insertion. A sketch of such a sender follows.
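A minimal sketch of that sender, with all names hypothetical: it sleeps on a condition variable until the smallest key in the priority queue is exactly the next sequence number due out. It assumes priority-queue helpers pq_empty()/pq_min_key()/pq_pop() exist and that every insertion signals q_cond.

#include <pthread.h>

struct results;                      /* whatever the workers produce    */
int pq_empty(void);                  /* assumed priority-queue helpers  */
unsigned pq_min_key(void);
struct results *pq_pop(void);
void send_results(struct results *r);

pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
unsigned next_to_send = 1;           /* first sequence number handed out */

void *sender_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (pq_empty() || pq_min_key() != next_to_send)
            pthread_cond_wait(&q_cond, &q_lock);
        struct results *r = pq_pop();
        next_to_send++;
        pthread_mutex_unlock(&q_lock);
        send_results(r);             /* do the actual network send unlocked */
    }
    return NULL;
}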
The program could use separate receive and send queues for each thread. The receiving thread would queue packets to the processing threads in round-robin order. Each processing thread would dequeue a packet from its receive queue, process the packet, and queue the processed packet to its send queue. The sending thread would dequeue processed packets from the send queues in round-robin order.
I am writing a simple multi-client server communication program using POSIX threads in C. I am creating a thread every time a new client is connected, i.e. after the accept(...) routine in main().
I have put the accept(...) and the pthread_create(...) inside a while(1) loop so that the server continues to accept clients forever. Now, where should I call the pthread_join(...) routine after a thread exits?
More Info: Inside the thread's "start routine", I have used the poll() and then recv() functions, again inside a while(1) loop, to continuously poll for availability of the client and receive data from the client, respectively. The thread exits in the following cases:
1) Either poll() returns some error event or client hangs up.
2) recv() returns a value <= 0.
Language: C
Platform: Suse Linux Enterprise Server 10.3 (x86_64)
First up, starting a new thread for each client is probably wasteful and surely won't scale very far. You should try a design where a thread handles more than one client (i.e. calls poll on more than one socket). Indeed, that's what poll(2), epoll, etc. were designed for.
That being said, in this design you likely needn't join the threads at all. You're not mentioning any reason why the main thread would need information from a thread that finished. Put another way, there's no need for joining.
Just set them as "detached" (pthread_detach or pthread_attr_setdetachstate) and they will be cleaned up automatically when their function returns.
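For example, a minimal sketch of spawning each client handler already detached (spawn_client_thread and handle_client are hypothetical names for your existing accept-loop helper and per-client routine):

#include <pthread.h>

void *handle_client(void *arg);      /* your per-client poll()/recv() loop */

/* Hypothetical helper: start a client thread that is already detached,
 * so its resources are reclaimed automatically when it returns. */
int spawn_client_thread(int client_fd)
{
    pthread_t tid;
    pthread_attr_t attr;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    rc = pthread_create(&tid, &attr, handle_client, (void *)(long)client_fd);
    pthread_attr_destroy(&attr);
    return rc;                       /* 0 on success */
}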
The problem is that pthread_join blocks the calling thread until the target thread exits. This means you can't really call it speculatively: the main thread would not be able to do anything else until that thread has exited.
One solution is for each child thread to have a flag that is polled by the main thread; the child thread sets that flag just before exiting. When the main thread notices the flag being set, it can join the child thread.
Another possible solution is to use pthread_tryjoin_np, if you have it (which you should, since you're on a Linux system). The main thread can then, in its loop, simply try to join all the child threads in a non-blocking way.
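A minimal sketch of that non-blocking reaping, assuming a simple fixed-size bookkeeping array (hypothetical names; pthread_tryjoin_np is a glibc extension and needs _GNU_SOURCE):

#define _GNU_SOURCE          /* pthread_tryjoin_np is a glibc extension */
#include <pthread.h>

#define MAX_THREADS 32

pthread_t threads[MAX_THREADS];
int       alive[MAX_THREADS];    /* 1 while the slot holds a live thread */

/* Called from the main accept() loop: reaps any client threads that have
 * already exited, without ever blocking on one that is still running. */
void reap_finished_threads(void)
{
    for (int i = 0; i < MAX_THREADS; i++) {
        if (alive[i] && pthread_tryjoin_np(threads[i], NULL) == 0)
            alive[i] = 0;        /* joined; EBUSY means still running */
    }
}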
Yet another solution may be to detach the child threads. Then they will run by themselves and do not need to be joined.
Ah, the ol' clean shutdown problem.
Assuming that you may want to cleanly disconnect the server from all clients under some circumstance or other, your main thread will have to tell the client threads that they're to disconnect. So how could this be done?
One way would be to have a pipe (one per client thread) between the main thread and client thread. The client thread includes the file descriptor for that pipe in its call to poll(). That way the main thread can easily send a command to the client thread, telling it to terminate. The client thread reads the command when poll() tells it that the pipe has become ready for reading.
So your main thread can then send some sort of command through the pipe to the client thread and then call pthread_join() waiting for the client thread to tidy itself up and terminate.
Similarly another pipe (again one per client thread) can be used by the client thread to send information to the main thread. Instead of being stuck in a call to accept(), the main thread can be using poll() to wait for a new client connection and for messages from the existing client threads. A timeout on poll() also allows the main thread to do something periodically.
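A minimal sketch of such a client thread, with hypothetical names and error handling trimmed: the pipe's read end sits in the same poll() set as the client socket, so a one-byte command from the main thread wakes the thread immediately.

#include <poll.h>
#include <unistd.h>

struct client_ctx {
    int sock_fd;   /* the client's socket          */
    int cmd_fd;    /* read end of the control pipe */
};

void *client_thread(void *arg)
{
    struct client_ctx *c = arg;
    struct pollfd fds[2] = {
        { c->sock_fd, POLLIN, 0 },
        { c->cmd_fd,  POLLIN, 0 },
    };

    for (;;) {
        if (poll(fds, 2, -1) < 0)
            continue;                      /* real code: check for EINTR */
        if (fds[1].revents & POLLIN) {
            char cmd;
            if (read(c->cmd_fd, &cmd, 1) == 1 && cmd == 'q')
                break;                     /* main thread said: terminate */
        }
        if (fds[0].revents & (POLLIN | POLLHUP)) {
            /* recv() from the client and handle the data as before */
        }
    }
    /* tidy up, close the socket, then return so pthread_join() succeeds */
    return NULL;
}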
Welcome to the world of the actor model of concurrent programming.
Of course, if you don't need a clean shutdown, you can just let threads terminate as and when they want to, and Ctrl-C the program to close it...
As other people have said, getting the balance of work per thread right is important for efficient scaling.
I have to receive data from 15 different clients, each of them sending on 5 different ports: 15 × 5 sockets in total.
For each client the port numbers are defined and fixed. Example: client 1 uses ports 3001 to 3005, client 2 uses ports 3051 to 3055, etc. They have one thing in common: the first port (3001, 3051) is used to send commands; the other ports send data.
After receiving the data I have to verify the checksum, keep track of received packets, re-request a packet if it is lost, and also write to files on the hard disk.
Restriction: I cannot change the above design and I cannot change from UDP to TCP.
The two methods I'm aware of after reading are:
asynchronous multiplexing using select().
Thread per socket.
I tried the first one and I'm stuck at the point where I get the data. I'm able to receive data. I have some processing to do, so I want to start a thread for each socket, or a thread per group of sockets (say all first ports, all second ports, etc., i.e. 3001, 3051, ...).
But here, if a client sends any data, FD_ISSET becomes true, so if I start a thread there, I end up with a thread per message.
Question:
How do I add the thread code here? If I put pthread_create inside if(FD_ISSET ...), then I create a thread for every message I receive, but I wanted a thread per socket.
while(1)
{
    int nready=0;
    read_set = active_set;
    if((nready = select(fdmax+1,&read_set,NULL,NULL,NULL)) == -1)
    {
        printf("Select Error\n");
        perror("select");
        exit(EXIT_FAILURE);
    }
    printf("number of ready desc=%d\n",nready);
    for(index=1;index <= 15*5;index++)
    {
        if(FD_ISSET(sock_fd[index],&read_set))
        {
            rc = recvfrom(sock_fd[index],clientmsgInfo,MSG_SIZE,0,
                          (struct sockaddr *)&client_sockaddr_in,
                          &sockaddr_in_length);
            if(rc < 0)
                printf("socket %d down\n",sock_fd[index]);
            printf("Received packet from %s: %d\nData: %s\n\n",
                   inet_ntoa(client_sockaddr_in.sin_addr),
                   ntohs(client_sockaddr_in.sin_port),
                   clientmsgInfo);
        }
    } //for
} //while
Create the threads at the startup of the program and divide them to handle data, commands, etc.
How?
1. Let's say you create 2 threads, one for data and another for commands.
2. Make them wait on a lock that the main thread has acquired; that is, the main thread holds two locks, one for each of them.
3. When any client data or command arrives at the main thread's recvfrom(), then depending on the type of the buffer (data, command), copy the buffer into the data shared between the main thread and the other threads, and unlock the corresponding mutex.
4. In the threads, lock the mutex so that the main thread won't corrupt the data; once processing is done in the thread, unlock it and go back to waiting.
A better approach would be to have a queue that the main thread fills and that the other threads consume element-wise, as sketched below.
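A minimal sketch of such a queue, assuming a fixed-size ring buffer and a hypothetical struct packet produced by the main thread's recvfrom(); the main thread pushes, worker threads pop.

#include <pthread.h>

#define QCAP 128

struct packet;                     /* whatever recvfrom() produced */

struct queue {
    struct packet  *slot[QCAP];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
};

void q_push(struct queue *q, struct packet *p)
{
    pthread_mutex_lock(&q->lock);
    /* a real version would also wait on a not_full condition */
    q->slot[q->tail] = p;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

struct packet *q_pop(struct queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    struct packet *p = q->slot[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return p;
}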
I assume that each client context is independent of the others, i.e. one client's socket group can be managed on its own, and the data pulled from the sockets can be processed independently.
You express two possibilities of handling the problem:
Asynchronous multiplexing: in this setting, the sockets are all managed by one single thread. This thread selects which socket must be read next and pulls the data out of it.
Thread per socket: in this scenario, you have as many threads as there are sockets, or more probably groups of sockets, i.e. clients; this is the interpretation I will build on.
In both cases, threads must keep ownership of their respective resources, meaning sockets. If you start moving sockets around between threads, you will make things more difficult than they need to be.
Outside the work that needs to be done, you will need to handle thread management:
How do threads get started?
How and when are they stopped?
What are the error handling policies?
Your question doesn't cover these issues, but they might play a significant role in your final design.
Scenario (2) seems simpler: you have one main "template" (I use the word in a general sense here) for handling a group of sockets by selecting on them, and in the same thread receiving and processing the data. It's quite straightforward to implement, with a struct to contain the context-specific data (socket ports, a pointer to the packet-processing function) and a single function looping on select and process, plus perhaps some other checks for errors and thread life management. A sketch of such a template follows.
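A minimal sketch of that per-client template, with hypothetical names and error handling trimmed:

#include <sys/select.h>

#define PORTS_PER_CLIENT 5

struct client_ctx {                       /* context-specific data        */
    int  fds[PORTS_PER_CLIENT];           /* this client's five sockets   */
    void (*process)(int fd);              /* packet-processing function   */
};

void *client_group_thread(void *arg)
{
    struct client_ctx *c = arg;

    for (;;) {
        fd_set rfds;
        int maxfd = -1;

        FD_ZERO(&rfds);
        for (int i = 0; i < PORTS_PER_CLIENT; i++) {
            FD_SET(c->fds[i], &rfds);
            if (c->fds[i] > maxfd)
                maxfd = c->fds[i];
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;                     /* real code: check for EINTR   */
        for (int i = 0; i < PORTS_PER_CLIENT; i++)
            if (FD_ISSET(c->fds[i], &rfds))
                c->process(c->fds[i]);    /* recvfrom(), checksum, etc.   */
    }
    return NULL;
}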
Scenario (1) requires a different setup: one I/O thread reads all the packets and passes them on to specialized worker threads that do the processing. If a processing error occurs, the worker threads have to generate the ad hoc packet to be sent to the client and pass it to the I/O thread for sending. You will need packet queues both ways to allow communication between I/O and workers, and the I/O thread must check the worker queues somehow for resend requests. So this solution is a bit more expensive in terms of development, but it reduces the I/O contention to one single point. It's also more flexible, in case some processing must be done on data coming from several clients, or if you want to chain up processing somehow. For instance, you could have one thread per client socket, and then another thread per client group of sockets further down the work pipeline, with each step of the pipeline interconnected by a packet queue.
A blend of both solutions can of course be implemented, with one I/O thread per client and pipelined worker threads.
The advantage of both outlined solutions is the fixed number of threads: no need to spawn and destroy threads on demand (although you could design a thread pool to handle that as well).
For a solution involving moving sockets between threads, the questions are:
When should these resources be passed on? What happens after a worker thread has read a packet? Should it return the socket to the I/O thread, or risk a blocking read on the socket for the next packet? If it does a select to poll the socket for more packets, we fall into scenario (2), where each client will have its own I/O thread when there is network traffic from all of them; in that case, what is the gain of the initial I/O thread doing the select?
If it passes the socket back, should the I/O thread wait for all workers to give back their sockets before initiating another select? If it waits, it risks making unserved clients wait for packets already in the network buffers, inducing processing lag. If it does not wait, and returns to select to avoid lag on unserved sockets, then the served ones will have to wait for the next wake-up to see their sockets back in the select pool.
As you can see, the problem is difficult to handle. That's the reason why I recommend exclusive socket ownership by threads, as described in scenarios (1) and (2).
Your solution requires a fixed, relatively small, number of connections.
Create a helper procedure that creates a thread procedure that listens on each of the five ports, blocks in recvfrom(), processes the data, and blocks again. You can then call the helper 15 times to create the threads, as sketched below.
This avoids all polling and allows Linux to schedule each thread when the I/O completes. No CPU is used while waiting, and this can scale to somewhat larger solutions.
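A minimal sketch of that design, with hypothetical names and an assumed maximum packet size: each thread simply sleeps in recvfrom() until the kernel has data for its socket.

#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>

#define BUF_SIZE 1500                     /* assumed maximum packet size */

void *port_thread(void *arg)
{
    int fd = (int)(long)arg;
    char buf[BUF_SIZE];

    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n < 0)
            break;                        /* real code: inspect errno */
        /* verify checksum, track sequence numbers, re-request lost
           packets, write to file - all in this thread's own context */
    }
    return NULL;
}

void spawn_port_threads(const int *fds, int nfds)  /* call with all 75 fds */
{
    for (int i = 0; i < nfds; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, port_thread, (void *)(long)fds[i]);
        pthread_detach(tid);
    }
}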
If you need to scale massively, why not use a single set of ports and get the partner's address from the client_sockaddr_in structure? If the processing takes a material amount of time, you could keep a pool of threads, assign one each time a message is received to continue processing that message, and add the thread back to the pool after the response is sent.
I'm implementing a system that runs game servers. I have a process (the "game controller") that creates two pipe pairs and forks a child. The child dups its STDIN to one pipe, and dups its STDOUT and STDERR to the other, then runs execlp() to run the game code.
The game controller will have two threads. The first will block in accept() on a named UNIX socket receiving input from another application, and the second thread is blocking read()ing the out and error pipe from the game server.
Occasionally, the first thread will receive a command to send a string to the stdin pipe of the game server. At this point, somehow I need to stop the second thread from read()ing so that the first thread can read the reply from the out and error pipe.
(It is worth noting that I will know how many characters/lines long the reply is, so I will know when to stop reading and let the second thread resume reading, resetting the process.)
How can I temporarily switch the read control to another thread, as above?
There are a couple of options. One would be to have the second thread handle all of the reading, and give the first thread a way to signal it to tell it to pass the input back. But this will be somewhat complicated; you will need to set up a method for signalling between the threads, and make sure that the first thread tells the second thread that it wants the input before the second thread reads it and handles it itself. There will be potential for various kinds of race conditions that could make your code unpredictable.
Another option is to avoid using threads at all. Just use select(2) (or poll(2)) to allow one thread to wait for activity on several file descriptors at once. select allows you to indicate the set of file descriptors that you are interested in. As soon as any activity happens on one of them (a connection is available to accept, data is available to read), select will return, indicating the set of file descriptors that are ready. You can then accept or read on the appropriate descriptors, and when you are done, loop and call select again to wait for the next I/O event.
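A minimal sketch of that single-threaded loop for the game controller, with hypothetical descriptor names and error handling trimmed:

#include <sys/select.h>

/* These descriptors are assumed to exist already: */
extern int listen_fd;   /* named UNIX socket: socket()+bind()+listen() */
extern int out_fd;      /* read end of the game server's stdout pipe   */
extern int err_fd;      /* read end of the game server's stderr pipe   */

void controller_loop(void)
{
    for (;;) {
        fd_set rfds;
        int maxfd = listen_fd;

        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        FD_SET(out_fd, &rfds);
        FD_SET(err_fd, &rfds);
        if (out_fd > maxfd) maxfd = out_fd;
        if (err_fd > maxfd) maxfd = err_fd;

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;                       /* real code: handle EINTR */

        if (FD_ISSET(listen_fd, &rfds)) {
            /* accept() the command connection, read the command, and
               write() the string to the game server's stdin pipe */
        }
        if (FD_ISSET(out_fd, &rfds) || FD_ISSET(err_fd, &rfds)) {
            /* read() the output; since the reply length is known, the
               reply can be told apart from ordinary log output */
        }
    }
}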
I have written a server program that can handle multiple clients by using fork(). I have a signal handler, error checking where needed, and everything works correctly. It is set up so that if a client enters 'quit', the server stops accepting connections, lets the open clients finish their communication, and closes once all clients are closed. To do this, whenever 'quit' is entered, I set an int flag to 0. Since the variables in each child process belong only to that process, and do not affect the other child processes or the parent process, I can't keep track of when the flag is set to 0. In my signal handler I am checking
if( count == 0 && flag == 0)
//close server and exit program
count is the number of open clients, which will obviously be zero after they have all closed (this has been error-checked to make sure it is correct). Does anyone know of a way I can create a flag that can be set inside one client and have that same value in every client? If that makes sense... I am coding in C, by the way.
You need a single server process for everyone to speak to; it can broadcast back to all the clients to update them with the client count. In essence, speaking to and keeping track of all the other clients without a lot of complication is a bit of a challenge.
This would also be the place to route messages between different clients.
Hopefully this sheds some light.
The child process receiving the quit command needs to send a signal (such as SIGUSR1; I don't recommend SIGTERM for this purpose unless you are going to be closing out all the clients as soon as possible) to its parent, and let the parent set the flag in its own memory space in that signal handler. (The parent can let everyone know its pid by just storing the result of getpid() somewhere.)
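A minimal sketch of that route, assuming SIGUSR1 and, for simplicity, using getppid() in the child instead of a stored pid (helper names hypothetical):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t accepting = 1;   /* the parent's flag */

static void on_quit(int sig)
{
    (void)sig;
    accepting = 0;          /* only flip the flag: async-signal-safe */
}

/* parent, before the accept() loop: */
void install_quit_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_quit;
    sigaction(SIGUSR1, &sa, NULL);
}

/* child, when its client sends "quit": */
void tell_parent_to_quit(void)
{
    kill(getppid(), SIGUSR1);
}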
There are many IPC mechanisms that can do this; in particular, semaphores come to mind. You could create a set of two semaphores and use one for your child count and one for your quit flag. When you start your program, the client count would be initialized to zero and the quit flag would be initialized to one. When each child starts, it adds one to the client count, and subtracts one when it exits. When a client receives the quit command, it zeroes the quit flag. The main program knows everything is done when both semaphores reach zero. A sketch follows.
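A minimal sketch of that idea using System V semaphores, which (unlike POSIX semaphores) support blocking until a value reaches zero via sem_op == 0 in semop(); all names here are hypothetical.

#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; };     /* caller must define this; see semctl(2) */

enum { SEM_COUNT = 0, SEM_QUIT = 1 };

int setup_sems(void)
{
    int semid = semget(IPC_PRIVATE, 2, 0600);   /* inherited across fork() */
    union semun v;
    v.val = 0; semctl(semid, SEM_COUNT, SETVAL, v);  /* no clients yet       */
    v.val = 1; semctl(semid, SEM_QUIT,  SETVAL, v);  /* quit flag starts at 1 */
    return semid;
}

static void sem_adjust(int semid, int which, int delta)
{
    struct sembuf op = { (unsigned short)which, (short)delta, 0 };
    semop(semid, &op, 1);
}

/* child on start:       sem_adjust(semid, SEM_COUNT, +1);
 * child on exit:        sem_adjust(semid, SEM_COUNT, -1);
 * child reading "quit": sem_adjust(semid, SEM_QUIT,  -1); */

void wait_until_done(int semid)
{
    /* sem_op == 0 blocks until that semaphore's value is zero; asking
     * for both in one semop() waits until both conditions hold at once */
    struct sembuf ops[2] = {
        { SEM_QUIT,  0, 0 },
        { SEM_COUNT, 0, 0 },
    };
    semop(semid, ops, 2);
}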
Or (if possible) you could just start threads instead of forking processes, and use two global ints for count and flag and a global mutex to protect them. The pthreads API is fairly easy to use for simple things where not a lot of inter-thread communication needs to occur, and creating a thread is faster than fork+exec.