I am implementing a multicast server that sends a message every X seconds to a multicast address.
I am also part of the multicast group and I will also receive messages from other senders in that group.
My question is: can I use sleep(X) between sends while still receiving and processing other messages from the group, or does sleep() block?
sleep() blocks execution, but only in the thread from which you call it. I would suggest that you create two threads, one for broadcasting and one for listening. Then make sure that you synchronize any data shared between the threads with mutexes.
When you call sleep(), only the calling thread gets suspended. All other threads will continue running, so you can continue receiving the data on the concurrently running threads.
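For example, a minimal two-thread sketch along those lines (the multicast socket setup is omitted, and the names are purely illustrative):

```c
#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static int sock;                        /* multicast socket, set up elsewhere */
static struct sockaddr_in group_addr;   /* the multicast group address */
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static void *sender(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(5);                       /* suspends only this thread */
        sendto(sock, "hello", 5, 0,
               (struct sockaddr *)&group_addr, sizeof(group_addr));
    }
    return NULL;
}

static void *receiver(void *arg)
{
    (void)arg;
    char buf[1500];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
        if (n > 0) {
            /* process the datagram; take state_lock if you touch shared data */
        }
    }
    return NULL;
}

/* in main(): pthread_create() one sender and one receiver, then pthread_join() */
```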
Yes, sleep() is blocking. You haven't said how you're implementing the server, but if it's built around a select() loop, you should use the timeout argument to select(), together with gettimeofday() or clock_gettime() and a little arithmetic, to determine when you should next send a message, whether that time has already passed, and if not, how long is left (which you can use as the select() timeout). The timeradd(), timersub(), and timercmp() macros can help with that.
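Roughly, a single-threaded sketch of that approach might look like this (send_message() and handle_input() are hypothetical helpers, and the 5-second interval is just an example):

```c
#include <sys/select.h>
#include <sys/time.h>
#include <stddef.h>

#define INTERVAL_SEC 5                     /* X: how often to send */

void send_message(int sock);               /* hypothetical: send to the group */
void handle_input(int sock);               /* hypothetical: process a datagram */

void serve(int sock)
{
    struct timeval next, now, wait;

    gettimeofday(&now, NULL);
    next = now;
    next.tv_sec += INTERVAL_SEC;

    for (;;) {
        gettimeofday(&now, NULL);
        if (!timercmp(&now, &next, <)) {   /* the deadline has passed */
            send_message(sock);
            next = now;
            next.tv_sec += INTERVAL_SEC;
        }
        timersub(&next, &now, &wait);      /* time left until the next send */

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        if (select(sock + 1, &rfds, NULL, NULL, &wait) > 0 &&
            FD_ISSET(sock, &rfds))
            handle_input(sock);            /* a datagram arrived from the group */
    }
}
```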
I am coding a game-server that allows up to 1100 concurrent connections using a thread-per-connection approach. Every time a login packet is read from the client socket, I want to give it 5 seconds to connect, otherwise gracefully close the connection and release the thread to the pool.
I know about alarm() for sending the process a SIGALRM, but which thread receives the signal is not defined. I also tried the setitimer function, but it also sends the signal to the whole process. Blocking the signal in all threads but ours is impossible because I need to get the signals in all 5 threads.
Is there any way of doing this without changing the entire server architecture?
Note: This is not a personal project, so changing the thread-per-connection model is not an option; please consider answers suggesting that to be off-topic.
Threads and signals don't mix well, for the reasons you found out -- it's indeterminate which thread will receive the signal.
A better way to get a timeout within a thread is to set the socket to non-blocking mode and then run a while loop around select() and recv(). Use the timeout argument to select() to ensure that select() wakes up by the end of your 5-second deadline, pass your socket in as part of the read fd_set argument, and keep in mind that if the connection is TCP, the data from your socket may arrive in multiple small chunks (hence the while loop, to collect all of them into a buffer).
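A rough sketch of such a loop, with a hypothetical recv_with_deadline() helper and the 5-second deadline passed in as a parameter:

```c
#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Collects data until the buffer is full, the peer closes, or the deadline
   passes. Returns the number of bytes collected, or -1 on error. */
ssize_t recv_with_deadline(int fd, char *buf, size_t buflen, int deadline_sec)
{
    size_t total = 0;
    struct timeval deadline, now, left;

    gettimeofday(&deadline, NULL);
    deadline.tv_sec += deadline_sec;

    while (total < buflen) {
        gettimeofday(&now, NULL);
        if (!timercmp(&now, &deadline, <))
            break;                              /* deadline passed */
        timersub(&deadline, &now, &left);

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        int r = select(fd + 1, &rfds, NULL, NULL, &left);
        if (r == 0)
            break;                              /* timed out */
        if (r < 0) {
            if (errno == EINTR)
                continue;
            return -1;
        }

        ssize_t n = recv(fd, buf + total, buflen - total, 0);
        if (n <= 0)
            break;                              /* peer closed or error */
        total += (size_t)n;
        /* in a real protocol, check here whether the login packet is
           complete and return early once it is */
    }
    return (ssize_t)total;
}
```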
I have a question about timeouts in C.
I'm writing a server application that uses POSIX threads to accept multiple simultaneous connections. Implementing timeouts was harder than I expected, because I read the message (an HTTP request) in parts: first the start line, then the headers, and so on. I initially used select() to detect whether the socket was ready for reading, but that way, if the client sends only the start line, the server keeps waiting for the headers and body without ever timing out. So I put all the code that reads the message into one function, and I want to implement a timeout for the entire function: if the function doesn't return within X seconds, a timeout handler is called and the thread exits.
[Things that I have tried]
Putting multiple select() calls (one for every socket read), but that ended up as a mess of calculating the remaining time for each operation.
I didn't actually try an alarm signal, as I've heard that signals affect the entire process rather than a specific thread, so one timeout would end up timing out every parallel connection.
Thanks in advance.
There is no proper way to terminate a thread function other than letting it finish.
Every attempt to finish a thread from the outside can lead to resource leaks (mostly, but not only, memory), state variables left in a nondeterministic state, and so on. Please don't do it. Never. The normal way of terminating a thread function from the outside is to make it listen to some means of inter-thread communication (a synchronization object, a volatile flag, or even a message loop) and exit the function when asked to. Normally you would realize this with a single test in the loop condition of the thread, if it loops, or a test before every long-running operation inside the thread.
Now if you store the timestamp of the function's start and test at every loop iteration / before every long-running operation whether the current timestamp exceeds the start timestamp plus the timeout, you can exit from inside your thread, and voilà: your problem is solved.
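A minimal sketch of that timestamp check, with a hypothetical read_next_part() standing in for the per-part reads:

```c
#include <stdbool.h>
#include <time.h>

#define REQUEST_TIMEOUT_SEC 10   /* illustrative value */

/* hypothetical: reads the next piece of the request, returns false when done */
bool read_next_part(int fd);

void handle_request(int fd)
{
    time_t start = time(NULL);

    while (read_next_part(fd)) {
        if (time(NULL) > start + REQUEST_TIMEOUT_SEC) {
            /* timed out: clean up and leave the thread function normally */
            return;
        }
    }
    /* ... build and send the response ... */
}
```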
This is kind of a generic question; however, I have met this problem several times already and I still haven't found the best possible solution.
Let's imagine you have a program (e.g. an HTTP application server) that is multithreaded and that communicates over sockets (TCP, Unix, ...). The main thread uses asynchronous I/O and the select() or poll() POSIX calls to dispatch traffic from/to sockets. There are also worker threads that process requests and provide responses. To send a response back to the client, a worker thread synchronizes with the main thread (the one that polls) 'somehow'. The core of the question is 'how', in terms of what is efficient. I can use pipe(), a socket-based IPC mechanism, but that seems like quite a lot of overhead. I tend to use pthread IPC techniques like mutexes, condition variables, etc., but these will not work with select() or poll().
Is there a common technique in POSIX (and surroundings) that address this conflict?
I guess on Windows there is the WaitForMultipleObjects() function, which allows exactly that.
The example program is crafted to illustrate the issue; I know that I can design the master/worker pattern in a different way, but this is not what I'm asking for. I have other cases where I'm in the same situation.
You could use a signal to poke the worker thread, which will interrupt the select() call and make it fail with EINTR. This gets even easier to do with pselect().
For this to work:
decide on a signal (or allocate a real-time signal)
attach an empty handler function to it (if the signal were ignored, the system call would be automatically restarted)
block the signal, at least in the worker thread.
use the signal mask argument in pselect() to unblock the signal while waiting.
Between threads, you can use pthread_kill to deliver the signal to the worker thread specifically. When another process should send the signal, you can either make sure the signal is blocked in all but the worker thread (so it will be delivered there), or use the signal handler to find out whether the signal was sent to the worker thread, and use pthread_kill to forward it explicitly (the worker thread still doesn't need to do anything in the signal handler).
Due to laziness on my part, I don't have a source code viewer online, but you can clone the LibreVISA git tree, and take a look at src/messagepump.cpp, where this method is used to poke the worker thread after another thread added a file descriptor to the watch list.
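For what it's worth, here is a sketch of that pselect() pattern (SIGUSR1 and the function names are just placeholders):

```c
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <sys/select.h>

static void wakeup_handler(int sig) { (void)sig; /* intentionally empty */ }

void poll_loop(int fd)
{
    /* attach an empty handler so the signal interrupts pselect() rather
       than being ignored (which would restart the call) */
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    /* keep SIGUSR1 blocked except while waiting inside pselect() */
    sigset_t blocked, waitmask;
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &blocked, &waitmask);
    sigdelset(&waitmask, SIGUSR1);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        int r = pselect(fd + 1, &rfds, NULL, NULL, NULL, &waitmask);
        if (r < 0) {
            /* EINTR: another thread did pthread_kill(this_thread, SIGUSR1);
               re-check the watch list / work queue here */
            continue;
        }
        /* handle activity on fd */
    }
}
```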
Simon Richter's answer is very good.
Another alternative might be to make the main thread responsible only for listening for new connections and starting up a worker thread with the connection information, so that the worker is responsible for all subsequent 'transactions' from this source.
My understanding is:
The main thread uses select().
Worker threads process requests forwarded to them by the main thread.
So you need to synchronize between the workers and the main thread, e.g. when a worker finishes a transaction it needs to send the response back to the main thread, which in turn forwards it back to the source.
Why don't you remove the problem of having to synchronize between the worker thread and the main thread by making the worker thread responsible for all transactions from a particular connection?
Thus the main thread is only responsible for listening for new connections and starting up a worker thread with the connection information, i.e. the file descriptor for the new connection.
First of all, the way to wake another thread is to use the pthread_cond_wait / pthread_cond_timedwait calls in thread A to wait, and for thread B to use pthread_cond_broadcast / pthread_cond_signal to pick it up. So, for instance, if B is the producer and A is the consumer, the producer might add items to a linked list protected with a mutex. There would be an associated condition variable such that after the addition of an item, it could wake thread A so that it goes to see whether any new items have arrived on the list and, if so, removes them. I say 'associated' because the same mutex that protects the list can be associated with the condition variable.
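For reference, a minimal producer/consumer sketch with a mutex-protected list and an associated condition variable (the item type is made up):

```c
#include <pthread.h>
#include <stddef.h>

struct item { struct item *next; /* ... payload ... */ };

static struct item *head;                        /* shared list */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

/* thread B: the producer */
void produce(struct item *it)
{
    pthread_mutex_lock(&lock);
    it->next = head;
    head = it;
    pthread_cond_signal(&nonempty);              /* wake the consumer (thread A) */
    pthread_mutex_unlock(&lock);
}

/* thread A: the consumer */
struct item *consume(void)
{
    pthread_mutex_lock(&lock);
    while (head == NULL)
        pthread_cond_wait(&nonempty, &lock);     /* releases the lock while waiting */
    struct item *it = head;
    head = it->next;
    pthread_mutex_unlock(&lock);
    return it;
}
```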
So far so good. Now you mention asynchronous I/O. What I've wanted to do several times is select() or poll() on a set of FDs and a set of condition variables, so that the select()/poll() is interrupted when the condition variable is broadcast. There is no easy way of doing this directly; you cannot simply mix and match.
You thus need to do one of two things. Either:
work around the problem (for instance, use a self-connected pipe() and write one byte to it to wake the select() up, either instead of the condition variable, in addition to the condition variable, or from an additional thread waiting on the condition variable; a sketch of this follows below); or
convert to a more threaded model. I.e. use one thread for sending and one thread for receiving, with a producer/consumer model, so the sender thread simply removes items from a list/buffer and sends them (blocking if necessary), and the receiver waits for I/O (blocking if necessary) and adds it to the list (this is what you put in italics at the end).
The second is a major design change for those of us brought up on asynchronous I/O, and the first is ugly. You are not the first to be dismayed by this, but I've not found an easy way around it. Regarding the inefficiency of the first: if you only write one character to the self-pipe to wake the select loop, I don't think you are going to see much overhead.
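Here is a sketch of the self-pipe workaround from the first option (the names are illustrative; create the pipe once at startup):

```c
#include <sys/select.h>
#include <unistd.h>

static int wake_pipe[2];   /* created once with pipe(wake_pipe) */

/* worker thread: call this after queuing a response for the main thread */
void wake_main_loop(void)
{
    char c = 0;
    (void)write(wake_pipe[1], &c, 1);            /* one byte is enough */
}

/* main thread: include the read end in the select() set */
void main_loop(int listen_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        FD_SET(wake_pipe[0], &rfds);
        int maxfd = listen_fd > wake_pipe[0] ? listen_fd : wake_pipe[0];

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;

        if (FD_ISSET(wake_pipe[0], &rfds)) {
            char buf[64];
            (void)read(wake_pipe[0], buf, sizeof(buf));  /* drain the wakeups */
            /* pick up the responses queued by the workers and send them */
        }
        if (FD_ISSET(listen_fd, &rfds)) {
            /* handle socket traffic as usual */
        }
    }
}
```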
I am programming an HTTP server. There is a main daemon spawning a bunch of listeners, which are threads or processes, depending on user settings. Upon creation of a listener, the socket descriptor is passed to it, and its job is just to listen for connections (duh). A semaphore wraps the call to listen so as to avoid the thundering herd effect.
My problem is how to quit the server. In this situation, where the listeners are blocked on a semaphore, how is the daemon going to tell them to close? The daemon can't just kill them; maybe one of them is responding to a request...
I want to keep the design as simple as possible, but I can't find a solution to this problem.
Here are some ugly workarounds:
Set a timeout for the semaphore. Wake up. Should I close? No? Ok, back to sleep;
Just kill them;
Array of booleans in shared memory, meaning responding/blocked, the daemon kills accordingly. The best so far, but not so simple.
What do you say?
Thanks.
A clean way to solve this problem is to make each listener wait on two semaphores. The first is the one you use now, and the second, when it becomes signaled, means it's time to quit. I believe your system is Linux, since you used the term daemon. The function select() does just that: it waits on multiple (file-descriptor-like) objects and returns when one of them becomes signaled. You also know from the function which one got signaled, so there is your solution.
On Windows the function is WaitForMultipleObjects()
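Back on Linux: POSIX semaphores are not file descriptors, so one way to approximate this idea (a substitute, not the same mechanism) is to make the 'quit' object a pipe that the daemon closes at shutdown, and have each listener select() on it together with the listening socket:

```c
#include <sys/select.h>
#include <unistd.h>

static int quit_pipe[2];   /* created once with pipe(quit_pipe) by the daemon */

/* daemon, at shutdown time */
void request_quit(void)
{
    close(quit_pipe[1]);   /* every listener's read end becomes readable */
}

/* listener: wait for either a new connection or the quit notification */
int wait_for_work(int listen_fd)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(listen_fd, &rfds);
    FD_SET(quit_pipe[0], &rfds);
    int maxfd = listen_fd > quit_pipe[0] ? listen_fd : quit_pipe[0];

    if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
        return -1;                  /* error (or interrupted) */
    if (FD_ISSET(quit_pipe[0], &rfds))
        return 0;                   /* time to quit */
    return 1;                       /* listen_fd is readable: go accept() */
}
```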
Send a SIGTERM or, if you prefer, SIGUSR1 to the children and implement handling of this signal so that they finish the current request and exit gracefully.
If they are waiting on a semaphore, you should use an interruptible wait so that receiving a signal will wake them up.
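A sketch of that, assuming the listeners block in sem_wait() (the flag and handler names are made up; for listener threads rather than processes you would target them with pthread_kill instead):

```c
#include <errno.h>
#include <semaphore.h>
#include <signal.h>
#include <string.h>

static volatile sig_atomic_t quitting;

static void term_handler(int sig) { (void)sig; quitting = 1; }

void install_term_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = term_handler;   /* no SA_RESTART, so sem_wait() fails with EINTR */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGTERM, &sa, NULL);
}

void listener_loop(sem_t *accept_sem)
{
    for (;;) {
        if (sem_wait(accept_sem) == -1) {
            if (errno == EINTR && quitting)
                break;              /* woken by the signal: exit gracefully */
            continue;
        }
        /* ... accept() and handle one connection ... */
        if (quitting)
            break;                  /* finish the current request, then leave */
    }
}
```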
In the past I've used a global flag that client-handling threads could check to find out whether they need to 'clean up shop', and then waited for them all to finish, but I'd also be interested to know if there's an even better way. (Not sure what language, but in most you can check whether your thread is still running.)
It looks like Linux doesn't implement pthread_suspend and pthread_continue, but I really need them.
I have tried pthread_cond_wait, but it is too slow. The work being threaded mostly executes in 50 us but occasionally runs upwards of 500 ms. The problem with cond_wait is two-fold. The mutex locking takes time comparable to the microsecond executions, and I don't need locking. Second, I have many worker threads and I don't really want to create N condition variables just to wake them up.
I know exactly which thread is waiting for which work and could just pthread_continue that thread. A thread knows when there is no more work and can easily pthread_suspend itself. This would use no locking, avoid the stampede, and be faster. The problem is... there is no pthread_suspend or pthread_continue.
Any ideas?
Make the thread wait for a specific signal.
Use pthread_sigmask and sigwait.
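A sketch of the sigwait() approach, using SIGUSR1 as the wakeup signal (the worker/dispatcher split here is an assumption):

```c
#include <pthread.h>
#include <signal.h>

static sigset_t wake_set;

/* call once in main(), before creating the workers, so every thread
   inherits a mask with SIGUSR1 blocked */
void init_wake_signal(void)
{
    sigemptyset(&wake_set);
    sigaddset(&wake_set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &wake_set, NULL);
}

void *worker(void *arg)
{
    (void)arg;
    int sig;
    for (;;) {
        /* "suspend": block until the dispatcher targets this thread;
           a wakeup sent while we are still working stays pending, so it
           is not lost */
        sigwait(&wake_set, &sig);
        /* ... do the work assigned to this thread ... */
    }
    return NULL;
}

/* dispatcher: "continue" one specific worker */
void wake_worker(pthread_t tid)
{
    pthread_kill(tid, SIGUSR1);
}
```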
Have the threads block on a pipe read. Then dispatch the data through the pipe. The threads will awaken as a result of the arrival of the data they need to process. If the data is very large, just send a pointer through the pipe.
If specific data needs to go to specific threads you need one pipe per thread. If any thread can process any data, then all threads can block on the same pipe and they will awaken round robin.
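A sketch of the per-thread pipe variant (the work struct and context are illustrative):

```c
#include <pthread.h>
#include <unistd.h>

struct work { int job_id; /* ... whatever the job needs ... */ };

struct worker_ctx {
    int pipe_rd;   /* the worker blocks reading this end */
    int pipe_wr;   /* the dispatcher writes pointers into this end */
};

void *worker(void *arg)
{
    struct worker_ctx *ctx = arg;
    struct work *w;
    /* blocks until the dispatcher sends a pointer for this thread */
    while (read(ctx->pipe_rd, &w, sizeof w) == (ssize_t)sizeof w) {
        /* ... process *w ... */
    }
    return NULL;   /* write end closed: time to exit */
}

void dispatch(struct worker_ctx *ctx, struct work *w)
{
    /* send the pointer, not the payload, when the data is large */
    (void)write(ctx->pipe_wr, &w, sizeof w);
}
```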
It seems to me that such a solution (that is, using "pthread_suspend" and "pthread_continue") is inevitably racy.
An arbitrary amount of time can elapse between the worker thread finishing work and deciding to suspend itself, and the suspend actually happening. If the main thread decides during that time that that worker thread should be working again, the "continue" will have no effect and the worker thread will suspend itself regardless.
(Note that this doesn't apply to methods of suspending that allow the "continue" to be queued, like the sigwait() and read() methods mentioned in other answers).
Maybe try pthread_cancel, but be careful about any locks that need to be released; read the man page to understand cancellation state.
Why do you care which thread does the work? It sounds like you designed yourself into a corner and now you need a trick to get yourself out of it. If you let whatever thread happened to already be running do the work, you wouldn't need this trick, and you would need fewer context switches as well.