I am combining GTK+, WinAPI and Winsock to create a graphical client-server interface, a waiting room. nUsers is a variable that tracks the number of clients successfully connected.
Within a Windows thread, created by:
CreateThread(NULL, 0, action_to_users, NULL, 0, NULL);
I use the do-nothing while loop so that it freezes until a user connects.
while(!nUsers);
However, it never passes through the loop, as if nUsers never becomes > 0. nUsers does count the number of connected clients properly; I constantly monitor it and use it in a variety of other functions.
To prove my point, something even stranger happens.
If I make the loop
while(!nUsers) { printf("(%i)\n", nUsers); }
to spam the console with printed text (it doesn't matter what text, as long as it is not an empty string), it works as intended.
What could possibly be going on here?
Regarding the original problem: the compiler is free to cache the value of nUsers, since the variable is not modified within the loop. Marking the variable volatile prevents this optimization, as described here.
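A minimal sketch of the fix (the thread body is illustrative; note that the loop still burns CPU, which the suggestions below address):

#include <windows.h>

/* Shared counter, updated from the networking code whenever a client
 * connects. 'volatile' forces a fresh read of nUsers on every iteration
 * instead of letting it be cached in a register. */
volatile int nUsers = 0;

DWORD WINAPI action_to_users(LPVOID param)
{
    while (!nUsers)
        ;   /* still spins, but now actually sees the update */
    /* ... react to the first connected user ... */
    return 0;
}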
Regarding what you're trying to achieve - it looks like a producer-consumer pattern, where the thread(s) handling the sockets are producers and your GUI thread is a consumer. You can slow down your consumer loop to only loop when new data is available using:
semaphores as showcased here - the producer thread increments the count on the semaphore while the consumer decrements it upon dequeuing a work item.
Events like here - the producer thread signals an event while the consumer thread waits for it to become signalled. You can queue the work to allow more than one item to be processed (see the sketch after this list).
Condition variables (XP+) - here a variable you're waiting for gets signalled upon meeting certain criteria.
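As a sketch of the event-based variant (Windows flavor, to match the question; the producer-side function name is illustrative):

#include <windows.h>

HANDLE hUserConnected;      /* created at startup with:
                               CreateEvent(NULL, FALSE, FALSE, NULL)
                               (auto-reset, initially non-signalled) */
volatile LONG nUsers = 0;

/* Producer: socket code, called when a client connects. */
void on_client_connected(void)
{
    InterlockedIncrement(&nUsers);
    SetEvent(hUserConnected);                    /* wake the waiting thread */
}

/* Consumer: the thread started with CreateThread(). */
DWORD WINAPI action_to_users(LPVOID param)
{
    WaitForSingleObject(hUserConnected, INFINITE);  /* sleeps, no CPU burned */
    /* ... handle the newly connected user(s) ... */
    return 0;
}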
Related
I have this code:
int _break = 0;
while (_break == 0) {
    if (someCondition) {
        //...
        if (someOtherCondition) _break = 1; // exit the loop
        //...
    }
}
The problem is that if someCondition is false, the loop gets heavy on the CPU. Is there a way to sleep for some milliseconds in the loop so that the CPU does not take such a huge load?
Update
What I'm trying to do is a server-client application, without using sockets, just using shared memory, semaphores and system calls. I'm doing this on Linux.
someOtherCondition becomes true when the application receives the "kill" signal, while someCondition is true if the message received is valid. If the message is not valid, the program keeps waiting for a valid one and the while loop becomes a heavy infinite loop (it works, but it loads the CPU too much). I would like to make it lightweight.
I'm working on Linux (Debian 7).
If you have a single-threaded application, then it won't make any difference whether you suspend the execution or not.
If you have multiple threads running, then you should use a binary semaphore instead of polling a global variable.
This thread should acquire the semaphore at the beginning of each iteration, and one of the other threads should release the semaphore whenever you wish this thread to run.
This method is also known as "producer-consumer".
When a thread attempts to acquire a binary semaphore:
If the semaphore is released, then the calling thread acquires it and continues the execution.
If the semaphore is already acquired, then the calling thread "asks" the OS to block itself, and the OS will unblock it as soon as some other thread releases the semaphore.
The entire procedure is "atomic", i.e., no context switch between threads can take place while the semaphore code executes. This is generally achieved by disabling interrupts. Everything is implemented within the semaphore code, so you need not "worry" about it.
Since you did not specify what OS you're using, I cannot provide any technical details (i.e., code)...
UPDATE:
If you are trying to protect a critical section inside the loop (i.e., if you are accessing some other global variable, which is also being accessed by other threads, and at least one of those threads is changing that global variable), then you should use a Mutex instead of a binary semaphore.
There are two advantages for using a Mutex in this case:
It can be released only by the thread which has acquired it (thus ensuring mutual exclusion).
It can resolve a specific type of deadlock that occurs when a high-priority thread is waiting for a low-priority thread to complete, while a medium-priority thread is preventing the low-priority thread from completing (a.k.a. priority inversion).
Of course, a Mutex is required only if you really need to ensure mutual exclusion for accessing the data.
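As a minimal illustration of the usage (a hypothetical shared counter, pthread flavor since the asker is on Linux):

#include <pthread.h>

/* Hypothetical shared data guarded by a mutex. */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;

void touch_shared_data(void)
{
    pthread_mutex_lock(&lock);     /* only one thread at a time past this point */
    shared_counter++;              /* the critical section */
    pthread_mutex_unlock(&lock);   /* must be unlocked by the same thread that locked it */
}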
UPDATE #2:
Now that you've added some specific details on your system, here is the general scheme:
Step #1 - Before starting your threads:
// Declare a global semaphore: sem_t sem;  (#include <semaphore.h>)
// Initialize it with count = 0 (i.e., as acquired): sem_init(&sem, 0, 0);
Step #2 - In this thread:
// Declare the global variable 'sem' as 'extern' if it lives in another file
while(1)
{
    sem_wait(&sem);   // blocks until another thread posts the semaphore
    //...
}
Step #3 - In the Rx code (the receiving thread or handler):
// Declare the global variable 'sem' as 'extern' here too
sem_post(&sem);   // releases the semaphore, unblocking the waiting thread
Spinning in a loop without any delay will use a fair amount of CPU; you're right that a small time delay will reduce that.
Using Sleep() is the easiest way; on Windows it is declared in the windows.h header.
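Since the asker is on Linux, nanosleep() plays the same role there; a minimal helper, as an illustration:

#include <time.h>

/* Pause the polling loop for roughly 'ms' milliseconds. */
static void sleep_ms(long ms)
{
    struct timespec ts = { .tv_sec = ms / 1000,
                           .tv_nsec = (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}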
Having said that, the most elegant solution would be to thread your code so that it only ever runs when your condition is true; that way it will truly sleep until you wake it up.
I suggest you look into pthreads and mutexes. That will allow you to sleep that loop of yours entirely until the condition becomes true.
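As a sketch of that approach: with pthreads the usual tool is a condition variable paired with a mutex, so the loop sleeps in the kernel until a message arrives (names are illustrative):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
int message_ready = 0;   /* stands in for someCondition */

/* Consumer: sleeps inside pthread_cond_wait() instead of spinning. */
void wait_for_message(void)
{
    pthread_mutex_lock(&lock);
    while (!message_ready)                /* re-check guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    message_ready = 0;
    /* ... process the message ... */
    pthread_mutex_unlock(&lock);
}

/* Producer: called when a valid message arrives. */
void post_message(void)
{
    pthread_mutex_lock(&lock);
    message_ready = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}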
Hope that helps in some way :)
I have an application that waits for clients to connect. Each time a client connects, a new frame gets created (with the new socket file descriptor). I know how many clients will connect; after I reach that number, I just run pthread_join in a for loop.
My problem is that I would like the main thread to control all the other threads. My goal is to have each thread send the same message back to the client, at the same time, and only once. There are multiple messages a thread can send.
My current thinking is to define a list of commands, as follows:
const char *commands[] = {
    "TERMINATE",   /* a string literal already ends in '\0' */
    .... };
And then specify a command number that represents which command to use in that char* array. All threads will do something like
write(sockfd, buffer[commandNumber], length[commandNumber]);
I thought about waiting on a condition variable, but I see two problems:
1) I want to make sure that each thread, although synchronized, executes the command only once.
2) The main thread that initiates the command has to know when all those threads are done executing the command.
The only way I see to achieve 2) is to keep track of a counter (protected by a mutex): when each thread executes the command, it increments that counter. I am not sure I will be able to keep a thread from running the command twice.
What is the best way to coordinate multiple threads to execute a single action at once, and to know when that action has finished executing in every thread?
You might use a barrier to gate the operation.
Synchronizing the send
The main thread initializes a barrier named "Ready" to N+1. Then it begins accept()ing N client connections, spawning a worker thread for each. The new worker threads immediately wait on barrier "Ready".
After spawning the Nth (and last) worker, the main thread sets the desired command (perhaps using a global commandNumber). Then the main thread waits on barrier "Ready". As soon as all workers and the main thread have arrived (reaching the barrier's limit of N+1), all threads are released, knowing that they are ready to issue their command immediately.
(A common alternate approach is to use a predicate and condition variable rather than a barrier. For example, the main thread might spawn the Nth worker and then cond_broadcast() that it has set a flag ready = 1. This approach is flawed: the main thread cannot know that the Nth worker, or indeed any of the workers, is yet waiting on that condition. The barrier solves this problem.)
Indicating completion
Another N+1 barrier, "AllDone", could be used to indicate that the workers are all done. A semaphore that each worker posts once, with the main thread waiting on it N times, would do the same. Having the workers close() their connections and the main thread select()ing or poll()ing connections would convey the same information, too.
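Here is a hedged, minimal sketch of the two-barrier scheme with POSIX barriers; N, the worker body, and the command/length tables are assumptions carried over from the question:

#include <pthread.h>
#include <unistd.h>

#define N 4                         /* number of clients, known in advance */

extern const char *commands[];      /* the command table from the question */
extern const int   length[];        /* byte length of each command */

pthread_barrier_t ready, all_done;  /* both initialized to N + 1 in main */
int commandNumber;                  /* set by main before releasing "ready" */

void *worker(void *arg)
{
    int sockfd = *(int *)arg;

    pthread_barrier_wait(&ready);       /* rendezvous: N workers + main */
    /* each worker sends the command exactly once */
    write(sockfd, commands[commandNumber], length[commandNumber]);
    pthread_barrier_wait(&all_done);    /* report completion to main */
    return NULL;
}

/* In main:
 *   pthread_barrier_init(&ready,    NULL, N + 1);
 *   pthread_barrier_init(&all_done, NULL, N + 1);
 *   ...accept N connections, spawning worker() for each...
 *   commandNumber = ...;               set the command
 *   pthread_barrier_wait(&ready);      releases everyone at once
 *   pthread_barrier_wait(&all_done);   returns once all have sent
 */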
I am sorry for the basicness of this question, but I am having an issue here. I have a client-server program. I don't know beforehand how many connections will come, but they are not infinite. At the end, after all connections are closed, some results are output. The problem I am having is that accepting connections happens in an infinite while loop; how is that loop stopped so the results can be output?
Thanks
You need to have some form of condition to break out of your loop. In your case a timeout would probably work best: if you don't get any new clients for x seconds, you stop looking for clients. The same goes for any form of connection error.
Anything more requires looking at the code you are using.
Handling EINTR on error from accept(2) by terminating the program usually works; hitting ^C then does the job.
You could install a handler for the SIGTERM signal which would set a global volatile sig_atomic_t variable, and test that variable in your multiplexing loop (probably around poll or select). Remember that signal handlers cannot call many functions (only the async-signal-safe ones).
Gracefully catching SIGTERM is expected of most Linux or POSIX servers.
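A minimal sketch of that approach (the handler does nothing but set the flag, which is all an async-signal-safe handler needs to do):

#include <signal.h>

volatile sig_atomic_t terminate = 0;

static void on_sigterm(int sig)
{
    (void)sig;
    terminate = 1;
}

/* During startup:
 *   struct sigaction sa;
 *   memset(&sa, 0, sizeof sa);
 *   sa.sa_handler = on_sigterm;
 *   sigaction(SIGTERM, &sa, NULL);
 *
 * In the multiplexing loop, test 'terminate' each time poll()/select()
 * returns; they come back early with EINTR when the signal arrives. */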
You could consider using an event handling library like libev, libevent etc.
Although my background is with Windows NT, the function "names" below are generic ones that should be available in any multi-threading environment.
If the main thread can determine when the child thread in question should terminate, it can do this either by having the child thread loop on a boolean, such as "terminate_condition", or by terminating the thread through its handle.
// child thread
while (!terminate_condition)
{
    // accept connections
}
child_thread_done=TRUE;
// output results
exit_thread ();
// main thread
// (both flags should be volatile so reads are not cached, per the first answer)
terminate_condition=FALSE;   // initialized here, before the child starts, to avoid a race
child_thread_done=FALSE;
child_thread=create_thread (...);
// monitor connections to determine when done
terminate_condition=TRUE;
while (!child_thread_done)
{
    sleep (1);
}
// or maybe output results here?
exit_process ();
This controlled termination solution requires that only one thread writes to the child_thread_done boolean and that any other thread only reads it; as noted above, the flags should also be declared volatile so the compiler does not cache them.
Or
// child thread
while (1)
{
    // accept connections
}
// main thread
child_thread=create_thread (...);
// monitor connections to determine when done
kill_thread (child_thread);
// output results
exit_process ();
The second form is messier since it simply kills the child thread. In general it is better to have the child thread perform a controlled termination, especially if it has allocated resources (which become the responsibility of the process as a whole rather than just the allocating thread).
If there are many child threads working with connections, a synchronized termination mechanism is necessary. One option is a struct with as many members as there are child threads: a terminating thread sets its "terminated" boolean to true and exits, and the main thread monitors the struct to make sure all child "terminated" booleans are true before proceeding. Another is a counter containing the number of child threads operating: when a child is about to terminate, it takes exclusive control of the counter via a spinlock, decrements it, and frees the lock before terminating; the main thread does not proceed until the counter reaches zero.
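A minimal sketch of the counter variant, substituting a pthread mutex for the spinlock (names are illustrative):

#include <pthread.h>

pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
int threads_running = 0;   /* set to the number of children before spawning them */

/* Each child calls this just before it exits. */
void child_exiting(void)
{
    pthread_mutex_lock(&count_lock);
    threads_running--;
    pthread_mutex_unlock(&count_lock);
}

/* The main thread polls this until every child has checked out. */
int all_children_done(void)
{
    pthread_mutex_lock(&count_lock);
    int done = (threads_running == 0);
    pthread_mutex_unlock(&count_lock);
    return done;
}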
I wrote a simple program that implements master/worker scheme where the master is the main thread, and workers are created by it.
The main thread writes something to a shared buffer, and the worker threads read this shared buffer, writing and reading to shared buffer are organized by read/write lock.
Unfortunately, this scheme definitely leads to starvation of the main thread, since a single write has to wait for several reads to complete. One possible solution is increasing the priority of the master thread, so that when it wants to write something it gets immediate access to the shared buffer.
According to a great post on a similar issue, I discovered that manipulating the priority of a thread under the SCHED_OTHER policy is probably not allowed; only the nice value can be changed.
I wrote a procedure to give the worker threads lower priority than the master thread, but it does not seem to work correctly.
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

void assignWorkerThreadPriority(pthread_t* worker)
{
    struct sched_param worker_sched_param;
    worker_sched_param.sched_priority = 0; // any value other than 0 gives an error?
    int policy = SCHED_OTHER;

    // pthread_setschedparam() returns the error code directly; it does not set errno
    int result = pthread_setschedparam(*worker, policy, &worker_sched_param);
    printf("Result of changing priority is: %d - %s\n", result, strerror(result));
}
I have a two-fold question:
How can I set the nice value of the worker threads to avoid starving the main thread?
If that is not possible, how can I change the scheduling policy to one that allows changing the priority?
Edit: I managed to run the program using other policies, such as SCHED_FIFO; all I had to do was run the program as a superuser.
You cannot avoid problems using a read/write lock when the read and write usage is so even. You need a different method. You need a lock-free message queue or independent work queues or one of many other techniques.
Here is another way to do the job, the way I would do it. The worker can take the buffer away and work on it rather than keeping it shared:
Write thread:
Create work item.
Lock the mutex or CriticalSection protecting the current queue and pointer to queue.
Add work item to queue.
Release the lock.
Optionally signal a condition variable or Event. Another option is for worker threads to check for work on a timer.
Worker thread:
Create a new queue.
Wait for a condition variable or event or other signal, or wait on a timer.
Lock the mutex or CriticalSection protecting the current queue and pointer to queue.
Set the current queue pointer to the new queue.
Release the lock.
Proceed to work on the now private queue.
Delete the queue when all work items complete.
Now the write thread creates more work items. When all the worker threads have their own copies of a queue to work on, it will be able to write many items in peace.
You can modify this. For example, a worker thread may lock the queue and move a limited number of work items off into its own internal queue instead of taking the whole thing.
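As a sketch of that scheme, here is a hedged, minimal version using a singly linked list as the queue and a condition variable for the signal (all names are illustrative; the worker detaches the whole list, which plays the role of swapping in a fresh empty queue):

#include <pthread.h>
#include <stddef.h>

struct work_item { struct work_item *next; /* payload fields here */ };

pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;
struct work_item *queue_head = NULL;   /* the shared "current queue" */

/* Write thread: add an item and signal. */
void push_work(struct work_item *item)
{
    pthread_mutex_lock(&queue_lock);
    item->next = queue_head;
    queue_head = item;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* Worker thread: take the whole queue away and work on it privately. */
struct work_item *take_all_work(void)
{
    pthread_mutex_lock(&queue_lock);
    while (queue_head == NULL)
        pthread_cond_wait(&queue_cond, &queue_lock);
    struct work_item *mine = queue_head;   /* detach the whole list */
    queue_head = NULL;                     /* writer starts a fresh queue */
    pthread_mutex_unlock(&queue_lock);
    return mine;   /* now private: no lock needed while processing */
}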
I'm running a multi-threaded C program (process?), making use of semaphores and pthreads. The threads keep interacting, blocking, waking and printing prompts on stdout continuously, without any human intervention. I want to be able to exit this process gracefully (after printing a message and shutting down all threads, not via a crude Ctrl+C SIGINT) by pressing a keyboard character like #.
What are my options for getting such an input from the user?
What more relevant information could I provide that will help to solve this problem?
Edit:
All your answers sound interesting, but my primary question remains: how do I get user input when I don't know which thread is currently executing? Also, blocking in sem_wait() is interrupted (it fails with EINTR) if a signal such as SIGINT arrives, which may cause a deadlock.
There is no difference in reading standard input from threads except if more than one thread is trying to read it at the same time. Most likely your threads are not all calling functions to read standard input all the time, though.
If you regularly need to read input from the user you might want to have one thread that just reads this input and then sets flags or posts events to other threads based on this input.
If the kill character is the only thing you want, or if this is just going to be used for debugging, then what you probably want to do is occasionally poll for new data on standard input. You can do this by setting standard input to non-blocking and trying to read from it occasionally; if a read returns 0 characters, no keys were pressed. This method has some problems, though. I've never used stdio.h functions on a FILE * after having set the underlying file descriptor (an int) to non-blocking, but I suspect that they may act oddly. You could avoid the stdio functions and use read to sidestep this. There is also an issue I read about once where the block/non-block flag could be changed by another process if you forked and exec-ed a new program that had access to a version of that file descriptor; I'm not sure if this is a problem on all systems. Non-blocking mode can be set or cleared with an fcntl() call.
But you could use one of the polling functions with a very small (0) timeout to see if there is data ready. The poll system call is probably the simplest, but there is also select. Various operating systems have other polling functions.
#include <poll.h>

/* Returns 0 if no data is available on stdin,
 * > 0 if there is data ready,
 * < 0 if there is an error. */
int poll_stdin(void) {
    struct pollfd pfd = { .fd = 0, .events = POLLIN };
    /* Since we only ask for POLLIN we assume that that was the only
     * thing that the kernel would have put in pfd.revents */
    return poll(&pfd, 1, 0);
}
You can call this function from within one of your threads; as long as it returns 0 you just keep going. When it returns a positive number you need to read a character from stdin to see what it was. Note that if you are using the stdio functions on stdin elsewhere, there could actually be other characters already buffered up in front of the new character: poll tells you that the operating system has something new for you, not what C's stdio has buffered.
If you are regularly reading from standard input in other threads then things just get messy. I'm assuming you aren't doing that (because if you are and it works correctly you probably wouldn't be asking this question).
You would have a thread listening for keyboard input; when it receives # as input, it would join() the other threads.
Another way is to trap SIGINT and use it to handle the shutdown of your application.
The way I would do it is to keep a global int "should_die" or something, whose value is 0 or 1, and another global int "died", which keeps track of the number of threads terminated. should_die and died are both initially zero. You will also need two semaphores to provide mutual exclusion around the globals.
At a certain point, a thread checks the should_die variable (after acquiring the mutex, of course). If it should die, it acquires the died_mutex, ups the died count, releases the died_mutex, and dies.
The main initial thread periodically wakes up, checks that the number of threads that have died is less than the number of threads, and goes back to sleep. The main thread dies when all the other threads have checked in.
If the main thread doesn't spawn all the threads itself, a small modification would be to have "threads_alive" instead of "died". threads_alive is incremented when a thread forks, and decremented when the thread dies.
In general, terminating a multithreaded operation cleanly is a pain in the butt, and besides special cases where you can use things like the semaphore barrier design pattern, this is the best I've heard of. I'd love to hear it if you find a better, cleaner one.
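For what it's worth, a minimal sketch of that scheme, using pthread mutexes for the mutual exclusion described above (names follow the description; each thread function is assumed to return once this reports nonzero):

#include <pthread.h>

int should_die = 0;   /* set to 1 to ask every thread to exit */
int died = 0;         /* number of threads that have terminated */
pthread_mutex_t should_die_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t died_mutex       = PTHREAD_MUTEX_INITIALIZER;

/* Called by each worker at a safe point in its loop. */
int check_should_die(void)
{
    pthread_mutex_lock(&should_die_mutex);
    int die = should_die;
    pthread_mutex_unlock(&should_die_mutex);
    if (die) {
        pthread_mutex_lock(&died_mutex);
        died++;                  /* check in before dying */
        pthread_mutex_unlock(&died_mutex);
    }
    return die;
}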
In general, I have threads waiting on a set of events and one of those events is the termination event.
In the main thread, when I have triggered the termination event, I then wait for all the threads to exit.
SIGINT is actually not that difficult to handle and is often used for graceful termination. You need a signal handler and a way to tell all the threads that it's time to stop. One global flag that threads check in their loops and the signal handler sets might do. Same approach works for "on user command" termination, though you need a way to get the input from the terminal - either poll in a dedicated thread, or again, set the terminal to generate a signal for you.
The tricky part is to unblock waiting threads. You have to carefully design the notification protocol of who tells who to stop and what they need to do - put dummy message into a queue, set a flag and signal a cv, etc.
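For the condition-variable case, a minimal sketch of the wake-up side (names are illustrative; each waiter must loop on its own predicate or the stop flag):

#include <pthread.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int stop = 0;

/* Shutdown path: set the flag and wake every waiter. */
void request_stop(void)
{
    pthread_mutex_lock(&m);
    stop = 1;
    pthread_cond_broadcast(&cv);   /* all waiters re-check their predicate */
    pthread_mutex_unlock(&m);
}

/* Waiters loop on their own predicate OR the stop flag, e.g.:
 *   while (!work_available && !stop) pthread_cond_wait(&cv, &m); */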