General Race Condition - c

I am new to C and want to learn about race conditions. I found the following exercise on the internet; it asks you to find the race condition and propose a solution.
My analysis is that the race condition is in the create-thread() method, specifically in the if-else statement: during the check-and-act, another thread could be created or released, so threads_amt would end up wrong.
To avoid the race condition, should the if-else be protected with a mutex, a semaphore, or something similar?
Can anyone correct me if I am wrong, and possibly show me how to implement the mutex?
#define MAXT 255

int threads_amt = 0;

int create-thread() // create a new thread
{
    int tid;
    if (threads_amt == MAXT) return -1;
    else
    {
        threads_amt++;
        return tid;
    }
}

void release-thread()
{
    /* release thread resources */
    --threads_amt;
}

Yeah, the race condition here happens because you have no guarantee that the check and the manipulation of threads_amt happen without another thread running in between.
Three solutions off the top of my head:
1) Force mutual exclusion to that part of code using a binary semaphore (or mutex) to protect the if-else part.
2) Use a semaphore with initial value MAXT, and then, upon calling create_thread (mind, you can't use hyphens in function names!), call "wait()" (depending on the semaphore implementation it may have a different name, such as sem_wait()). After that, create the thread. When calling release_thread(), simply call "signal()" (sem_post() when using semaphore.h). A sketch of this approach appears after the mutex-based example below.
3) This is more of a "hardware" solution: you could assume that you are given an atomic function that performs the entire if-else part, and therefore avoids any race condition problem.
Of these solutions, the "easiest" one (based on the code you already have) is the first one.
Let's use semaphore.h's semaphores:
#include <semaphore.h>

#define MAXT 255

// Global semaphore
sem_t s;
int threads_amt = 0;

int main () {
    ...
    sem_init(&s, 0, 1); // init semaphore (initial value = 1)
    ...
}

int create_thread() // create a new thread
{
    int tid;

    sem_wait(&s);
    if (threads_amt == MAXT) {
        sem_post(&s); // the semaphore is now available
        return -1;
    }
    else
    {
        threads_amt++;
        sem_post(&s); // the semaphore is now available
        return tid;   // (tid would be set by the actual thread-creation code)
    }
}

void release_thread()
{
    /* release thread resources */
    sem_wait(&s);
    --threads_amt;
    sem_post(&s);
}
This should work just fine.
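For completeness, here is a rough sketch of option 2 as well: a counting semaphore whose value tracks how many thread slots are left. The names and the missing thread-creation code are placeholders, so treat this as an outline rather than a drop-in implementation.

#include <semaphore.h>

#define MAXT 255

sem_t slots;   // counts how many thread slots are still free

// somewhere in main(): sem_init(&slots, 0, MAXT);

int create_thread()
{
    int tid = -1;
    sem_wait(&slots);   // blocks while MAXT threads already exist
                        // (sem_trywait() would instead keep the "return -1" behaviour)
    // ... actually create the thread and set tid ...
    return tid;
}

void release_thread()
{
    /* release thread resources */
    sem_post(&slots);   // hand the slot back
}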
I hope this is clear. If it's not, I suggest you study how semaphores work (use the web, or an operating systems book). Also, you mentioned that you are new to C: IMHO you should start with something easier than this; semaphores aren't exactly the next thing to learn after 'hello world' ;-)

The race condition is not in the if() statements.
It is with access to the variable threads_amt, which can be modified and read at the same time by multiple threads.
Essentially, any thread that modifies the variable must have exclusive access to avoid a race condition. That means all code that modifies the variable or reads its value must be synchronised (e.g. grab a mutex first, release it after). Readers don't necessarily need exclusive access with respect to each other (two threads reading at the same time won't affect each other), but writers do, and a value must not be read while another thread is changing it. Such considerations can be opportunities to use synchronisation methods other than a mutex - for example, semaphores.
To use a mutex, it is necessary to create it first (e.g. during program startup). Then grab it when needed, and remember to release it when done. Every function should minimise the time it holds the mutex, since other threads trying to grab it will be forced to wait.
The trick is to make the grabbing and releasing of the mutex unconditional, wherever it occurs (i.e. avoid a function that grabs the mutex, being able to return without releasing it). That depends on how you structure each function.
The actual code for implementing depends on which threading library you're using (so you need to read the documentation) but the concepts are the same. All threading libraries have functions for creating, grabbing (or entering), and releasing mutexes, semaphores, etc etc.
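As a concrete illustration of the create/grab/release cycle described above, here is a minimal pthreads sketch protecting a shared counter; the counter name mirrors the question, everything else is illustrative.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // "create" the mutex once
static int threads_amt = 0;                              // the shared data it protects

void increment_count(void)
{
    pthread_mutex_lock(&lock);    // grab: we now have exclusive access
    threads_amt++;                // the read-modify-write is safe here
    pthread_mutex_unlock(&lock);  // release as soon as possible
}

int read_count(void)
{
    int n;
    pthread_mutex_lock(&lock);    // readers also synchronise with writers
    n = threads_amt;
    pthread_mutex_unlock(&lock);
    return n;
}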


How to use sched_yield() properly?

For an assignment, I need to use sched_yield() to synchronize threads. I understand a mutex lock/conditional variables would be much more effective, but I am not allowed to use those.
The only functions we are allowed to use are sched_yield(), pthread_create(), and pthread_join(). We cannot use mutexes, locks, semaphores, or any type of shared variable.
I know sched_yield() is supposed to relinquish the CPU so another thread can run, so it should move the calling thread to the back of the run queue.
The code below is supposed to print 'abc' in order, followed by the newline once all three threads have executed. I put sched_yield() in loops in b() and c() because it wasn't working as I expected, but I'm pretty sure all that does is delay the printing because the loops run so many times, not because sched_yield() is working.
The server it needs to run on has 16 CPUs. I saw somewhere that sched_yield() may immediately assign the thread to a new CPU.
Essentially I'm unsure of how, using only sched_yield(), to synchronize these threads given everything I could find and troubleshoot with online.
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include <sched.h>

void* a(void*);
void* b(void*);
void* c(void*);

int main( void ){
    pthread_t a_id, b_id, c_id;

    pthread_create(&a_id, NULL, a, NULL);
    pthread_create(&b_id, NULL, b, NULL);
    pthread_create(&c_id, NULL, c, NULL);

    pthread_join(a_id, NULL);
    pthread_join(b_id, NULL);
    pthread_join(c_id, NULL);

    printf("\n");
    return 0;
}

void* a(void* ret){
    printf("a");
    return ret;
}

void* b(void* ret){
    for(int i = 0; i < 10; i++){
        sched_yield();
    }
    printf("b");
    return ret;
}

void* c(void* ret){
    for(int i = 0; i < 100; i++){
        sched_yield();
    }
    printf("c");
    return ret;
}
There are 4 cases:
a) the scheduler doesn't use multiplexing (e.g. doesn't use "round robin" but uses "highest priority thread that can run does run", or "earliest deadline first", or ...) and sched_yield() does nothing.
b) the scheduler does use multiplexing in theory, but you have more CPUs than threads so the multiplexing doesn't actually happen, and sched_yield() does nothing. Note: With 16 CPUs and 2 threads, this is likely what you'd get for "default scheduling policy" on an OS like Linux - the sched_yield() just does a "Hrm, no other thread I could use this CPU for, so I guess the calling thread can keep using the same CPU!").
c) the scheduler does use multiplexing and there's more threads than CPUs, but to improve performance (avoid task switches) the scheduler designer decided that sched_yield() does nothing.
d) sched_yield() does cause a task switch (yielding the CPU to some other task), but that alone is not enough to do any kind of synchronization (e.g. you'd need an atomic variable or something for the actual synchronization - maybe like "while( atomic_variable_not_set_by_other_thread ) { sched_yield(); }"). Note that with an atomic variable (introduced in C11) it'd work without sched_yield() - the sched_yield() (if it does anything) merely makes the busy waiting less awful/wasteful.
"Essentially I'm unsure of how, using only sched_yield(), to synchronize these threads given everything I could find and troubleshoot with online."
That would be because sched_yield() is not well suited to the task. As I wrote in comments, sched_yield() is about scheduling, not synchronization. There is a relationship between the two, in the sense that synchronization events affect which threads are eligible to run, but that goes in the wrong direction for your needs.
You are probably looking at the problem from the wrong end. You need each of your threads to wait to execute until it is their turn, and for them to do that, they need some mechanism to convey information among them about whose turn it is. There are several alternatives for that, but if "only sched_yield()" is taken to mean that no library functions other than sched_yield() may be used for that purpose then a shared variable seems the expected choice. The starting point should therefore be how you could use a shared variable to make the threads take turns in the appropriate order.
Flawed starting point
Here is a naive approach that might spring immediately to mind:
/* FLAWED */
void *b(void *data){
    char *whose_turn = data;

    while (*whose_turn != 'b') {
        // nothing?
    }
    printf("b");
    *whose_turn = 'c';
    return NULL;
}
That is, the thread executes a busy loop, monitoring the shared variable to await it taking a value signifying that the thread should proceed. When it has done its work, the thread modifies the variable to indicate that the next thread may proceed. But there are several problems with that, among them:
1) Supposing that at least one other thread writes to the object designated by *whose_turn, the program contains a data race, and therefore its behavior is undefined. As a practical matter, a thread that once entered the body of the loop in that function might loop infinitely, notwithstanding any action by other threads.
2) Without making additional assumptions about thread scheduling, such as a fairness policy, it is not safe to assume that the thread that will make the needed modification to the shared variable will be scheduled in bounded time.
3) While a thread is executing the loop in that function, it prevents any other thread from executing on the same core, yet it cannot make progress until some other thread takes action. To the extent that we can assume preemptive thread scheduling, this is an efficiency issue and contributory to (2). However, if we assume neither preemptive thread scheduling nor the threads being scheduled each on a separate core, then this is an invitation to deadlock.
Possible improvements
The conventional and most appropriate way to do that in a pthreads program is with the use of a mutex and condition variable. Properly implemented, that resolves the data race (issue 1) and it ensures that other threads get a chance to run (issue 3). If that leaves no other threads eligible to run besides the one that will modify the shared variable then it also addresses issue 2, to the extent that the scheduler is assumed to grant any CPU to the process at all.
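For reference, the mutex + condition variable version of b() could look roughly like this (turn_lock and turn_changed are illustrative names; every thread would have to follow the same locking protocol, and the assignment forbids this approach):

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t turn_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  turn_changed = PTHREAD_COND_INITIALIZER;

void *b(void *data)
{
    char *whose_turn = data;            // shared variable, as in the question

    pthread_mutex_lock(&turn_lock);
    while (*whose_turn != 'b')          // re-check the condition after every wakeup
        pthread_cond_wait(&turn_changed, &turn_lock);
    printf("b");
    *whose_turn = 'c';                  // pass the turn to the next thread
    pthread_cond_broadcast(&turn_changed);
    pthread_mutex_unlock(&turn_lock);
    return NULL;
}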
But you are forbidden to do that, so what else is available? Well, you could make the shared variable _Atomic. That would resolve the data race, and in practice it would likely be sufficient for the wanted thread ordering. In principle, however, it does not resolve issue 3, and as a practical matter, it does not use sched_yield(). Also, all that busy-looping is wasteful.
But wait! You have a clue in that you are told to use sched_yield(). What could that do for you? Suppose you insert a call to sched_yield() in the body of the busy loop:
/* (A bit) better */
void* b(void *data){
    char *whose_turn = data;

    while (*whose_turn != 'b') {
        sched_yield();
    }
    printf("b");
    *whose_turn = 'c';
    return NULL;
}
That resolves issues 2 and 3, explicitly affording the possibility for other threads to run and putting the calling thread at the tail of the scheduler's thread list. Formally, it does not resolve issue 1 because sched_yield() has no documented effect on memory ordering, but in practice, I don't think it can be implemented without a (full) memory barrier. If you are allowed to use atomic objects then combining an atomic shared variable with sched_yield() would tick all three boxes. Even then, however, there would still be a bunch of wasteful busy-looping.
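If atomic objects are allowed, the combination could look like the following sketch of b(); it only illustrates pairing a C11 _Atomic object with sched_yield() (the object that data points to would have to be declared _Atomic char by whoever creates the threads, and it still busy-waits).

#include <stdio.h>
#include <sched.h>
#include <stdatomic.h>

void *b(void *data)
{
    _Atomic char *whose_turn = data;     // points at an _Atomic char shared by all threads

    while (atomic_load(whose_turn) != 'b')
        sched_yield();                   // let somebody else run while we wait
    printf("b");
    atomic_store(whose_turn, 'c');       // hand the turn to the next thread
    return NULL;
}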
Final remarks
Note well that pthread_join() is a synchronization function, thus, as I understand the task, you may not use it to ensure that the main thread's output is printed last.
Note also that I have not spoken to how the main() function would need to be modified to support the approach I have suggested. Changes would be needed for that, and they are left as an exercise.

Is a mutex lock used inside a shared function or outside of it

Assume sharedFnc is a function that is used between multiple threads:
void sharedFnc(){
    // do some thread safe work here
}
Which one is the proper way of using a Mutex here?
A)
void sharedFnc(){
    // do some thread safe work here
}

int main(){
    ...
    pthread_mutex_lock(&lock);
    sharedFnc();
    pthread_mutex_unlock(&lock);
    ...
}
Or B)
void sharedFnc(){
    pthread_mutex_lock(&lock);
    // do some thread safe work here
    pthread_mutex_unlock(&lock);
}

int main(){
    ...
    sharedFnc();
    ...
}
Let's consider two extremes:
In the first extreme, you can't even tell what lock you need to acquire until you're inside the function. Maybe the function locates an object and operates on it and the lock is per-object. So how can the caller know what lock to hold?
And maybe the code needs to do some work while holding the lock and some work while not holding the lock. Maybe it needs to release the lock while waiting for something.
In this extreme, the lock must be acquired and released inside the function.
In the opposite extreme, the function might not even have any idea it's used by multiple threads. It may have no idea what lock its data is associated with. Maybe it's called on different data at different times and that data is protected by different locks.
Maybe its caller needs to call several different functions while holding the same lock. Maybe this function reports some information on which the thread will decide to call some other function and it's critical that state not be changed by another thread between those two functions.
In this extreme, the caller must acquire and release the lock.
Between these two extremes, it's a judgment call based on which extreme the situation is closer to. Also, those aren't the only two options available. There are "in-between" options as well.
There's something to be said for this pattern:
// Only call this with `lock` locked.
//
static sometype foofunc_locked(...) {
    ...
}

sometype foofunc(...) {
    pthread_mutex_lock(&lock);
    sometype rVal = foofunc_locked(...);
    pthread_mutex_unlock(&lock);
    return rVal;
}
This separates the responsibility for locking and unlocking the mutex from whatever other responsibilities are embodied by foofunc_locked(...).
One reason you would want to do that is that it's very easy to see whether every possible invocation of foofunc() unlocks the lock before it returns. That might not be the case if the locking and unlocking were mingled with loops, switch statements, nested if statements, returns from the middle, etc.
If the lock is inside the function, you better make damn sure there's no recursion involved, especially no indirect recursion.
Another problem with the lock being inside the function is loops, where you have two big problems (see the sketch after this list):
1) Performance. Every iteration you're releasing and reacquiring your locks. That can be expensive, especially in OSes like Linux which don't have light locks like critical sections.
2) Lock semantics. If there's work to be done inside the loop, but outside your function, you can't acquire the lock once per iteration, because it will deadlock your function. So you have to piecemeal your loop body even more: call your function (acquire-release), then manually acquire the lock, do the extra work, and manually release it before the iteration ends. And you have absolutely no guarantee of what happens between your function releasing the lock and you acquiring it.
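One common way to address both points, assuming the foofunc()/foofunc_locked() split shown earlier, is to expose the _locked variant so a caller can take the lock once around an entire loop. The function names here are purely illustrative:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Caller must already hold `lock`. */
void update_locked(int *shared, int delta) {
    *shared += delta;
}

/* Convenience wrapper for single, standalone calls. */
void update(int *shared, int delta) {
    pthread_mutex_lock(&lock);
    update_locked(shared, delta);
    pthread_mutex_unlock(&lock);
}

/* Batch variant: the lock is taken once for the whole loop. */
void update_many(int *shared, const int *deltas, int n) {
    pthread_mutex_lock(&lock);
    for (int i = 0; i < n; i++)
        update_locked(shared, deltas[i]);
    pthread_mutex_unlock(&lock);
}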

Recursive Multithreading in C

I'm creating a function that searches through a directory, prints out files, and when it runs into a folder, a new thread is created to run through that folder and do the same thing.
It makes sense to me to use recursion then as follows:
pthread_t tid[500];
int i = 0;

void *search(void *dir)
{
    struct dirent *dp;
    DIR *df;
    df = opendir(dir);
    char curFile[100];

    while ((dp = readdir(df)) != NULL)
    {
        sprintf(curFile, "%s/%s", dir, dp->d_name);
        if (isADirectory(curFile))
        {
            pthread_create(&tid[i], NULL, &search, &curFile);
            i++;
        }
        else
        {
            printf("%s\n", curFile);
        }
    }
    pthread_join(tid[i], NULL);
    return 0;
}
When I do this, however, the function ends up trying to access directories that don't actually exist. Initially I had pthread_join() directly after pthread_create(), which worked, but I don't know if you can count that as multithreading since each thread waits for its worker thread to exit before doing anything.
Is the recursive aspect of this problem even possible, or is it necessary for a new thread to call a different function other than itself?
I haven't dealt with multithreading in a while, but if memory serves, threads share resources. That means (in your example) every new thread you make accesses the same variable "i". If those threads only read variable "i" there would be no problem whatsoever (every thread keeps reading ... i = 2, wohoo :D).
But issues arise when threads share resources that are being read and written on.
i = 2
i++
// there are many threads running this code
// and "i" is shared among them, are you sure i = 3?
The problem of reading and writing shared resources is solved with thread synchronization. I recommend reading/googling about it, since it's too big a topic to cover in a single answer.
P.S. I pointed out variable "i" in your code but there may be more such resources since your code doesn't display any attempt at thread synchronization.
Consider your while loop. Inside it you have:
sprintf(curFile, "%s/%s",dir,dp->d_name);
and
pthread_create(&tid[i], NULL, &search, &curFile);
So, you mutate the contents of curFile inside the loop, and you also create a thread to which you are trying to pass the current contents of curFile. This is a spectacular race hazard: there is no guarantee that the new thread will see the intended contents of curFile, since it may have changed in the meantime. You need to duplicate the string and pass the new thread a copy which won't be mutated by the calling thread. The new thread therefore also has to be responsible for deallocating the copy, which means either that the search method does exactly that, or that you add a second method.
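A sketch of that duplication, assuming the isADirectory() helper from the question; error handling and joining/collecting the child threads are omitted, and the very first call would also have to receive a strdup()'d copy of the starting path:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include <dirent.h>
#include <pthread.h>

int isADirectory(const char *path);   /* helper from the question */

void *search(void *arg)
{
    char *dir = arg;                   /* this thread owns the string now */
    DIR *df = opendir(dir);
    struct dirent *dp;
    char curFile[PATH_MAX];

    if (df != NULL) {
        while ((dp = readdir(df)) != NULL) {
            snprintf(curFile, sizeof curFile, "%s/%s", dir, dp->d_name);
            if (isADirectory(curFile)) {
                char *copy = strdup(curFile);   /* private copy for the child */
                if (copy != NULL) {
                    pthread_t child;
                    pthread_create(&child, NULL, search, copy);
                    /* a real version would store `child` and join it later */
                }
            } else {
                printf("%s\n", curFile);
            }
        }
        closedir(df);
    }
    free(dir);                         /* release our own copy */
    return NULL;
}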
You have another race condition in using i and tid in all threads. As I have suggested in the comment on your question, I think these variables should be method local.
In general I suggest that you read on thread safety and learn about data race hazards before you attempt to use threads. It is usually best to avoid the use of threads unless you really need the extra performance.

Is there a mechanism to try to lock one of several mutexes?

How can a program try to lock multiple mutexes at the same time, and know which mutex it ended up locking? Essentially, what I am looking for is an equivalent of select(), but for mutexes. Does such a thing exist? If not, are there any libraries that implement it?
I'm (almost) certain that this kind of functionality ought to be implemented with a monitor (and condition variables using signal/wait/broadcast), but I think you can solve your problem with a single additional semaphore.
Assuming all your mutex objects begin in the "locked" state, create a semaphore with initial value 0. Whenever a mutex object is unlocked, increment (V) the semaphore. Then, implement select() like this:
// grab a mutex if possible
Mutex select(Semaphore s, Mutex[] m) {
    P(s); // wait for the semaphore
    for (Mutex toTry : m) {
        boolean result = try_lock(toTry);
        if (result) return toTry;
    }
}
Essentially, the semaphore keeps track of the number of available locks, so whenever P(s) stops blocking, there must be at least one available mutex (assuming you correctly increment the semaphore when a mutex becomes available!)
I haven't attempted to prove this code correct nor have I tested it... but I don't see any reason why it shouldn't work.
Once again, you likely want to use a monitor!
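Translated into POSIX terms, the same idea could look roughly like the sketch below. The names are illustrative; `avail` must be sem_post()ed by whoever unlocks one of the mutexes, and the retry branch only exists as a safety net in case another selector grabs the mutex first.

#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

/* Returns the index of a mutex we managed to lock. */
int select_mutex(sem_t *avail, pthread_mutex_t *m, size_t n)
{
    for (;;) {
        sem_wait(avail);                       /* at least one mutex should be free */
        for (size_t i = 0; i < n; i++)
            if (pthread_mutex_trylock(&m[i]) == 0)
                return (int)i;
        /* Lost a race with another selector; give the count back and retry. */
        sem_post(avail);
    }
}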

Solve race condition during semaphore initialization

I have an array that must be shared between threads, protected by a semaphore. I've put the initialization code inside a function that can be called multiple times, a "constructor", as follows:
#include <stdbool.h>   // for bool
#include <semaphore.h>

sem_t global_mutex;
char global_array[N];  // Protected with global_mutex

struct my_struct *new_my_struct(){
    static bool is_init = false;            // This will be initialized only once, right?

    if (!is_init){                          // 1
        sem_init(&global_mutex, 0, 1);      // 2
        sem_wait(&global_mutex);            // 3
        if (!is_init){                      // 4
            is_init = true;                 // 5
            ... initialize global_array ... // 6
        }
        sem_post(&global_mutex);            // 7
    }
    ... proceed on the create and return a my_struct pointer ...
}
In an ideal world, a thread would run from 1 to 7, initialize the array and exit the critical region. Even if another thread had stopped at 2, the test at 4 would be false and the array wouldn't be overwritten. I haven't thought much about what would happen if a thread paused at 1 and later reinitialized the semaphore at 2, but I believe it isn't much of a concern as long as is_init is set to true by the first thread to run!
Now, there is a race condition if a thread stops at 4, and another one runs from the beginning to completion, initializing and populating global_array. When the thread stopped at 4 resumes, it will reinitialize the array and wipe out the state stored by the first thread.
I would like to know if there is any way to avoid that race condition (maybe a clever use of static?), or if I should separate the initialization code from the constructor and call it from the main thread, when there's no concurrency.
This code is in use and I haven't been hit by the race condition yet. However, since I know it's possible, I'd like to correct it.
If the real use of the semaphore is as a mutex, just use a pthread_mutex_t. These can be initialized statically, so your problem would disappear.
The syntax would be
pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
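With that, the constructor from the question could be reduced to something like the following sketch: the check and the initialization now happen entirely under the statically initialized mutex, so the double-checked is_init dance is no longer needed (the ellipses are placeholders, as in the question).

#include <stdbool.h>
#include <pthread.h>

pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
char global_array[N];   // N as in the question; still protected by global_mutex

struct my_struct *new_my_struct(){
    static bool is_init = false;

    pthread_mutex_lock(&global_mutex);
    if (!is_init){
        is_init = true;
        /* ... initialize global_array ... */
    }
    pthread_mutex_unlock(&global_mutex);

    /* ... proceed to create and return a my_struct pointer ... */
}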
If you really need a dynamic initialization of a global object, have a look into pthread_once. This is the type (pthread_once_t) and function that is foreseen by POSIX for such a task.
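A sketch of what the pthread_once route could look like, keeping the skeleton from the question (init_globals() is a hypothetical helper that fills global_array):

#include <pthread.h>

static pthread_once_t global_init = PTHREAD_ONCE_INIT;

static void init_globals(void)
{
    /* ... initialize global_array (and anything else) exactly once ... */
}

struct my_struct *new_my_struct(){
    pthread_once(&global_init, init_globals); // safe to call from every thread
    /* ... proceed to create and return a my_struct pointer ... */
}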
There are a few ways to do thread-safe lazy initialization, but this isn't one of them.
pthread_once is one way, and using a global mutex that's actually a mutex (initialized statically) to synchronize the initialization is another. Implementations might guarantee thread-safe initialization of static local variables, but don't have to (at least, they didn't prior to C11 and I haven't checked that).
However you synchronize the actual initialization, though, double-checked locking is not guaranteed to work in C or in Posix. It's a data race to check a flag in one thread, that was set in another thread, without some kind of synchronization in both threads. The implementation of pthread_once should do its best to be fast in the common case that the initialization has already been done. If your implementation guarantees thread-safe intialization of function-scoped static variables, then that will also do its best. Unless you really know what you're doing (e.g. you're implementing pthread_once yourself for some new system), use one of those in preference to rolling your own attempt to avoid costly locking in the common case.
