Recreate sem_wait() with mutex? - c

Is there any way I can let up to 10 threads into the same critical section at once?
Something like sem_wait() with an initial value of 10.
Edit:
Found this. It is an implementation of semaphores using mutexes and condition variables.
typedef struct {
    int value, wakeups;    /* value < 0 means threads are waiting; wakeups counts pending signals */
    Mutex *mutex;
    Cond *cond;
} Semaphore;

// SEMAPHORE
Semaphore *make_semaphore (int value)
{
    Semaphore *semaphore = check_malloc (sizeof(Semaphore));
    semaphore->value = value;
    semaphore->wakeups = 0;
    semaphore->mutex = make_mutex ();
    semaphore->cond = make_cond ();
    return semaphore;
}

void sem_wait (Semaphore *semaphore)
{
    mutex_lock (semaphore->mutex);
    semaphore->value--;
    if (semaphore->value < 0) {
        do {
            cond_wait (semaphore->cond, semaphore->mutex);
        } while (semaphore->wakeups < 1);   /* guards against spurious wakeups */
        semaphore->wakeups--;
    }
    mutex_unlock (semaphore->mutex);
}

void sem_signal (Semaphore *semaphore)
{
    mutex_lock (semaphore->mutex);
    semaphore->value++;
    if (semaphore->value <= 0) {
        semaphore->wakeups++;
        cond_signal (semaphore->cond);
    }
    mutex_unlock (semaphore->mutex);
}

See if that helps.
From the book Beginning Linux Programming: a counting semaphore can take a wider range of values. Normally, semaphores are used to protect a piece of code so that only one thread of execution can run it at any one time; for that job a binary semaphore is needed. Occasionally, you want to permit a limited number of threads to execute a given piece of code; for this you would use a counting semaphore.
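If POSIX semaphores are an option, <semaphore.h> provides exactly this kind of counting semaphore. A minimal sketch (the slots variable and worker function are just illustrative names, not from the book):

#include <semaphore.h>
#include <pthread.h>

static sem_t slots;                /* counting semaphore limiting concurrency */

static void *worker(void *arg)
{
    sem_wait(&slots);              /* blocks once 10 threads are already inside */
    /* ... at most 10 threads execute this section at the same time ... */
    sem_post(&slots);              /* release the slot for the next waiting thread */
    return NULL;
}

int main(void)
{
    sem_init(&slots, 0, 10);       /* pshared = 0 (threads of one process), initial value 10 */
    /* ... create the worker threads with pthread_create() and join them ... */
    sem_destroy(&slots);
    return 0;
}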

No. A mutex is a simple lock with two states: locked and unlocked. When it is created, a mutex is unlocked. A mutex is a mutual-exclusion lock: only one thread can hold it at a time.
You can, however, allow ten threads into a section by combining a mutex, a condition, and a global counter variable (the way counting semaphores are implemented).
Read here: Implementing a Counting Semaphore
One available implementation: C code, Make your own semaphore
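As a usage sketch of the Semaphore shown in the question's edit, limiting a section to ten concurrent threads could look like this (note that its sem_wait/sem_signal names shadow the POSIX functions of the same name; the worker and slots names are just for illustration):

/* Relies on the Semaphore type and the make_semaphore(), sem_wait() and
   sem_signal() wrappers from the question above. */
Semaphore *slots;                  /* created once, e.g. in main(): slots = make_semaphore(10); */

void *worker(void *arg)
{
    sem_wait(slots);               /* blocks once ten threads are already inside */
    /* ... at most ten threads execute this section at the same time ... */
    sem_signal(slots);             /* frees a slot for the next waiting thread */
    return NULL;
}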

Related

Can pthread_cond_signal() make more than one thread wake up?

I'm studying condition variables in Pthreads. While reading the explanation of pthread_cond_signal(), I see the following:
The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond).
Until now I thought pthread_cond_signal() would wake up only one thread at a time. But the quoted explanation says at least one. What does it mean? Can it make more than one thread wake up? If yes, why is there pthread_cond_broadcast()?
In passing, I think the following code, taken from the UNIX Systems Programming book by Robbins, is also related to my question. Is there any reason for the author's use of pthread_cond_broadcast() instead of pthread_cond_signal() in the waitbarrier function? As a minor point, why is the !berror check needed as part of the predicate? When I try changing either of them, I cannot see any difference.
/*
The program implements a thread-safe barrier by using condition variables. The limit
variable specifies how many threads must arrive at the barrier (execute the
waitbarrier) before the threads are released from the barrier.
The count variable specifies how many threads are currently waiting at the barrier.
Both variables are declared with the static attribute to force access through
initbarrier and waitbarrier. If successful, the initbarrier and waitbarrier
functions return 0. If unsuccessful, these functions return a nonzero error code.
*/
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_cond_t bcond = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t bmutex = PTHREAD_MUTEX_INITIALIZER;
static int count = 0;
static int limit = 0;

int initbarrier(int n) {                       /* initialize the barrier to be size n */
    int error;
    if (error = pthread_mutex_lock(&bmutex))   /* couldn't lock, give up */
        return error;
    if (limit != 0) {                          /* barrier can only be initialized once */
        pthread_mutex_unlock(&bmutex);
        return EINVAL;
    }
    limit = n;
    return pthread_mutex_unlock(&bmutex);
}

int waitbarrier(void) {                        /* wait at the barrier until all n threads arrive */
    int berror = 0;
    int error;
    if (error = pthread_mutex_lock(&bmutex))   /* couldn't lock, give up */
        return error;
    if (limit <= 0) {                          /* make sure barrier initialized */
        pthread_mutex_unlock(&bmutex);
        return EINVAL;
    }
    count++;
    while ((count < limit) && !berror)
        berror = pthread_cond_wait(&bcond, &bmutex);
    if (!berror) {
        fprintf(stderr, "soner %d\n", (int)pthread_self());
        berror = pthread_cond_broadcast(&bcond);   /* wake up everyone */
    }
    error = pthread_mutex_unlock(&bmutex);
    if (berror)
        return berror;
    return error;
}

/* ARGSUSED */
static void *printthread(void *arg) {
    fprintf(stderr, "This is the first print of thread %d\n", (int)pthread_self());
    waitbarrier();
    fprintf(stderr, "This is the second print of thread %d\n", (int)pthread_self());
    return NULL;
}

int main(void) {
    pthread_t t0, t1, t2;
    if (initbarrier(3)) {
        fprintf(stderr, "Error initilizing barrier\n");
        return 1;
    }
    if (pthread_create(&t0, NULL, printthread, NULL))
        fprintf(stderr, "Error creating thread 0.\n");
    if (pthread_create(&t1, NULL, printthread, NULL))
        fprintf(stderr, "Error creating thread 1.\n");
    if (pthread_create(&t2, NULL, printthread, NULL))
        fprintf(stderr, "Error creating thread 2.\n");
    if (pthread_join(t0, NULL))
        fprintf(stderr, "Error joining thread 0.\n");
    if (pthread_join(t1, NULL))
        fprintf(stderr, "Error joining thread 1.\n");
    if (pthread_join(t2, NULL))
        fprintf(stderr, "Error joining thread 2.\n");
    fprintf(stderr, "All threads complete.\n");
    return 0;
}
Due to spurious wake-ups, pthread_cond_signal() can end up waking more than one thread.
Look for the word "spurious" in pthread_cond_wait.c in glibc.
In waitbarrier all threads must be woken up once they have all arrived at that point, hence it uses pthread_cond_broadcast().
Can [pthread_cond_signal()] make more than one thread wake up?
That's not guaranteed. On some operating systems, on some hardware platforms, under some circumstances it could wake more than one thread. It is allowed to wake more than one thread because that gives the implementer more freedom to make it work in the most efficient way possible for any given hardware and OS.
It must wake at least one waiting thread, because otherwise, what would be the point of calling it?
But if your application needs a signal that is guaranteed to wake all of the waiting threads, then that is what pthread_cond_broadcast() is for.
Making efficient use of a multi-processor system is hard. https://www.e-reading.club/bookreader.php/134637/Herlihy,Shavit-_The_art_of_multiprocessor_programming.pdf
Most programming language and library standards allow similar freedoms in the behavior of multi-threaded programs, for the same reason: to allow programs to achieve high performance on a variety of different platforms.
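This is also why a condition wait is normally wrapped in a predicate loop: whether the implementation wakes one waiter or several, only the threads whose predicate actually holds proceed. A minimal sketch of that pattern (the items counter and the function names are made up for illustration):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int items = 0;

void consume_one(void)
{
    pthread_mutex_lock(&m);
    while (items == 0)                 /* re-check: signal wakes "at least one" waiter */
        pthread_cond_wait(&c, &m);
    items--;                           /* only a thread that finds work proceeds */
    pthread_mutex_unlock(&m);
}

void produce_one(void)
{
    pthread_mutex_lock(&m);
    items++;
    pthread_cond_signal(&c);           /* one waiter is enough here */
    /* pthread_cond_broadcast(&c) is needed when *all* waiters must re-evaluate, as in the barrier above */
    pthread_mutex_unlock(&m);
}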

Reader/Writer implementation in C

I'm currently learning about concurrency at my university. In this context I have to implement the reader/writer problem in C, and I think I'm on the right track.
My thought on the problem is that we need two locks, rd_lock and wr_lock. When a writer thread wants to change our global variable, it tries to grab both locks, writes to the global and unlocks. When a reader wants to read the global, it checks whether wr_lock is currently locked and then reads the value; however, one of the reader threads should grab the rd_lock, while the other readers should not care whether rd_lock is locked.
I am not allowed to use the implementation already in the pthread library.
typedef struct counter_st {
    int value;
} counter_t;

counter_t * counter;
pthread_t * threads;
int readers_tnum;
int writers_tnum;
pthread_mutex_t rd_lock;
pthread_mutex_t wr_lock;

void * reader_thread() {
    while(true) {
        pthread_mutex_lock(&rd_lock);
        pthread_mutex_trylock(&wr_lock);
        int value = counter->value;
        printf("%d\n", value);
        pthread_mutex_unlock(&rd_lock);
    }
}

void * writer_thread() {
    while(true) {
        pthread_mutex_lock(&wr_lock);
        pthread_mutex_lock(&rd_lock);
        // TODO: increment value of counter->value here.
        counter->value += 1;
        pthread_mutex_unlock(&rd_lock);
        pthread_mutex_unlock(&wr_lock);
    }
}

int main(int argc, char **args) {
    readers_tnum = atoi(args[1]);
    writers_tnum = atoi(args[2]);
    pthread_mutex_init(&rd_lock, 0);
    pthread_mutex_init(&wr_lock, 0);
    // Initialize our global variable
    counter = malloc(sizeof(counter_t));
    counter->value = 0;
    pthread_t * threads = malloc((readers_tnum + writers_tnum) * sizeof(pthread_t));
    int started_threads = 0;
    // Spawn reader threads
    for(int i = 0; i < readers_tnum; i++) {
        int code = pthread_create(&threads[started_threads], NULL, reader_thread, NULL);
        if (code != 0) {
            printf("Could not spawn a thread.");
            exit(-1);
        } else {
            started_threads++;
        }
    }
    // Spawn writer threads
    for(int i = 0; i < writers_tnum; i++) {
        int code = pthread_create(&threads[started_threads], NULL, writer_thread, NULL);
        if (code != 0) {
            printf("Could not spawn a thread.");
            exit(-1);
        } else {
            started_threads++;
        }
    }
}
Currently it just prints a lot of zeroes when run with 1 reader and 1 writer, which means that it never actually executes the code in the writer thread. I know that this is not going to work as intended with multiple readers; however, I don't understand what is wrong when running it with one of each.
Don't think of the locks as "reader lock" and "writer lock".
Because you need to allow multiple concurrent readers, readers cannot hold a mutex. (If they do, they are serialized; only one can hold a mutex at the same time.) They can take one for a short duration (before they begin the access, and after they end the access), to update state, but that's it.
Split the timeline for having a rwlock into three parts: "grab rwlock", "do work", "release rwlock".
For example, you could use one mutex, one condition variable, and a counter. The counter holds the number of active readers. The condition variable is signaled on by the last reader, and by writers just before they release the mutex, to wake up a waiting writer. The mutex protects both, and is held by writers for the whole duration of their write operation.
So, in pseudocode, you might have
Function rwlock_rdlock:
    Take mutex
    Increment counter
    Release mutex
End Function

Function rwlock_rdunlock:
    Take mutex
    Decrement counter
    If counter == 0, Then:
        Signal_on cond
    End If
    Release mutex
End Function

Function rwlock_wrlock:
    Take mutex
    While counter > 0:
        Wait_on cond
    End While
End Function

Function rwlock_wrunlock:
    Signal_on cond
    Release mutex
End Function
Remember that whenever you wait on a condition variable, the mutex is atomically released for the duration of the wait, and automatically grabbed when the thread wakes up. So, for waiting on a condition variable, a thread will have the mutex both before and after the wait, but not during the wait itself.
Now, the above approach is not the only one you might implement.
In particular, you might note that in the above scheme there is a different "unlock" operation you must use, depending on whether you took a read or a write lock on the rwlock. In the POSIX pthread_rwlock_t implementation, there is just one pthread_rwlock_unlock().
Whatever scheme you design, it is important to examine whether it works right in all situations: a lone read-locker, a lone write-locker, several read-lockers, several write-lockers, a lone write-locker and one read-locker, a lone write-locker and several read-lockers, several write-lockers and a lone read-locker, and several read- and write-lockers.
For example, let's consider the case when there are several active readers, and a writer wants to write-lock the rwlock.
The writer grabs the mutex. It then notices that the counter is nonzero, so it starts waiting on the condition variable. When the last reader -- note how the order of the readers exiting does not matter, since a simple counter is used! -- unlocks its readlock on the rwlock, it signals on the condition variable, which wakes up the writer. The writer then grabs the mutex, sees the counter is zero, and proceeds to do its work. During that time, the mutex is held by the writer, so all new readers will block, until the writer releases the mutex. Because the writer will also signal on the condition variable when it releases the mutex, it is a race between other waiting writers and waiting readers, who gets to go next.
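For reference, here is a minimal C sketch of that scheme (the my_rwlock_* names are made up for illustration; this is one possible translation of the pseudocode, ignoring error checking and writer starvation):

#include <pthread.h>

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             readers;            /* number of active readers */
} my_rwlock_t;

void my_rwlock_init(my_rwlock_t *rw) {
    pthread_mutex_init(&rw->mutex, NULL);
    pthread_cond_init(&rw->cond, NULL);
    rw->readers = 0;
}

void my_rwlock_rdlock(my_rwlock_t *rw) {
    pthread_mutex_lock(&rw->mutex);     /* blocks while a writer holds the mutex */
    rw->readers++;
    pthread_mutex_unlock(&rw->mutex);
}

void my_rwlock_rdunlock(my_rwlock_t *rw) {
    pthread_mutex_lock(&rw->mutex);
    if (--rw->readers == 0)
        pthread_cond_signal(&rw->cond); /* last reader wakes a waiting writer */
    pthread_mutex_unlock(&rw->mutex);
}

void my_rwlock_wrlock(my_rwlock_t *rw) {
    pthread_mutex_lock(&rw->mutex);     /* keep the mutex for the whole write */
    while (rw->readers > 0)
        pthread_cond_wait(&rw->cond, &rw->mutex);
}

void my_rwlock_wrunlock(my_rwlock_t *rw) {
    pthread_cond_signal(&rw->cond);     /* wake a writer waiting on the counter */
    pthread_mutex_unlock(&rw->mutex);
}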

Unexpected behavior when using condition variables (c, gcc)

I am trying to learn how to use condition variables properly in C.
As an exercise for myself I am trying to make a small program with 2 threads that print "Ping" followed by "Pong" in an endless loop.
I have written a small program:
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

void* T1(){
    printf("thread 1 started\n");
    while(1)
    {
        pthread_mutex_lock(&lock);
        sleep(0.5);
        printf("ping\n");
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        pthread_cond_wait(&cond,&lock);
    }
}

void* T2(){
    printf("thread 2 started\n");
    while(1)
    {
        pthread_cond_wait(&cond,&lock);
        pthread_mutex_lock(&lock);
        sleep(0.5);
        printf("pong\n");
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
}

int main(void)
{
    int i = 1;
    pthread_t t1;
    pthread_t t2;
    printf("main\n");
    pthread_create(&t1,NULL,&T1,NULL);
    pthread_create(&t2,NULL,&T2,NULL);
    while(1){
        sleep(1);
        i++;
    }
    return EXIT_SUCCESS;
}
And when running this program the output I get is:
main
thread 1 started
thread 2 started
ping
Any idea why the program does not execute as expected?
Thanks in advance.
sleep() takes an integer, not a floating-point value, so sleep(0.5) becomes sleep(0). Not sure what sleep(0) does on your system, but it might be one of your problems.
You need to hold the mutex while calling pthread_cond_wait.
Naked condition variables (that is, condition variables that don't indicate that there is a condition to read somewhere else) are almost always wrong. A condition variable indicates that something we are waiting for might be ready to be consumed; it is not a signalling mechanism (not because that's illegal, but because it's pretty hard to get pure signalling right). So in general a condition will look like this:
/* consumer here */
pthread_mutex_lock(&something_mutex);
while (something == 0) {
    pthread_cond_wait(&something_cond, &something_mutex);
}
consume(something);
pthread_mutex_unlock(&something_mutex);

/* ... */

/* producer here. */
pthread_mutex_lock(&something_mutex);
something = 4711;
pthread_cond_signal(&something_cond);
pthread_mutex_unlock(&something_mutex);
It's a bad idea to sleep while holding locks.
T1 and T2 are not valid functions to pass to pthread_create; thread functions are supposed to take a void * argument. Do it right.
You are racing yourself in each thread between cond_signal and cond_wait, so it's not implausible that each thread just signals itself all the time. (Correctly holding the mutex in the calls to pthread_cond_wait may help here, or it may not; that's why I said that getting naked condition variables right is hard, because it is.)
First of all, you should never use sleep() to synchronize threads (use nanosleep() if you need to reduce output speed). You may need (it's a common pattern) a shared variable ready to let each thread know that it can print its message. Before you call pthread_cond_wait() you must acquire the lock, because pthread_cond_wait() blocks on a condition variable: it shall be called with the mutex locked by the calling thread, or undefined behavior results.
Steps are:
1. Acquire the lock
2. Wait in a while loop guarded by a shared variable [*]
3. Do your work
4. Change the value of the shared variable used for synchronization (if you have one) and signal/broadcast that you have finished
5. Release the lock
Steps 4 and 5 can be reversed.
[*] You use pthread_cond_wait() to release the mutex and block the thread on the condition variable. When using condition variables there is always a Boolean predicate, involving shared variables, associated with each condition wait; it is true if the thread should proceed, and it must be rechecked because spurious wakeups may occur. Read more here.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int ready = 0;

void* T1(void* arg){
    printf("thread 1 started\n");
    while(1)
    {
        pthread_mutex_lock(&lock);
        while(ready == 1){
            pthread_cond_wait(&cond,&lock);
        }
        printf("ping\n");
        ready = 1;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

void* T2(void* arg){
    printf("thread 2 started\n");
    while(1)
    {
        pthread_mutex_lock(&lock);
        while(ready == 0){
            pthread_cond_wait(&cond,&lock);
        }
        printf("pong\n");
        ready = 0;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1;
    pthread_t t2;
    printf("main\n");
    pthread_create(&t1,NULL,&T1,NULL);
    pthread_create(&t2,NULL,&T2,NULL);
    pthread_join(t1,NULL);
    pthread_join(t2,NULL);
    return EXIT_SUCCESS;
}
You should also use pthread_join() in main instead of a while(1)

Scheduling two threads in Linux to print a pattern of numbers

I want to print numbers using two threads T1 and T2. T1 should print numbers like 1,2,3,4,5 and then T2 should print 6,7,8,9,10 and again T1 should start and then T2 should follow. It should print from 1....100. I have two questions.
How can I complete the task using threads and one global variable?
How can I schedule threads in desired order in linux?
How can I schedule threads in desired order in linux?
You need to use locking primitives, such as mutexes or condition variables, to influence the scheduling order of threads. Or you have to make your threads work independently of the order.
How can I complete the task using threads and one global variable?
If you are allowed to use only one variable, then you can't use a mutex (it would be a second variable). So the first thing you must do is declare your variable atomic. Otherwise the compiler may optimize your code in such a way that one thread does not see changes made by the other thread; for code as simple as this, it will do so by caching the variable in a register. Use std::atomic_int. You may find advice to use volatile int, but nowadays std::atomic_int is the more direct way to specify what you want.
You can't use mutexes, so you can't make your threads wait. They will be constantly running and wasting CPU, but that seems OK for the task. So you will need to write a spinlock: the threads wait in a loop, constantly checking the value. If value % 10 < 5, the first thread breaks out of the loop and does the incrementing; otherwise the second thread does the job.
Because the question looks like homework I don't show any code samples here; you need to write them yourself.
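That said, a minimal sketch of the spin-waiting approach described above, assuming C11 <stdatomic.h> (atomic_int is the C counterpart of std::atomic_int; the printer function and the turn arithmetic are one possible interpretation, not a reference solution):

#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>

static atomic_int next = 1;            /* the single shared variable: next number to print */

/* id 0 prints 1-5, 11-15, ...; id 1 prints 6-10, 16-20, ... */
static void *printer(void *arg)
{
    int id = *(int *)arg;
    for (;;) {
        int n = atomic_load(&next);
        if (n > 100)
            break;
        if (((n - 1) / 5) % 2 != id)   /* not this thread's block of five yet */
            continue;                  /* spin: busy-wait until it is our turn */
        printf("T%d: %d\n", id + 1, n);
        atomic_store(&next, n + 1);    /* only the thread whose turn it is ever writes */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id0 = 0, id1 = 1;
    pthread_create(&t1, NULL, printer, &id0);
    pthread_create(&t2, NULL, printer, &id1);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}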
1: The easiest way is to use mutexes.
This is a basic implementation with unfair/undefined scheduling:
int counter = 1;
pthread_mutex_t mutex;   // needs to be initialised

void incrementGlobal() {
    for (int i = 0; i < 5; i++) {
        printf("%i\n", counter);
        counter++;
    }
}

T1/T2:
    pthread_mutex_lock(&mutex);
    incrementGlobal();
    pthread_mutex_unlock(&mutex);
2: The correct order can be achieved with condition variables
(but this needs more global variables):
global:
    int num_threads = 2;
    int current_thread_id = 0;
    pthread_cond_t cond;   // needs to be initialised
T1/T2:
    int local_thread_id;   // the ID of the thread
    while (true) {
        pthread_mutex_lock(&mutex);
        while (current_thread_id != local_thread_id) {
            pthread_cond_wait(&cond, &mutex);
        }
        incrementGlobal();
        current_thread_id = (current_thread_id + 1) % num_threads;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&mutex);
    }
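Assembled into a complete program, a runnable sketch might look like this (the worker function, the per-thread id arguments and the stop check at 100 are additions for illustration):

#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 2

static int counter = 1;                 /* next number to print */
static int current_thread_id = 0;       /* whose turn it is */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;

static void incrementGlobal(void)
{
    for (int i = 0; i < 5; i++)
        printf("%d\n", counter++);
}

static void *worker(void *arg)
{
    int local_thread_id = *(int *)arg;  /* the ID of this thread */
    while (1) {
        pthread_mutex_lock(&mutex);
        while (current_thread_id != local_thread_id)
            pthread_cond_wait(&cond, &mutex);
        if (counter > 100) {            /* all of 1..100 printed: pass the turn on and stop */
            current_thread_id = (current_thread_id + 1) % NUM_THREADS;
            pthread_cond_broadcast(&cond);
            pthread_mutex_unlock(&mutex);
            break;
        }
        incrementGlobal();              /* print the next block of five numbers */
        current_thread_id = (current_thread_id + 1) % NUM_THREADS;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    int ids[NUM_THREADS] = {0, 1};
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}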
How can I complete the task using threads and one global variable?
If you are using Linux, you can use the POSIX threading library for C, <pthread.h>. As an alternative, a quite portable and relatively non-intrusive library, well supported by gcc and g++ and (in an old version) by MSVC, is OpenMP. It is not standard C or C++, but OpenMP itself is a standard.
How can I schedule threads in desired order in linux?
To achieve your desired printing, you need a global variable that can be accessed by both threads. Both threads take turns accessing the global variable and performing their operation (increment and print). However, to achieve the desired order, you need a mutex. A mutex is a mutual exclusion semaphore, a special variant of a semaphore that only allows one locker at a time. It can be used when you have an instance of a resource (the global variable in your case) that is shared by two threads. A thread that has locked the mutex has exclusive access to the resource, and after completing its operation it should release the mutex for the other threads.
You can start with threads and mutexes in <pthread.h> from here.
One possible solution to your problem is the program given below. However, I would suggest that you try it yourself first and then look at my solution.
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lock;
int variable = 0;
#define ONE_TIME_INC 5
#define MAX 100

void *thread1(void *arg)
{
    while (1) {
        pthread_mutex_lock(&lock);
        printf("Thread1: \n");
        int i;
        for (i = 0; i < ONE_TIME_INC; i++) {
            if (variable >= MAX)
                goto RETURN;
            printf("\t\t%d\n", ++variable);
        }
        printf("Thread1: Sleeping\n");
        pthread_mutex_unlock(&lock);
        usleep(1000);
    }
RETURN:
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *thread2(void *arg)
{
    while (1) {
        pthread_mutex_lock(&lock);
        printf("Thread2: \n");
        int i;
        for (i = 0; i < ONE_TIME_INC; i++) {
            if (variable >= MAX)
                goto RETURN;
            printf("%d\n", ++variable);
        }
        printf("Thread2: Sleeping\n");
        pthread_mutex_unlock(&lock);
        usleep(1000);
    }
RETURN:
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main()
{
    if (pthread_mutex_init(&lock, NULL) != 0) {
        printf("\n mutex init failed\n");
        return 1;
    }
    pthread_t pthread1, pthread2;
    if (pthread_create(&pthread1, NULL, thread1, NULL))
        return -1;
    if (pthread_create(&pthread2, NULL, thread2, NULL))
        return -1;
    pthread_join(pthread1, NULL);
    pthread_join(pthread2, NULL);
    pthread_mutex_destroy(&lock);
    return 0;
}

Using different mutex locks in threads

I have written the following program to implement two threads in POSIX. There is a global shared variable sum, which is accessed by two different threads simultaneously. I have used mutex lock and unlock inside each thread while accessing the shared variable. I have a question. Here I have used the same mutex (pthread_mutex_lock(&mutex)) inside the two threads. What will happen if I use two different mutex locks and unlocks inside the threads, such as pthread_mutex_lock(&mutex) in thread1 and pthread_mutex_lock(&mutex1) in thread2? I have commented the lines of confusion in the code.
My sample code fragment:
#include<stdio.h>
#include<pthread.h>
#include<stdlib.h>
#include<unistd.h>

pthread_mutex_t mutex=PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutex1=PTHREAD_MUTEX_INITIALIZER;
int sum=0;

void * threadFunc1(void * arg)
{
    int i;
    for(i=1;i<100;i++)
    {
        printf("%s\n",(char*)arg);
        pthread_mutex_lock(&mutex);
        sum++;
        pthread_mutex_unlock(&mutex);
        sleep(1);
    }
    return NULL;
}

void * threadFunc2(void * arg)
{
    int i;
    for(i=1;i<100;i++)
    {
        printf("%s\n",(char*)arg);
        pthread_mutex_lock(&mutex);    //what will happen if I use mutex1 here
        sum--;
        pthread_mutex_unlock(&mutex);  //what will happen if I use mutex1 here
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t thread1;
    pthread_t thread2;
    char * message1 = "i am thread 1";
    char * message2 = "i am thread 2";
    pthread_create(&thread1,NULL,threadFunc1,(void*)message1 );
    pthread_create(&thread2,NULL,threadFunc2,(void*)message2 );
    pthread_join(thread1,NULL);
    pthread_join(thread2,NULL);
    return 0;
}
What is the basic difference between using the same mutex lock and different mutex locks when accessing a shared variable?
The purpose of the mutex is to prevent one thread from accessing the shared variable while another thread is, or might be, modifying it. The semantics of a mutex are that two threads cannot lock the same mutex at the same time. If you use two different mutexes, you don't prevent one thread from accessing the shared variable while another thread is modifying it, since threads can hold different mutexes at the same time. So the code will no longer be guaranteed to work.
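To see why, here is a small demo sketch (hypothetical, not the poster's code): each thread takes a different mutex, so both can be inside their critical sections at once and updates to sum are lost. Replace mutex1 with mutex and the final sum is always 0.

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
static long sum = 0;

static void *inc_thread(void *arg)
{
    for (long i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&mutex);      /* thread 1's "own" lock */
        sum++;
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

static void *dec_thread(void *arg)
{
    for (long i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&mutex1);     /* a different lock: no mutual exclusion with thread 1 */
        sum--;
        pthread_mutex_unlock(&mutex1);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc_thread, NULL);
    pthread_create(&t2, NULL, dec_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("sum = %ld (ends up 0 only by luck)\n", sum);
    return 0;
}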
