#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>

sem_t empty, full, mutex;
#define N 10

void* producerThread(void *arg) {
    int i = 0;
    while (1) {
        sem_wait(&empty);
        sem_wait(&mutex);
        buff[i] = rand();
        i = (i + 1) % N;
        sem_post(&mutex);
        sem_post(&full);
    }
}
void* consumerThread(void *arg) {
    int i = 0;
    while (1) {
        sem_wait(&full);
        sem_wait(&mutex);
        printf("%d\n", buff[i]);
        i = (i + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
    }
}
int main(void) {
    pthread_t producer, consumer;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&producer, NULL, &producerThread, NULL);
    pthread_create(&consumer, NULL, &consumerThread, NULL);
    pthread_join(producer, NULL);
    pthread_join(consumer, NULL);
    sem_destroy(&empty);
    sem_destroy(&full);
    sem_destroy(&mutex);
    return 0;
}
I have the following question: this code is the well-known producer-consumer problem from learning about multi-threading, but I do not understand why we need the additional semaphore (mutex) in this case. Can't we do everything with the full and empty semaphores alone, with no risk whatsoever of the producer producing into a slot the consumer hasn't consumed yet, or vice versa? As far as I know, the mutex just adds extra baggage to the code that isn't necessary. Can someone point out to me why we need 3 semaphores instead of 2?
I've tried running this code on my computer and everything works the same with and without the additional semaphore, so I do not understand why the author chose 3 semaphores in this instance.
The empty and full semaphores take values in the range [0..N], allowing the producer to run ahead of the consumer by up to N elements.
The mutex semaphore only bounces between the values 0 and 1, and enforces a critical section ensuring that only one thread is touching any part of the buffer memory at a time. However, each thread computes its index i privately, and the empty/full handshake already ensures there can be no data race on individual elements of buff, so that critical section is probably overkill.
You don't show the definition of buff. For a sufficiently narrow element type (such as individual bytes), some architectures may exhibit word tearing on concurrent writes to adjacent elements. However, in your example only one thread performs writes, so even in the presence of word tearing, the concurrent reads of adjacent elements are unlikely to observe a problem.
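For reference, a minimal sketch of the two-semaphore version under discussion (my illustration, not the original author's code). It is safe only under the assumptions above: exactly one producer and one consumer, each with its own private index.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

#define N 10
int buff[N];
sem_t empty, full;

void* producer(void *arg) {
    for (int i = 0; ; i = (i + 1) % N) {
        sem_wait(&empty);      /* wait for a free slot */
        buff[i] = rand();      /* only the producer ever writes buff[i] */
        sem_post(&full);       /* publish the slot to the consumer */
    }
}

void* consumer(void *arg) {
    for (int i = 0; ; i = (i + 1) % N) {
        sem_wait(&full);       /* wait for a filled slot */
        printf("%d\n", buff[i]);
        sem_post(&empty);      /* hand the slot back to the producer */
    }
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

With more than one producer (or more than one consumer), two producers could compute the same index and race on the same slot, and that is exactly when the third semaphore earns its keep.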
I came across this problem while learning more about operating systems. In my code, I tried giving the reader priority and it worked, so next I modified it a bit to give the writer priority. When I ran the code, the output was exactly the same, as if the writer did not have priority. Here is the code with comments. I am not sure what I've done wrong, since I modified a lot of the code, yet the output remains the same as if I had not changed it at all.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
/*
This program provides a possible solution for the first readers-writers problem using mutexes and semaphores.
I have used 10 readers and 5 writers to demonstrate the solution. You can always play with these values.
*/
// Semaphore initialization for writer and reader
sem_t wrt;
sem_t rd;
// Mutex 1 blocks other readers, mutex 2 blocks other writers
pthread_mutex_t mutex1;
pthread_mutex_t mutex2;
// Value the writer is changing, we are simply multiplying this value by 2
int cnt = 2;
int numreader = 0;
int numwriter = 0;
void *writer(void *wno)
{
    pthread_mutex_lock(&mutex2);
    numwriter++;
    if (numwriter == 1) {
        sem_wait(&rd);       // first writer blocks new readers
    }
    pthread_mutex_unlock(&mutex2);

    sem_wait(&wrt);
    // Writing Section
    cnt = cnt * 2;
    printf("Writer %d modified cnt to %d\n", *((int *)wno), cnt);
    sem_post(&wrt);

    pthread_mutex_lock(&mutex2);
    numwriter--;
    if (numwriter == 0) {
        sem_post(&rd);       // last writer lets readers back in
    }
    pthread_mutex_unlock(&mutex2);
    return NULL;
}
void *reader(void *rno)
{
    sem_wait(&rd);
    pthread_mutex_lock(&mutex1);
    numreader++;
    if (numreader == 1) {
        sem_wait(&wrt);      // first reader locks out writers
    }
    pthread_mutex_unlock(&mutex1);
    sem_post(&rd);

    // Reading Section
    printf("Reader %d: read cnt as %d\n", *((int *)rno), cnt);

    pthread_mutex_lock(&mutex1);
    numreader--;
    if (numreader == 0) {
        sem_post(&wrt);      // last reader lets writers in
    }
    pthread_mutex_unlock(&mutex1);
    return NULL;
}
int main()
{
    pthread_t read[10], write[5];
    pthread_mutex_init(&mutex1, NULL);
    pthread_mutex_init(&mutex2, NULL);
    sem_init(&wrt, 0, 1);
    sem_init(&rd, 0, 1);

    int a[10] = {1,2,3,4,5,6,7,8,9,10}; // Just used for numbering the writers and readers

    for (int i = 0; i < 5; i++) {
        pthread_create(&write[i], NULL, writer, (void *)&a[i]);
    }
    for (int i = 0; i < 10; i++) {
        pthread_create(&read[i], NULL, reader, (void *)&a[i]);
    }
    for (int i = 0; i < 5; i++) {
        pthread_join(write[i], NULL);
    }
    for (int i = 0; i < 10; i++) {
        pthread_join(read[i], NULL);
    }

    pthread_mutex_destroy(&mutex1);
    pthread_mutex_destroy(&mutex2);
    sem_destroy(&wrt);
    sem_destroy(&rd);
    return 0;
}
Output (the same for both versions; I expected that with writer priority the value would be modified first and only then read):
Alternative Semantics
Much of what you want to do can probably be accomplished with less overhead. For example, in the classic reader-writer problem, readers shouldn’t need to block other readers.
You might be able to replace the reader-writer pattern with a publisher-consumer pattern that manages pointers to blocks of data with acquire-consume memory ordering. You only need locking at all if one thread needs to update the same block of memory after it was originally written.
POSIX and Linux have an implementation of reader-writer locks in the system library, which were designed to avoid starvation. This is most likely the high-level construct you want.
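For illustration, a minimal, hedged usage sketch of those POSIX reader-writer locks applied to the cnt variable from the question (the function names are my own):

#include <pthread.h>
#include <stdio.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int cnt = 2;

void *reader_fn(void *arg) {
    pthread_rwlock_rdlock(&rwlock);  /* shared: many readers may hold this at once */
    printf("read cnt as %d\n", cnt);
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

void *writer_fn(void *arg) {
    pthread_rwlock_wrlock(&rwlock);  /* exclusive: blocks both readers and writers */
    cnt *= 2;
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}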
If you still want to implement your own, one implementation would use a count of current readers, a count of pending writers and a flag that indicates whether a write is in progress. It packs all these values into an atomic bitfield that it updates with a compare-and-swap.
Reader threads would retrieve the value, check whether there are any starving writers waiting, and if not, increment the count of readers. If there are writers, it backs off (perhaps spinning and yielding the CPU, perhaps sleeping on a condition variable). If there is a write in progress, it waits for that to complete. If it sees only other reads in progress, it goes ahead.
Writer threads would check if there are any reads or writes in progress. If so, they increment the count of waiting writers, and wait. If not, they set the write-in-progress bit and proceed.
Packing all these fields into the same atomic bitfield guarantees that no thread will think it’s safe to use the buffer while another thread thinks it’s safe to write: if two threads try to update the state at the same time, one will always fail.
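A minimal sketch of that packed-state scheme (my own illustration of the description above, not code from any library; it assumes a 32-bit unsigned int, picks arbitrary field widths, and spins with sched_yield() instead of sleeping on a condition variable):

#include <stdatomic.h>
#include <sched.h>

#define WRITE_BIT  0x80000000u
#define READERS(s) ((s) & 0xFFFFu)            /* bits 0-15: active readers   */
#define WAITERS(s) (((s) >> 16) & 0x7FFFu)    /* bits 16-30: waiting writers */
#define WRITING(s) ((s) & WRITE_BIT)          /* bit 31: write in progress   */

static atomic_uint state;   /* zero-initialized: no readers, no writers */

void read_lock(void)
{
    unsigned s = atomic_load(&state);
    for (;;) {
        if (WRITING(s) || WAITERS(s) > 0) {   /* writer priority: back off */
            sched_yield();
            s = atomic_load(&state);
            continue;
        }
        if (atomic_compare_exchange_weak(&state, &s, s + 1))
            return;                           /* registered one more reader */
        /* CAS failed: s was refreshed for us; loop and re-check */
    }
}

void read_unlock(void)
{
    atomic_fetch_sub(&state, 1);
}

void write_lock(void)
{
    /* Announce ourselves as a waiting writer so new readers back off. */
    unsigned s = atomic_fetch_add(&state, 1u << 16) + (1u << 16);
    for (;;) {
        if (READERS(s) == 0 && !WRITING(s) &&
            atomic_compare_exchange_weak(&state, &s,
                                         (s - (1u << 16)) | WRITE_BIT))
            return;   /* our waiting ticket became the write flag */
        sched_yield();
        s = atomic_load(&state);
    }
}

void write_unlock(void)
{
    atomic_fetch_and(&state, ~WRITE_BIT);
}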
If You Stick With Semaphores
You can still have reader threads check sem_getvalue() on the writer semaphore and back off if they see starved writers waiting. One method is to wait on a condition variable that threads signal when they are done with the buffer. A reader that finds writers waiting while it holds the mutex can wake one writer thread and go back to sleep, and a reader that sees only other readers waiting can wake up a reader, which will wake the next reader, and so on.
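A hedged sketch of that sem_getvalue() check (my illustration; POSIX allows the reported value to be either 0 or a negative count while threads are blocked on the semaphore, so anything <= 0 is treated as "writers may be waiting"):

#include <semaphore.h>

int writers_may_be_waiting(sem_t *wrt)
{
    int sval = 1;
    if (sem_getvalue(wrt, &sval) != 0)
        return 1;       /* on error, be conservative and back off */
    return sval <= 0;   /* 0 or negative: someone holds it or waits on it */
}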
Consider the following code:
#define _XOPEN_SOURCE 600
#define _DEFAULT_SOURCE
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#define ENTRY_NUM 100
struct value {
    pthread_mutex_t mutex;
    int i;
};

struct entry {
    atomic_uintptr_t val;
};
struct entry entries[ENTRY_NUM];
void* thread1(void *arg)
{
    for (int i = 0; i != ENTRY_NUM; ++i) {
        struct value *val = (struct value*) atomic_load(&entries[i].val);
        if (val == NULL)
            continue;
        pthread_mutex_lock(&val->mutex);
        printf("%d\n", val->i);
        pthread_mutex_unlock(&val->mutex);
    }
    return NULL;
}
void* thread2(void *arg)
{
    /*
     * Do some costly operations before continuing.
     */
    usleep(1);
    for (int i = 0; i != ENTRY_NUM; ++i) {
        struct value *val = (struct value*) atomic_load(&entries[i].val);
        pthread_mutex_lock(&val->mutex);
        atomic_store(&entries[i].val, (uintptr_t) NULL);
        pthread_mutex_unlock(&val->mutex);
        pthread_mutex_destroy(&val->mutex);
        free(val);
    }
    return NULL;
}
int main() {
    for (int i = 0; i != ENTRY_NUM; ++i) {
        struct value *val = malloc(sizeof(struct value));
        pthread_mutex_init(&val->mutex, NULL);
        val->i = i;
        atomic_store(&entries[i].val, (uintptr_t) val);
    }
    pthread_t ids[2];
    pthread_create(&ids[0], NULL, thread1, NULL);
    pthread_create(&ids[1], NULL, thread2, NULL);
    pthread_join(ids[0], NULL);
    pthread_join(ids[1], NULL);
    return 0;
}
Suppose that in function thread1, entries[i].val is loaded, and then the scheduler puts the thread to sleep.
Then thread2 wakes from usleep; since ((struct value*) entries[0].val)->mutex isn't locked, thread2 locks it, stores NULL to entries[0].val, and frees the object entries[0].val originally pointed to.
Now, is that a race condition? If so, how can I avoid it without locking entries or entries[0]?
You are correct, my friend: there is indeed a race condition in this code.
Let me open with a general remark: threaded code is prone to race conditions by definition, and this holds for any library-implemented threads that the compiler does not know about ahead of compilation time.
Regarding your specific example: yes, as you explained yourself, since we cannot assume when the scheduler kicks in, thread1 could atomically load an entry, get context-switched out in favour of thread2, which then frees that entry before thread1 gets processor time again. How do you prevent such race conditions? Do not access the entries without locking them. Even though atomic_load is an atomic read, it only guarantees that no other access happens at that single instant; in the window between the atomic_load and your call to pthread_mutex_lock, a context switch can absolutely occur, and you are logically allowing other threads to reach the same entry. As you noticed yourself, this is both bad practice and logically wrong, so the entire access, from the load through the last use of the object, should be protected by a single mutex.
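A minimal sketch of that fix (my illustration, not the asker's code): one mutex guards both the pointer in entries[i] and the object it points to, so the load, the use, and the free can never interleave.

pthread_mutex_t entries_mutex = PTHREAD_MUTEX_INITIALIZER;

/* reader side (thread1's loop body) */
void use_entry(int i)
{
    pthread_mutex_lock(&entries_mutex);
    struct value *val = (struct value*) atomic_load(&entries[i].val);
    if (val != NULL)
        printf("%d\n", val->i);
    pthread_mutex_unlock(&entries_mutex);
}

/* destroyer side (thread2's loop body) */
void destroy_entry(int i)
{
    pthread_mutex_lock(&entries_mutex);
    struct value *val = (struct value*) atomic_load(&entries[i].val);
    atomic_store(&entries[i].val, (uintptr_t) NULL);
    pthread_mutex_unlock(&entries_mutex);
    if (val != NULL) {
        /* no reader can reach val anymore, safe to tear down */
        pthread_mutex_destroy(&val->mutex);
        free(val);
    }
}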
More generally, as I stated at the beginning, because the compiler is unaware of the concept of threads at compilation time, such code is very sensitive to race conditions you may not even be aware of. For example, when accessing different areas of some shared memory, the compiler does not take into consideration that other threads may exist; it accesses memory as it sees fit and may touch neighbouring areas along the way, even though logically the code itself doesn't, and the "correctness" of the code looks valid.
There is a paper published on exactly this, Threads Cannot Be Implemented as a Library by Hans-J. Boehm. I highly suggest you read it; I promise it will deepen your understanding of race conditions and of threading with pthreads in general!
I want to print numbers using two threads T1 and T2. T1 should print 1,2,3,4,5, then T2 should print 6,7,8,9,10, then T1 should start again with T2 following, printing from 1 to 100. I have two questions.
How can I complete the task using threads and one global variable?
How can I schedule threads in desired order in linux?
How can I schedule threads in desired order in linux?
You need to use locking primitives, such as mutexes or condition variables, to influence the scheduling order of threads. Or you have to make your threads work independently of the order.
How can I complete the task using threads and one global variable?
If you are allowed to use only one variable, then you can't use a mutex (it would be a second variable). So the first thing you must do is declare your variable atomic. Otherwise the compiler may optimize your code in such a way that one thread does not see changes made by the other thread; for code as simple as this, it will do so by caching the variable in a register. Use std::atomic_int. You may find advice to use volatile int, but nowadays std::atomic_int is the more direct way to specify what you want.
You can't use mutexes, so you can't make your threads wait; they will be constantly running and wasting CPU. But that seems OK for this task. So you will need to write a spinlock: each thread waits in a loop, constantly checking the value. If value % 10 < 5, the first thread breaks out of the loop and does the incrementing; otherwise the second thread does the job.
Because the question looks like homework, I won't show any code samples here; you need to write them yourself.
1: The easiest way is to use mutexes.
This is a basic implementation with unfair/undefined scheduling:
int counter = 1;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void incrementGlobal() {
    for (int i = 0; i < 5; i++) {
        printf("%i\n", counter);  // print first, so the run starts at 1
        counter++;
    }
}
T1/T2:
pthread_mutex_lock(&mutex);
incrementGlobal();
pthread_mutex_unlock(&mutex);
2: The correct order can be achieved with condition variables
(but this needs more global variables):
Global:

int num_threads = 2;
int current_thread_id = 0;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

T1/T2:

int local_thread_id; // the ID of this thread (0 or 1)
while (true) {
    pthread_mutex_lock(&mutex);
    while (current_thread_id != local_thread_id) {
        pthread_cond_wait(&cond, &mutex);
    }
    incrementGlobal();
    current_thread_id = (current_thread_id + 1) % num_threads;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&mutex);
}
How can I complete the task using threads and one global variable?
If you are using Linux, you can use the POSIX threading library for the C programming language, <pthread.h>. As an alternative, OpenMP is a quite portable and relatively non-intrusive library, well supported on gcc and g++ and (in an older version) on MSVC.
It is not standard C or C++, but OpenMP itself is a standard.
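For a taste of it, a minimal hedged OpenMP-in-C sketch (compile with gcc -fopenmp; the thread ordering here is up to the runtime, not guaranteed):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel num_threads(2)
    {
        /* each thread prints its own id */
        printf("hello from thread %d\n", omp_get_thread_num());
    }
    return 0;
}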
How can I schedule threads in desired order in linux?
To achieve your desired printing behaviour, you need a global variable that both of your threads can access. The threads take turns accessing the global variable and performing their operation (increment and print). However, to achieve the desired order, you need a mutex. A mutex is a mutual-exclusion semaphore, a special variant of a semaphore that only allows one locker at a time. It can be used when you have a single instance of a resource (the global variable in your case) shared by two threads. After locking the mutex, a thread has exclusive access to the resource, and after completing its operation it should release the mutex for the other threads.
You can start with threads and mutex in <pthread.h> from here.
One possible solution to your problem is the program given below. However, I would suggest you try it yourself first and then look at my solution.
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
pthread_mutex_t lock;
int variable=0;
#define ONE_TIME_INC 5
#define MAX 100
void *thread1(void *arg)
{
    while (1) {
        pthread_mutex_lock(&lock);
        printf("Thread1: \n");
        int i;
        for (i = 0; i < ONE_TIME_INC; i++) {
            if (variable >= MAX)
                goto RETURN;
            printf("\t\t%d\n", ++variable);
        }
        printf("Thread1: Sleeping\n");
        pthread_mutex_unlock(&lock);
        usleep(1000);
    }
RETURN:
    pthread_mutex_unlock(&lock);
    return NULL;
}
void *thread2(void *arg)
{
    while (1) {
        pthread_mutex_lock(&lock);
        printf("Thread2: \n");
        int i;
        for (i = 0; i < ONE_TIME_INC; i++) {
            if (variable >= MAX)
                goto RETURN;
            printf("%d\n", ++variable);
        }
        printf("Thread2: Sleeping\n");
        pthread_mutex_unlock(&lock);
        usleep(1000);
    }
RETURN:
    pthread_mutex_unlock(&lock);
    return NULL;
}
int main()
{
    if (pthread_mutex_init(&lock, NULL) != 0) {
        printf("\n mutex init failed\n");
        return 1;
    }

    pthread_t pthread1, pthread2;
    if (pthread_create(&pthread1, NULL, thread1, NULL))
        return -1;
    if (pthread_create(&pthread2, NULL, thread2, NULL))
        return -1;

    pthread_join(pthread1, NULL);
    pthread_join(pthread2, NULL);
    pthread_mutex_destroy(&lock);
    return 0;
}
Writing my first basic programs on multi-threading, I'm running into several difficulties.
In the program below, if I put the sleep at position 1, the printed value of the shared data is always 10, while with the sleep at position 2 the value is always 0.
Why does this output occur?
How do we decide where the sleep should go?
Does this mean that if we sleep while holding the mutex, the other thread never gets to execute at all, leaving the shared data at 0?
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
pthread_mutex_t lock;
int shared_data = 0;
void *function(void *arg)
{
    int i;
    for (i = 0; i < 10; i++)
    {
        pthread_mutex_lock(&lock);
        shared_data++;
        pthread_mutex_unlock(&lock);
    }
    pthread_exit(NULL);
}
int main()
{
    pthread_t thread;
    void *exit_status;
    int i;

    pthread_mutex_init(&lock, NULL);
    i = pthread_create(&thread, NULL, function, NULL);

    for (i = 0; i < 10; i++)
    {
        sleep(1);                  //POSITION 1
        pthread_mutex_lock(&lock);
        //sleep(1);                //POSITION 2
        printf("Shared data value is %d\n", shared_data);
        pthread_mutex_unlock(&lock);
    }

    pthread_join(thread, &exit_status);
    pthread_mutex_destroy(&lock);
}
When you sleep before you lock the mutex, then you're giving the other thread plenty of time to change the value of the shared variable. That's why you're seeing a value of "10" with the 'sleep' in position #1.
When you grab the mutex first, you're able to lock it fast enough that you can print out the value before the other thread has a chance to modify it. The other thread sits and blocks on the pthread_mutex_lock() call until your main thread has finished sleeping and unlocked it. At that point, the second thread finally gets to run and alter the value. That's why you're seeing a value of "0" with the 'sleep' at position #2.
This is a classic case of a race condition. On a different machine, the same code might not display "0" with the sleep call at position #2. It's entirely possible that the second thread has the opportunity to alter the value of the variable once or twice before your main thread locks the mutex. A mutex can ensure that two threads don't access the same variable at the same time, but it doesn't have any control over the order in which the two threads access it.
I had a full explanation here but ended up deleting it. This is a basic synchronization problem and you should be able to trace and identify it before tackling anything more complicated.
But I'll give you a hint: It's only the sleep() in position 1 that matters; the other one inside the lock is irrelevant as long as it doesn't change the code outside the lock.
I'm writing a program that simply demonstrates writing to and reading from a bounded buffer, outputting what value was expected and what value was actually read. When I define N as a low value, the program executes as expected. However, when I increase the value, I start to see unexpected results. From what I understand, I am creating data races by using two threads.
By looking at the output, I think I've narrowed it down to about three examples of data races as follows:
One thread writes to the buffer while another thread reads.
Two threads simultaneously write to the buffer.
Two threads simultaneously read from the buffer.
Below is my code. The formatting is really strange for the #include statements, so I left them out.
#define BUFFSIZE 10000
#define N 10000
int Buffer[BUFFSIZE];
int numOccupied = 0; //# items currently in the buffer
int firstOccupied = 0; //where first/next value or item is to be found or placed
//Adds a given value to the next position in the buffer
void buffadd(int value)
{
    Buffer[firstOccupied + numOccupied++] = value;
}

int buffrem()
{
    numOccupied--;
    return Buffer[firstOccupied++];
}
void *tcode1(void *empty)
{
    int i;
    //write N values into the buffer
    for (i = 0; i < N; i++)
        buffadd(i);
    return NULL;
}

void *tcode2(void *empty)
{
    int i, val;
    //Read N values from the buffer, checking the value read against what is expected
    for (i = 0; i < N; i++)
    {
        val = buffrem();
        if (val != i)
            printf("tcode2: removed %d, expected %d\n", val, i);
    }
    return NULL;
}
int main()
{
    pthread_t tcb1, tcb2;
    pthread_create(&tcb1, NULL, tcode1, NULL);
    pthread_create(&tcb2, NULL, tcode2, NULL);
    pthread_join(tcb1, NULL);
    pthread_join(tcb2, NULL);
}
So here are my questions.
Where (in my code) do these data races occur?
How do I fix them?
Use a mutex to synchronize access to your shared data structure. You will need the following:
pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);
pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);
pthread_mutex_destroy(&mutex);
As a basic principle, you lock the mutex before reading/writing to the data structure shared between threads, and unlock it afterwards. In this case, the Buffer plus metadata numOccupied and firstOccupied are your shared data structure you need to protect. So in buffadd() and buffrem(), lock the mutex at the beginning, unlock it at the end. And in main(), initialize the mutex before starting the threads, and destroy it after joining.
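A minimal sketch of that locking (my adaptation of the asker's functions, not code from the original answer):

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void buffadd(int value)
{
    pthread_mutex_lock(&mutex);
    Buffer[firstOccupied + numOccupied++] = value;
    pthread_mutex_unlock(&mutex);
}

int buffrem(void)
{
    pthread_mutex_lock(&mutex);
    int value = Buffer[firstOccupied++];
    numOccupied--;
    pthread_mutex_unlock(&mutex);
    return value;
}

Note that this only makes each update atomic; it does not stop the consumer from removing from an empty buffer when it runs ahead of the producer. For that you would additionally need a condition variable or counting semaphores, as in the classic bounded-buffer solution.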
You have races because both threads access the same global vars numOccupied and firstOccupied. You need to synchronize access to the variables with some form of locking. For example, you could use a semaphore or mutex to lock access to the shared global state while doing add / remove operations.