pthread_mutex_lock works only with sleep - c

I pass a struct to pthread_create() which contains a char*, and I lock both the main thread and the worker with a mutex to protect this string: when the second thread is created the string changes, so the first thread would otherwise see the second string instead of the first. Here is the code:
main.c
while( th_num < th_size )
{
    pthread_mutex_lock(&lock);
    received = 0;
    /* Read the desired readable size */
    if( read(newsock, &size, sizeof(size)) < 0 )
        { perror("Read"); exit(1); }
    /* Read all data */
    while( received < size )
    {
        if( (nread = read(newsock, buffer + received, size - received)) < 0 )
            { perror("Read"); exit(1); }
        received += nread;
    }
    printf("Received string: %s\n", buffer);
    Q->receiver = (char*) malloc(strlen(buffer) + 1);
    strncpy(Q->receiver, buffer, strlen(buffer) + 1);
    if( (err = pthread_create(&thread_server[th_num], NULL, thread_start, (void*) Q)) != 0 )
        { show_error("pthread_create", err); }
    /* -------------------------------------------------- */
    th_num++;
    pthread_mutex_unlock(&lock);
    usleep(500);
}
pthread_server.c
pthread_mutex_lock(&lock);
/*
do some stuff here
*/
pthread_mutex_unlock(&lock);
The program works fine, but only if I keep the usleep(500). My guess is that the thread can't lock the mutex in time, so it needs the sleep to get scheduled. Is there a way to do it without usleep()?

Leaving aside that I don't understand why you need to call pthread_create() inside a mutually exclusive portion of code, your problem is this:
you use threads, but the flow of your program is nearly sequential because of the large mutually exclusive portion of code.
Let X be a generic thread in your program.
Without the usleep(500), when thread X finishes it releases the mutex with pthread_mutex_unlock(&lock), but immediately afterwards thread X reacquires the lock, so no one else can enter the mutually exclusive portion of code.
Now, I don't know what your shared data is, so I can only suggest that you:
1) Reduce the mutually exclusive portion of code, using it only where you actually access shared data (see the sketch after this list);
2) Rethink your program structure.
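For illustration, here is a minimal sketch of point 1 applied to the question's loop. It hands each thread a private heap copy of the argument, so the string cannot change under a running thread and no mutex is needed around pthread_create() at all. struct query is a hypothetical name for whatever Q points to:

/* Hypothetical sketch: give each thread its own copy of the argument,
 * so nothing is shared and no lock is required around pthread_create(). */
struct query *q_copy = malloc(sizeof *q_copy);
if (q_copy == NULL) { perror("malloc"); exit(1); }
*q_copy = *Q;                        /* shallow copy of the template */
q_copy->receiver = strdup(buffer);   /* private copy of the string */
if (q_copy->receiver == NULL) { perror("strdup"); exit(1); }

if ((err = pthread_create(&thread_server[th_num], NULL,
                          thread_start, q_copy)) != 0)
    show_error("pthread_create", err);
/* thread_start() is then responsible for freeing q_copy->receiver
 * and q_copy when it is done. */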

Related

producer / consumer task. Problem with correct writing to shared buffer

I'm working on a project that solves the classic producer/consumer scheduling problem.
Linux openSUSE Leap 42.3, System V API, C language.
The project consists of three programs: producer, consumer and scheduler.
The purpose of the scheduler is to create 3 semaphores and a shared memory segment containing a buffer (array) which the producers write to and the consumers read from, and to run n producer and m consumer processes.
Each producer must perform k write cycles to the buffer, and each consumer must perform k read cycles.
3 semaphores are used: mutex, empty and full. The value of the full semaphore is used in the program as an index into the array.
The problem is that, for example, when the buffer size is 3, the producers write 4 portions of data; when the buffer size is 4, they write 5 portions (although there should be 4)...
Consumers read normally.
In addition, the program does not behave predictably when calling the get_semVal function.
Please help, I will be very, very grateful for an answer.
producer
#define BUFFER_SIZE 3
#define MY_RAND_MAX 99     // Highest integer for random number generator
#define LOOP 3             // the number of write / read cycles for each process
#define DATA_DIMENSION 4   // size of portion of data for 1 iteration

struct Data {
    int buf[DATA_DIMENSION];
};
typedef struct Data buffer_item;
buffer_item buffer[BUFFER_SIZE];

void P(int semid)
{
    struct sembuf op;
    op.sem_num = 0;
    op.sem_op = -1;
    op.sem_flg = 0;
    semop(semid, &op, 1);
}

void V(int semid)
{
    struct sembuf op;
    op.sem_num = 0;
    op.sem_op = +1;
    op.sem_flg = 0;
    semop(semid, &op, 1);
}

void Init(int semid, int index, int value)
{
    semctl(semid, index, SETVAL, value);
}

int get_semVal(int sem_id)
{
    int value = semctl(sem_id, 0, GETVAL, 0);
    return value;
}

int main()
{
    sem_mutex = semget(KEY_MUTEX, 1, 0);
    sem_empty = semget(KEY_EMPTY, 1, 0);
    sem_full = semget(KEY_FULL, 1, 0);
    srand(time(NULL));
    const int SIZE = sizeof(buffer[BUFFER_SIZE]);
    shm_id = shmget(KEY_SHARED_MEMORY, SIZE, 0);
    int i = 0;
    buffer_item *adr;
    do {
        buffer_item nextProduced;
        P(sem_empty);
        P(sem_mutex);
        // prepare portion of data
        for(int j = 0; j < DATA_DIMENSION; j++)
        {
            nextProduced.buf[j] = rand() % 5;
        }
        adr = (buffer_item*)shmat(shm_id, NULL, 0);
        int full_value = get_semVal(sem_full);  // get index of array
        printf("-----%d------\n", full_value - 1);  // it's to test the index of array in buffer
        // write the generated portion of data by index full_value-1
        adr[full_value-1].buf[0] = nextProduced.buf[0];
        adr[full_value-1].buf[1] = nextProduced.buf[1];
        adr[full_value-1].buf[2] = nextProduced.buf[2];
        adr[full_value-1].buf[3] = nextProduced.buf[3];
        shmdt(adr);
        printf("producer %d produced %d %d %d %d\n", getpid(),
               nextProduced.buf[0], nextProduced.buf[1], nextProduced.buf[2], nextProduced.buf[3]);
        V(sem_mutex);
        V(sem_full);
        i++;
    } while (i < LOOP);
    V(sem_empty);
    sleep(1);
}
consumer
…
int main()
{
    sem_mutex = semget(KEY_MUTEX, 1, 0);
    sem_empty = semget(KEY_EMPTY, 1, 0);
    sem_full = semget(KEY_FULL, 1, 0);
    srand(time(NULL));
    const int SIZE = sizeof(buffer[BUFFER_SIZE]);
    shm_id = shmget(KEY_SHARED_MEMORY, SIZE, 0);
    int i = 0;
    buffer_item *adr;
    do
    {
        buffer_item nextConsumed;
        P(sem_full);
        P(sem_mutex);
        int full_value = get_semVal(sem_full);
        adr = (buffer_item*)shmat(shm_id, NULL, 0);
        for(int i = 0; i < BUFFER_SIZE; i++)
        {
            printf("--%d %d %d %d\n", adr[i].buf[0], adr[i].buf[1], adr[i].buf[2], adr[i].buf[3]);
        }
        for(int i = 0; i < BUFFER_SIZE; i++)
        {
            buffer[i].buf[0] = adr[i].buf[0];
            buffer[i].buf[1] = adr[i].buf[1];
            buffer[i].buf[2] = adr[i].buf[2];
            buffer[i].buf[3] = adr[i].buf[3];
        }
        tab(nextConsumed);
        nextConsumed.buf[0] = buffer[full_value-1].buf[0];
        nextConsumed.buf[1] = buffer[full_value-1].buf[1];
        nextConsumed.buf[2] = buffer[full_value-1].buf[2];
        nextConsumed.buf[3] = buffer[full_value-1].buf[3];
        // Set buffer to 0 since we consumed that item
        for(int j = 0; j < DATA_DIMENSION; j++)
        {
            buffer[full_value-1].buf[j] = 0;
        }
        for(int i = 0; i < BUFFER_SIZE; i++)
        {
            adr[i].buf[0] = buffer[i].buf[0];
            adr[i].buf[1] = buffer[i].buf[1];
            adr[i].buf[2] = buffer[i].buf[2];
            adr[i].buf[3] = buffer[i].buf[3];
        }
        shmdt(adr);
        printf("consumer %d consumed %d %d %d %d\n", getpid(),
               nextConsumed.buf[0], nextConsumed.buf[1], nextConsumed.buf[2], nextConsumed.buf[3]);
        V(sem_mutex);
        // increase empty
        V(sem_empty);
        i++;
    } while (i < LOOP);
    V(sem_full);
    sleep(1);
}
Scheduler
…
struct Data {
    int buf[DATA_DIMENSION];
};
typedef struct Data buffer_item;
buffer_item buffer[BUFFER_SIZE];

struct TProcList
{
    pid_t processPid;
};
typedef struct TProcList ProcList;
…
ProcList createProcess(char *name)
{
    pid_t pid;
    ProcList a;
    pid = fork();
    if (!pid){
        kill(getpid(), SIGSTOP);
        execl(name, name, NULL);
        exit(0);
    }
    else if(pid){
        a.processPid = pid;
    }
    else
        cout << "error forking" << endl;
    return a;
}

int main()
{
    sem_mutex = semget(KEY_MUTEX, 1, IPC_CREAT|0600);
    sem_empty = semget(KEY_EMPTY, 1, IPC_CREAT|0600);
    sem_full = semget(KEY_FULL, 1, IPC_CREAT|0600);
    Init(sem_mutex, 0, 1);            // unlock mutex
    Init(sem_empty, 0, BUFFER_SIZE);
    Init(sem_full, 0, 0);             // unlock empty
    const int SIZE = sizeof(buffer[BUFFER_SIZE]);
    shm_id = shmget(KEY_SHARED_MEMORY, SIZE, IPC_CREAT|0600);
    buffer_item *adr;
    adr = (buffer_item*)shmat(shm_id, NULL, 0);
    for(int i = 0; i < BUFFER_SIZE; i++)
    {
        buffer[i].buf[0] = 0;
        buffer[i].buf[1] = 0;
        buffer[i].buf[2] = 0;
        buffer[i].buf[3] = 0;
    }
    for(int i = 0; i < BUFFER_SIZE; i++)
    {
        adr[i].buf[0] = buffer[i].buf[0];
        adr[i].buf[1] = buffer[i].buf[1];
        adr[i].buf[2] = buffer[i].buf[2];
        adr[i].buf[3] = buffer[i].buf[3];
    }
    int consumerNumber = 2;
    int produserNumber = 2;
    ProcList producer_pids[produserNumber];
    ProcList consumer_pids[consumerNumber];
    for(int i = 0; i < produserNumber; i++)
    {
        producer_pids[i] = createProcess("/home/andrey/build-c-unknown-Debug/c"); // create sleeping processes
    }
    for(int i = 0; i < consumerNumber; i++)
    {
        consumer_pids[i] = createProcess("/home/andrey/build-p-unknown-Debug/p");
    }
    sleep(3);
    for(int i = 0; i < produserNumber; i++)
    {
        kill(producer_pids[i].processPid, SIGCONT); // continue processes
        sleep(1);
    }
    for(int i = 0; i < consumerNumber; i++)
    {
        kill(consumer_pids[i].processPid, SIGCONT);
        sleep(1);
    }
    for(int i = 0; i < produserNumber; i++)
    {
        waitpid(producer_pids[i].processPid, &stat, WNOHANG); // wait
    }
    for(int i = 0; i < consumerNumber; i++)
    {
        waitpid(consumer_pids[i].processPid, &stat, WNOHANG);
    }
    shmdt(adr);
    semctl(sem_mutex, 0, IPC_RMID);
    semctl(sem_full, 0, IPC_RMID);
    semctl(sem_empty, 0, IPC_RMID);
}
It is not fun to try and unravel uncommented code someone else has written, so instead, I'll explain a verified working scheme.
(Note that comments should always explain programmer intent or idea, never what the code does; we can read the code to see what it does. The problem is that we need to understand the programmer's idea/intent before we can compare it to the implementation. Without comments, I would need to read the code to guess at the intent, and then compare that guess to the code itself; it's double the work.)
(I suspect OP's underlying problem is trying to use semaphore values as buffer indexes, but didn't pore through all of the code to be 100% certain.)
Let's assume the shared memory structure is something like the following:
struct shared {
    sem_t  lock;       /* Initialized to value 1 */
    sem_t  more;       /* Initialized to 0 */
    sem_t  room;       /* Initialized to MAX_ITEMS */
    size_t num_items;  /* Initialized to 0 */
    size_t next_item;  /* Initialized to 0 */
    item_type item[MAX_ITEMS];
};
and we have struct shared *mem pointing to the shared memory area.
Note that you should include <limits.h> and verify, at runtime, that MAX_ITEMS <= SEM_VALUE_MAX. Otherwise MAX_ITEMS is too large, and this semaphore scheme may fail. (SEM_VALUE_MAX on Linux is usually INT_MAX, so big enough, but it may vary. And since both values are compile-time constants, if you compile with -O the check will be optimized completely away. So it is a very cheap and reasonable check to have.)
The mem->lock semaphore is used like a mutex. That is, to lock the structure for exclusive access, a process waits on it. When it is done, it posts on it.
Note that while sem_post(&(mem->lock)) will always succeed (ignoring bugs like mem being NULL, pointing to uninitialized memory, or having been overwritten with garbage), technically sem_wait() can be interrupted by delivery of a signal to a userspace handler installed without the SA_RESTART flag. This is why I recommend using a static inline helper function instead of calling sem_wait() directly:
static inline int semaphore_wait(sem_t *const s)
{
    int result;
    do {
        result = sem_wait(s);
    } while (result == -1 && errno == EINTR);
    return result;
}

static inline int semaphore_post(sem_t *const s)
{
    return sem_post(s);
}
In cases where signal delivery should not interrupt waiting on the semaphore, you use semaphore_wait(). If you do want a signal delivery to interrupt waiting on a semaphore, you use sem_wait(); if it returns -1 with errno == EINTR, the operation was interrupted due to signal delivery, and the semaphore wasn't actually decremented. (Many other low-level functions, like read(), write(), send(), recv(), can be interrupted in the exact same way; they can also just return a short count, in case the interruption occurred part way.)
The semaphore_post() is just a wrapper, so that you can use matching post and wait operations. Writing this sort of "useless" wrapper does help readers understand the code.
The item[] array is used as a circular queue. The num_items indicates the number of items in it. If num_items > 0, the next item to be consumed is item[next_item]. If num_items < MAX_ITEMS, the next item to be produced is item[(next_item + num_items) % MAX_ITEMS].
The % is the modulo operator. Here, because next_item and num_items are always positive, (next_item + num_items) % MAX_ITEMS is always between 0 and MAX_ITEMS - 1, inclusive. This is what makes the buffer circular.
When a producer has constructed a new item, say item_type newitem;, and wants to add it to the shared memory, it basically does the following:
/* Omitted: Initialize and fill in 'newitem' members */

/* Wait until there is room in the buffer */
semaphore_wait(&(mem->room));

/* Get exclusive access to the structure members */
semaphore_wait(&(mem->lock));

mem->item[(mem->next_item + mem->num_items) % MAX_ITEMS] = newitem;
mem->num_items++;

semaphore_post(&(mem->more));
semaphore_post(&(mem->lock));
The above is often called enqueue, because it appends an item to a queue (which happens to be implemented via a circular buffer).
When a consumer wants to consume an item (item_type nextitem;) from the shared buffer, it does the following:
/* Wait until there are items in the buffer */
semaphore_wait(&(mem->more));

/* Get exclusive access to the structure members */
semaphore_wait(&(mem->lock));

nextitem = mem->item[mem->next_item];
mem->next_item = (mem->next_item + 1) % MAX_ITEMS;
mem->num_items = mem->num_items - 1;

semaphore_post(&(mem->room));
semaphore_post(&(mem->lock));
/* Omitted: Do work on 'nextitem' here. */
This is often called dequeue, because it obtains the next item from the queue.
I would recommend you first write a single-process test case, which enqueues MAX_ITEMS, then dequeues them, and verifies the semaphore values are back to initial values. That is not a guarantee of correctness, but it takes care of the most typical bugs.
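A minimal sketch of such a test, using the shared_get()/shared_put() helpers shown just below; create_shared() is a hypothetical setup function (map the segment, initialize the semaphores), and item_type is assumed to have an int value member:

#include <assert.h>
#include <semaphore.h>

int main(void)
{
    struct shared *mem = create_shared();  /* hypothetical: map + init */
    item_type it;
    int v;

    /* Fill the queue completely, then drain it, checking FIFO order. */
    for (size_t i = 0; i < MAX_ITEMS; i++) {
        it.value = (int)i;
        assert(shared_put(mem, &it) == 0);
    }
    for (size_t i = 0; i < MAX_ITEMS; i++) {
        assert(shared_get(mem, &it) == 0);
        assert(it.value == (int)i);
    }

    /* All semaphores should be back at their initial values. */
    sem_getvalue(&(mem->room), &v); assert(v == MAX_ITEMS);
    sem_getvalue(&(mem->more), &v); assert(v == 0);
    sem_getvalue(&(mem->lock), &v); assert(v == 1);
    return 0;
}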
In practice, I would personally write the queueing functions as static inline helpers in the same header file that describes the shared memory structure. Pretty much
static inline int shared_get(struct shared *const mem, item_type *const into)
{
    int err;

    if (!mem || !into)
        return errno = EINVAL; /* Set errno = EINVAL, and return EINVAL. */

    /* Wait for the next item in the buffer. */
    do {
        err = sem_wait(&(mem->more));
    } while (err == -1 && errno == EINTR);
    if (err)
        return errno;

    /* Exclusive access to the structure. */
    do {
        err = sem_wait(&(mem->lock));
    } while (err == -1 && errno == EINTR);

    /* Copy item to caller storage. */
    *into = mem->item[mem->next_item];

    /* Update queue state. */
    mem->next_item = (mem->next_item + 1) % MAX_ITEMS;
    mem->num_items--;

    /* Account for the newly freed slot. */
    sem_post(&(mem->room));

    /* Done. */
    sem_post(&(mem->lock));
    return 0;
}
and
static inline int shared_put(struct shared *const mem, const item_type *const from)
{
    int err;

    if (!mem || !from)
        return errno = EINVAL; /* Set errno = EINVAL, and return EINVAL. */

    /* Wait for room in the buffer. */
    do {
        err = sem_wait(&(mem->room));
    } while (err == -1 && errno == EINTR);
    if (err)
        return errno;

    /* Exclusive access to the structure. */
    do {
        err = sem_wait(&(mem->lock));
    } while (err == -1 && errno == EINTR);

    /* Copy item to queue. */
    mem->item[(mem->next_item + mem->num_items) % MAX_ITEMS] = *from;

    /* Update queue state. */
    mem->num_items++;

    /* Account for the newly filled slot. */
    sem_post(&(mem->more));

    /* Done. */
    sem_post(&(mem->lock));
    return 0;
}
but note that I wrote these from memory, and not copy-pasted from my test program, because I want you to learn and not to just copy-paste code from others without understanding (and being suspicious of) it.
Why do we need separate counters (next_item, num_items) when we have the semaphores, with corresponding values?
Because we cannot capture the semaphore value at the point where sem_wait() succeeded/continued/stopped blocking.
For example, initially the room semaphore is initialized to MAX_ITEMS, so up to that many producers can run in parallel. Any one of them running sem_getvalue() immediately after sem_wait() will get some later value, not the value or transition that caused sem_wait() to return. (Even with SysV semaphores you cannot obtain the semaphore value that caused wait to return for this process.)
So, instead of indexes or counters to the buffer, we think of the more semaphore as having the value of how many times one can dequeue from the buffer without blocking, and room as having the value of how many times one can enqueue to the buffer without blocking. The lock semaphore grants exclusive access, so that we can modify the shared memory structures (well, next_item and num_items) atomically, without different processes trying to change the values at the same time.
I am not 100% certain that this is the best or optimal pattern, but it is one of the most commonly used ones. It is not as robust as I'd like: for each increment (of one) in num_items, one must post on more exactly once; and for each decrement (of one) in num_items, one must increment next_item by exactly one and post on room exactly once, or the scheme falls apart.
There is one final wrinkle, though:
How do producers indicate they are done?
How would the scheduler tell producers and/or consumers to stop?
My preferred solution is to add a flag to the shared memory structure, say unsigned int status;, with specific bit masks telling the producers and consumers what to do, which is examined immediately after waiting on the lock:
#define STOP_PRODUCING (1 << 0)
#define STOP_CONSUMING (1 << 1)

static inline int shared_get(struct shared *const mem, item_type *const into)
{
    int err;

    if (!mem || !into)
        return errno = EINVAL; /* Set errno = EINVAL, and return EINVAL. */

    /* Wait for the next item in the buffer. */
    do {
        err = sem_wait(&(mem->more));
    } while (err == -1 && errno == EINTR);
    if (err)
        return errno;

    /* Exclusive access to the structure. */
    do {
        err = sem_wait(&(mem->lock));
    } while (err == -1 && errno == EINTR);

    /* Need to stop consuming? */
    if (mem->status & STOP_CONSUMING) {
        /* Ensure all consumers see the status immediately */
        sem_post(&(mem->more));
        sem_post(&(mem->lock));
        /* ENOMSG == please stop. */
        return errno = ENOMSG;
    }

    /* Copy item to caller storage. */
    *into = mem->item[mem->next_item];

    /* Update queue state. */
    mem->next_item = (mem->next_item + 1) % MAX_ITEMS;
    mem->num_items--;

    /* Account for the newly freed slot. */
    sem_post(&(mem->room));

    /* Done. */
    sem_post(&(mem->lock));
    return 0;
}
static inline int shared_put(struct shared *const mem, const item_type *const from)
{
    int err;

    if (!mem || !from)
        return errno = EINVAL; /* Set errno = EINVAL, and return EINVAL. */

    /* Wait for room in the buffer. */
    do {
        err = sem_wait(&(mem->room));
    } while (err == -1 && errno == EINTR);
    if (err)
        return errno;

    /* Exclusive access to the structure. */
    do {
        err = sem_wait(&(mem->lock));
    } while (err == -1 && errno == EINTR);

    /* Time to stop? */
    if (mem->status & STOP_PRODUCING) {
        /* Ensure all producers see the status immediately */
        sem_post(&(mem->lock));
        sem_post(&(mem->room));
        /* ENOMSG == please stop. */
        return errno = ENOMSG;
    }

    /* Copy item to queue. */
    mem->item[(mem->next_item + mem->num_items) % MAX_ITEMS] = *from;

    /* Update queue state. */
    mem->num_items++;

    /* Account for the newly filled slot. */
    sem_post(&(mem->more));

    /* Done. */
    sem_post(&(mem->lock));
    return 0;
}
which return ENOMSG to the caller if the caller should stop. When the status is changed, one should of course be holding the lock. When adding STOP_PRODUCING, one should also post on the room semaphore (once) to start a "cascade" so all producers stop; and when adding STOP_CONSUMING, post on the more semaphore (once) to start the consumer stop cascade. (Each of them will post on it again, to ensure each producer/consumer sees the status as soon as possible.)
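For example, the controlling process might request a full stop like this (a sketch under the same assumptions, using the semaphore_wait()/semaphore_post() wrappers from earlier):

/* Ask all producers and consumers to stop.  Hold the lock while changing
 * the status; one extra post per semaphore starts each wake-up cascade. */
static void shared_stop(struct shared *const mem)
{
    semaphore_wait(&(mem->lock));
    mem->status |= STOP_PRODUCING | STOP_CONSUMING;
    semaphore_post(&(mem->lock));

    semaphore_post(&(mem->room));   /* wakes one producer; it re-posts */
    semaphore_post(&(mem->more));   /* wakes one consumer; it re-posts */
}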
There are other schemes, though; for example signals (setting a volatile sig_atomic_t flag), but it is generally hard to ensure there are no race windows: a process checking the flag just before it is changed, and then blocking on a semaphore.
In this scheme, it would be good to verify that both MAX_ITEMS + NUM_PRODUCERS <= SEM_VALUE_MAX and MAX_ITEMS + NUM_CONSUMERS <= SEM_VALUE_MAX, so that even during the stop cascades, the semaphore value will not overflow.

C Reader Writer Program, one reader isn't reading all the data

I am working on a reader/writer program where there is one writer to n readers. I am having an issue where, if multiple readers are in (like the output posted below), the entire message from shared memory isn't displayed.
Output:
Enter a Message: Test
Reader1: Test
Reader2: Test
Writer: test test
Reader1: test
Reader2: test test
Writer:
Readers:
I have tried to add a count variable because I assume the writer's turn is being flagged before all readers have had the chance to print; that makes the writer exit the nested while() and stops the readers from printing.
Any suggestions on how to make both readers print, whether with a flag or some sort of count? Below are the writer and reader loops.
Reader:
int main() {
    DataShared data;
    data.turn = 0;
    signal(SIGINT, sigHandler);

    // generates key
    key = ftok("mkey", 65);

    // returns an identifier in mId
    if ((mId = shmget(key, SIZE, IPC_CREAT|S_IRUSR|S_IWUSR)) < 0) {
        perror("shared memory error");
        exit(1);
    }

    // shmat to attach to shared memory
    if ((mPtr = shmat(mId, 0, 0)) == (void*) -1) {
        perror("Can't attach\n");
        exit(1);
    }

    while (1) {
        // request critical section
        while (!data.turn && data.count == 0) {
            // not time for the reader, check if token is changed.
            memcpy(&data, mPtr, sizeof(DataShared));
        }
        data.count++;
        // enter critical section
        usleep(1);
        fprintf(stderr, "Read from memory: %s\n", data.message);
        usleep(1);
        // leave critical section
        data.count--;
        while (data.count > 0) {
            ;
        }
        data.turn = 0;
        memcpy(mPtr, &data, sizeof(DataShared));
    };
    return 0;
}
Writer:
int main() {
    DataShared data;
    data.turn = 0;
    data.count = 0;
    signal(SIGINT, sigHandler);

    key = ftok("mkey", 65);
    if ((shmId = shmget(key, SIZE, IPC_CREAT|S_IRUSR|S_IWUSR)) < 0) {
        perror("Error creating shared memory\n");
        exit(1);
    }
    if ((shmPtr = shmat(shmId, 0, 0)) == (void*) -1) {
        perror("Can't attach\n");
        exit(1);
    }

    while (1) {
        while (data.turn) {
            memcpy(&data, shmPtr, sizeof(DataShared));
        }
        // enter critical section
        printf("Enter a message: \n");
        fgets(data.message, 1024, stdin);
        // leave critical section
        printf("Message written to memory: %s\n", data.message);
        data.turn = 1;
        memcpy(shmPtr, &data, sizeof(DataShared));
    };
    return 0;
}
This may not be the explanation of your observation, but what you are doing is fishy.
You have multiple processes and the OS schedules each process.
First, there is no guarantee that all readers will read the message. It is entirely possible that one reader finishes, sets the flag to 0, and copies the data back to shared memory before another reader has had a chance to read the data.
Then there is data.count. It starts with the writer's local variable data, where you do not initialize data.count, so it has an indeterminate value. In the readers you set it to 0, but it is immediately overwritten with the value from shared memory (the indeterminate value). You do a ++, later a --, and then wait for it to become 0. How would it ever become zero? That reader could wait forever.
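As a first step toward a fix, the creating process could initialize the shared structure in place, once, before anyone loops on it. A minimal sketch using the names from the question (note that the flags must live in shared memory itself; each process's private data copy is invisible to the others):

#include <string.h>   /* memset */

/* In the writer, right after shmat() succeeds: initialize the shared
 * copy once, instead of relying on an uninitialized local 'data'. */
DataShared *shared = (DataShared *)shmPtr;
memset(shared, 0, sizeof *shared);   /* turn = 0, count = 0, empty message */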

Thread doesn't recognize change in a flag

I work with a couple of threads, all running as long as an exit_flag is set to false.
I have one specific thread that doesn't recognize the change in the flag, and therefore never ends and frees up its resources, and I'm trying to understand why.
UPDATE: After debugging a bit with gdb, I can see that given 'enough time' the problematic thread does detect the flag change.
My conclusion from this is that not enough time passes for the thread to detect the change in a normal run.
How can I 'delay' my main thread long enough for all threads to detect the flag change, without having to JOIN them? (The intention behind exit_flag was NOT to join the threads, as I don't want to manage all the thread IDs for that; I just detach each one of them, except the thread that handles input.)
I've tried using sleep(5) in the close_server() method, after the flag change, with no luck.
Notes:
Other threads that loop on the same flag do terminate successfully.
The exit_flag declaration is: static volatile bool exit_flag
All threads only read the flag; its value is changed only in the close_server() method I have (which does only that).
A data race that may occur when a thread reads the flag just before it is changed doesn't matter to me, as long as it reads the correct value in the next iteration of the while loop.
No error occurs in the thread itself (according to stderr & stdout, which are 'clean' of error messages, for the errors I handle in the thread).
The situation also occurs even when commenting out the entire while((!exit_flag) && (remain_data > 0)) block, so this is not a sendfile hanging issue.
station_info_t struct:
typedef struct station_info {
    int socket_fd;
    int station_num;
} station_info_t;
Problematic thread code:
void * station_handler(void * arg_p)
{
    status_type_t rs = SUCCESS;
    station_info_t * info = (station_info_t *)arg_p;
    int remain_data = 0;
    int sent_bytes = 0;
    int song_fd = 0;
    off_t offset = 0;
    FILE * fp = NULL;
    struct stat file_stat;

    /* validate station number for this handler */
    if(info->station_num < 0) {
        fprintf(stderr, "station_handler() station_num = %d, something's very wrong! exiting\n", info->station_num);
        exit(EXIT_FAILURE);
    }

    /* Open the file to send, and get its stats */
    fp = fopen(srv_params.songs_names[info->station_num], "r");
    if(NULL == fp) {
        close(info->socket_fd);
        free(info);
        error_and_exit("fopen() failed! errno = ", errno);
    }
    song_fd = fileno(fp);
    if( fstat(song_fd, &file_stat) ) {
        close(info->socket_fd);
        fclose(fp);
        free(info);
        error_and_exit("fstat() failed! errno = ", errno);
    }

    /** Run as long as no exit procedure was initiated */
    while( !exit_flag ) {
        offset = 0;
        remain_data = file_stat.st_size;
        while( (!exit_flag) && (remain_data > 0) ) {
            sent_bytes = sendfile(info->socket_fd, song_fd, &offset, SEND_BUF);
            if(sent_bytes < 0) {
                error_and_exit("sendfile() failed! errno = ", errno);
            }
            remain_data = remain_data - sent_bytes;
            usleep(USLEEP_TIME);
        }
    }

    printf("Station %d handle exited\n", info->station_num);

    /* Free / close all resources */
    close(info->socket_fd);
    fclose(fp);
    free(info);
    return NULL;
}
I'll be glad to get some help.
Thanks guys
Well, as stated by user362924, the main issue is that I don't join the threads in my main thread, and therefore don't give them enough time to exit (the join-based fix is sketched below).
If for some reason one doesn't want to join all the threads and manage thread IDs dynamically, a workaround is to sleep for a couple of seconds at the end of the main thread.
Of course this workaround is not good practice and not recommended (to anyone who gets here via Google).
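For completeness, the join-based approach looks roughly like this; NUM_STATIONS and make_info() are hypothetical stand-ins for however the handlers are actually created:

/* Sketch: keep the handler thread IDs so main can reap them on shutdown. */
pthread_t tids[NUM_STATIONS];

for (int i = 0; i < NUM_STATIONS; i++)
    pthread_create(&tids[i], NULL, station_handler, make_info(i));

/* ... later, in close_server(): */
exit_flag = true;                  /* every handler polls this flag */
for (int i = 0; i < NUM_STATIONS; i++)
    pthread_join(tids[i], NULL);   /* blocks until each handler returns */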

Multi-threaded programming: under what situation does the variable 'iget' become equal to the variable 'iput'?

The Situation
After reading Unix Network Programming by W. Richard Stevens, I'm writing a P2P program in which the main thread creates a thread pool with five sub-threads. It then monitors 50 sockets with kqueue(). When an event occurs on a specified socket (e.g., data is received on the socket), the main thread copies the socket descriptor into a shared array and wakes one thread in the thread pool. That sub-thread then processes the request from the socket. I have protected the shared array with both a mutex variable and a condition variable.
Question
The author presents the source code of "server/serv08.c" and "server/pthread08.c" in Sections 30.12 and 30.13 of the book, respectively, as if there were nothing wrong with it. But when I wrote a code snippet similar to the one the author presents, thread synchronization didn't work well. Why does iput become equal to iget in the main thread?
Code
--Global variable--
typedef struct tagThread_information
{
    int sockfd;
} Thread_information;

Thread_information peer_fds[MAX_THREAD];
pthread_mutex_t peerfd_mutex;
pthread_cond_t peerfd_cond;
pthread_mutex_t STDOUT_mutex;
int iput;
int iget;
--Main thread--
void Wait_for_Handshake(download_session *pSession, int nMaxPeers)
{
    struct kevent ev[50], result[50];
    int kq, i, nfd;
    int c = 1;

    if( (kq = kqueue()) == -1)
    {
        fprintf(stderr, "fail to initialize kqueue.\n");
        exit(0);
    }

    for(i = 0 ; i < nMaxPeers; i++)
    {
        EV_SET(&ev[i], pSession->Peers[i].sockfd, EVFILT_READ, EV_ADD, 0, 0, 0);
        printf("socket : %d\n", (int)ev[i].ident);
    }

    // create thread pool. initialize mutex and conditional variable.
    iput = 0;
    iget = 0;
    pthread_mutex_init(&STDOUT_mutex, NULL);
    pthread_mutex_init(&peerfd_mutex, NULL);
    pthread_cond_init(&peerfd_cond, NULL);

    // Assume that MAX_THREAD is set to 5.
    for(i = 0 ; i < MAX_THREAD; i++)
        thread_make(i);

    while(1)
    {
        nfd = kevent(kq, ev, nMaxPeers, result, nMaxPeers, NULL);
        if(nfd == -1)
        {
            fprintf(stderr, "fail to monitor kqueue. error : %d\n", errno);
            nMaxPeers = Update_peer(ev, pSession->nPeers);
            pSession->nPeers = nMaxPeers;
            continue;
        }
        for(i = 0 ; i < nfd; i++)
        {
            pthread_mutex_lock(&peerfd_mutex);
            peer_fds[iput].sockfd = (int)result[i].ident;
            if( ++iput == MAX_THREAD)
                iput = 0;
            if(iput == iget) // Here is my question.
            {
                exit(0);
            }
            pthread_cond_signal(&peerfd_cond);
            pthread_mutex_unlock(&peerfd_mutex);
        }
    }
}
--sub thread--
void * thread_main(void *arg)
{
    int connfd, nbytes;
    char buf[2048];

    for( ; ; )
    {
        /* get socket descriptor */
        pthread_mutex_lock(&peerfd_mutex);
        while( iget == iput)
            pthread_cond_wait(&peerfd_cond, &peerfd_mutex);
        connfd = peer_fds[iget].sockfd;
        if ( ++iget == MAX_THREAD )
            iget = 0;
        pthread_mutex_unlock(&peerfd_mutex);

        /* process a request on socket descriptor. */
        nbytes = (int)read(connfd, buf, 2048);
        if(nbytes == 0)
        {
            pthread_mutex_lock(&STDOUT_mutex);
            printf("\n\nthread %ld, socket : %d, nbytes : %d\n\n\n", (long int)pthread_self(), connfd, nbytes);
            printf("socket closed\n\n");
            pthread_mutex_unlock(&STDOUT_mutex);
            close(connfd);
            continue;
        }
        else if(nbytes == -1)
        {
            close(connfd);
            pthread_mutex_lock(&STDOUT_mutex);
            printf("\n\nthread %ld, socket : %d, nbytes : %d\n\n\n", (long int)pthread_self(), connfd, nbytes);
            perror("socket error : ");
            write(STDOUT_FILENO, buf, nbytes);
            printf("\n\n\n\n");
            pthread_mutex_unlock(&STDOUT_mutex);
            continue;
        }

        pthread_mutex_lock(&STDOUT_mutex);
        printf("\n\nthread %ld, socket : %d, nbytes : %d\n\n\n", (long int)pthread_self(), connfd, nbytes);
        write(STDOUT_FILENO, buf, nbytes);
        printf("\n\n\n\n");
        pthread_mutex_unlock(&STDOUT_mutex);
    }
}
In your main thread:
if( ++iput == MAX_THREAD)
    iput = 0; // so iput wraps around: 0 .. MAX_THREAD-1
And in your sub thread:
if ( ++iget == MAX_THREAD )
    iget = 0; // so iget wraps around: 0 .. MAX_THREAD-1
Since the sub threads and the main thread run at the "same time" and these are global variables, iput may become equal to iget at some point.
From "UNIX Network Programming, Volume 1, 2nd Edition", chapter 27.12, page 757, from the annotations to lines 27-38 of server/serv08.c:
We also check that the iput index has not caught up with the iget index, which indicates that our array is not big enough.
For reference, the lines mentioned above (taken from here):
27     for ( ; ; ) {
28         clilen = addrlen;
29         connfd = Accept(listenfd, cliaddr, &clilen);
30         Pthread_mutex_lock(&clifd_mutex);
31         clifd[iput] = connfd;
32         if (++iput == MAXNCLI)
33             iput = 0;
34         if (iput == iget)
35             err_quit("iput = iget = %d", iput);
36         Pthread_cond_signal(&clifd_cond);
37         Pthread_mutex_unlock(&clifd_mutex);
38     }
What you have there is a typical circular buffer implementation.
The head and tail pointers/indices point to the same location when the circular buffer is empty. You can see this being tested in the code while (iget == iput) ... which means "while the queue is empty ...".
If, after an insertion at the head of a circular buffer, head points to tail, that is a problem: the buffer has overflowed, and it now looks empty even though it is full.
That is to say, one unused location is reserved in the buffer; if the buffer has 4096 entries, we can only fill 4095. If we fill all 4096, we have an overflow: the buffer is indistinguishable from an empty one.
(We could use all 4096 locations if we allowed the index to go from 0 to 8192, using an extra bit to resolve the ambiguity, so that instead of wrapping to zero past 4095, the pointers would keep going to 4096 ... 8191. We would have to remember to access the array modulo 4096, of course. It's a big cost in complexity for the sake of recovering one wasted element.)
It looks like the code bails on circular buffer overflow because the program is structured such that this condition should never happen, so hitting it constitutes an internal error. The circular buffer overflows when too many descriptors are passed from the producer to the consumer in a single bout.
In general, circular buffer code cannot just bail when the buffer is full. Either the insertion operation has to balk and return an error, or it has to block for more space. So this is a special case based on assumptions particular to the example program.
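As an aside, the extra-bit trick from the parenthetical above can be sketched generically like this (not code from the book; synchronization omitted). The indices run freely modulo 2 * MAXNCLI and are reduced modulo MAXNCLI only when indexing, so "empty" and "full" become distinguishable and all MAXNCLI slots are usable:

/* Circular buffer using all MAXNCLI slots: indices range over twice the
 * capacity, so empty (iput == iget) and full (iput exactly MAXNCLI ahead
 * of iget, modulo 2*MAXNCLI) are different states. */
#define MAXNCLI 4096

int clifd[MAXNCLI];
unsigned iput = 0, iget = 0;   /* free-running, wrapped at 2*MAXNCLI */

int is_empty(void) { return iput == iget; }
int is_full(void)  { return iput == (iget + MAXNCLI) % (2 * MAXNCLI); }

void put(int fd)
{
    clifd[iput % MAXNCLI] = fd;        /* reduce only when indexing */
    iput = (iput + 1) % (2 * MAXNCLI);
}

int get(void)
{
    int fd = clifd[iget % MAXNCLI];
    iget = (iget + 1) % (2 * MAXNCLI);
    return fd;
}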

C pthread: How to wake it up after some time?

I would like to wake up a pthread from another pthread, but after some time. I know a signal or pthread_cond_signal with pthread_cond_wait can be used to wake another thread, but I can't see a way to schedule this. The situation would be something like:
THREAD 1:
=========
while(1)
    recv(low priority msg);
    dump msg to buffer

THREAD 2:
=========
while(1)
    recv(high priority msg);
    .. do a little bit of processing with msg ..
    dump msg to buffer
    wake(THREAD3, 5-seconds-later);  <-- ** HOW TO DO THIS? **
    // let some msgs collect for at least a 5 sec window.
    // i.e., don't wake thread3 immediately for every msg rcvd.

THREAD 3:
=========
while(1)
    do some stuff ..
    process all msgs in buffer
    sleep(60 seconds)
Any simple way to schedule a wakeup (short of creating a 4th thread that wakes up every second and decides if there is a scheduled entry for thread-3 to wake up)? I really don't want to wake up thread-3 frequently if there are only low-priority msgs in the queue. Also, since the messages come in bursts (say 1000 high-priority messages in a single burst), I don't want to wake up thread-3 for every single message; it really slows things down (as there is a bunch of other processing it does every time it wakes up).
I am using an Ubuntu PC.
How about using the pthread_cond_t object available through the pthread API? You could share such an object between your threads and let them act on it appropriately.
The resulting code could look like this:
/*
 * I lazily chose to make it global.
 * You could dynamically allocate the memory for it
 * and share the pointer between your threads in
 * a data structure through the argument pointer.
 */
pthread_cond_t cond_var;
pthread_mutex_t cond_mutex;
int wake_up = 0;

/* To call before creating your threads: */
int err;
if (0 != (err = pthread_cond_init(&cond_var, NULL))) {
    /* An error occurred, handle it nicely */
}
if (0 != (err = pthread_mutex_init(&cond_mutex, NULL))) {
    /* Error ! */
}
/*****************************************/

/* Within your threads */
void *thread_one(void *arg)
{
    int err = 0;

    /* Remember you can embed the cond_var
     * and the cond_mutex in
     * whatever you get from the arg pointer */

    /* Some work */

    /* Argh ! I want to wake up thread 3 */
    pthread_mutex_lock(&cond_mutex);
    wake_up = 1; /* Tell thread 3 a wake_up request has been made */
    pthread_mutex_unlock(&cond_mutex);
    if (0 != (err = pthread_cond_broadcast(&cond_var))) {
        /* Oops ... Error :S */
    } else {
        /* Thread 3 should be alright now ! */
    }

    /* Some work */
    pthread_exit(NULL);
    return NULL;
}

void *thread_three(void *arg)
{
    int err;

    /* Some work */

    /* Oh, I need to sleep for a while ...
     * I'll wait for thread_one to wake me up. */
    pthread_mutex_lock(&cond_mutex);
    while (!wake_up) {
        err = pthread_cond_wait(&cond_var, &cond_mutex);
        pthread_mutex_unlock(&cond_mutex);
        if (!err || ETIMEDOUT == err) {
            /* Woken up or timed out */
        } else {
            /* Oops : error */
            /* We might have to break the loop */
        }
        /* We lock the mutex again before the test */
        pthread_mutex_lock(&cond_mutex);
    }

    /* Since we have acknowledged the wake_up request
     * we set "wake_up" back to 0. */
    wake_up = 0;
    pthread_mutex_unlock(&cond_mutex);

    /* Some work */
    pthread_exit(NULL);
    return NULL;
}
If you want your thread 3 to exit the blocking call to pthread_cond_wait() after a timeout, consider using pthread_cond_timedwait() instead (read the man page carefully: the timeout value you supply is an ABSOLUTE time, not the amount of time you don't want to exceed).
If the timeout expires, pthread_cond_timedwait() will return an ETIMEDOUT error.
EDIT : I skipped error checking in the lock / unlock calls, don't forget to handle this potential issue !
EDIT² : I reviewed the code a little bit
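Since the absolute-time requirement trips people up, here is a sketch of a five-second timed wait, reusing cond_var, cond_mutex and wake_up from the code above; it needs <time.h> for clock_gettime(), and a condition variable initialized with a NULL attribute measures against CLOCK_REALTIME:

/* Sketch: wait on cond_var for at most five seconds from now. */
struct timespec deadline;
clock_gettime(CLOCK_REALTIME, &deadline);  /* default condvar clock */
deadline.tv_sec += 5;

pthread_mutex_lock(&cond_mutex);
while (!wake_up) {
    int err = pthread_cond_timedwait(&cond_var, &cond_mutex, &deadline);
    if (err == ETIMEDOUT)
        break;                             /* deadline passed: stop waiting */
}
wake_up = 0;                               /* consume the request, if any */
pthread_mutex_unlock(&cond_mutex);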
You can have the woken thread do the wait itself. In the waking thread:
pthread_mutex_lock(&lock);
if (!wakeup_scheduled) {
    wakeup_scheduled = 1;
    wakeup_time = time(NULL) + 5;
    pthread_cond_signal(&cond);
}
pthread_mutex_unlock(&lock);
In the waiting thread:
pthread_mutex_lock(&lock);
while (!wakeup_scheduled)
    pthread_cond_wait(&cond, &lock);
pthread_mutex_unlock(&lock);

sleep_until(wakeup_time);

pthread_mutex_lock(&lock);
wakeup_scheduled = 0;
pthread_mutex_unlock(&lock);
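Note that sleep_until() is not a standard function; under the assumption that wakeup_time is a CLOCK_REALTIME time_t, one possible implementation is clock_nanosleep() with an absolute deadline:

#include <time.h>

/* Sleep until an absolute CLOCK_REALTIME deadline, resuming after signals. */
static void sleep_until(time_t when)
{
    struct timespec ts = { .tv_sec = when, .tv_nsec = 0 };
    while (clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &ts, NULL) == EINTR)
        ;   /* interrupted by a signal: retry with the same absolute time */
}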
Why not just compare the current time to one saved earlier?
time_t last_uncond_wakeup = time(NULL);
time_t last_recv = 0;

while (1)
{
    if (recv())
    {
        // Do things
        last_recv = time(NULL);
    }

    // Possible other things

    time_t now = time(NULL);
    if ((last_recv != 0 && now - last_recv > 5) ||
        (now - last_uncond_wakeup > 60))
    {
        wake(thread3);
        last_uncond_wakeup = now;
        last_recv = 0;
    }
}
