What is the behavioral difference between vTaskDelay and _delay_ms? - c

1. Introduction
I cannot seem to find information or a detailed explanation about the behavioral differences between the following functions in a FreeRTOS task:
vTaskDelay
_delay_ms
2. Code
Suppose you have the following code:
IdleHook + task creation
long value = 0;

void vApplicationIdleHook( void ) {
    while(1)
    {
        // empty
    }
}

int main(void)
{
    xTaskCreate(TaskIncrement, (const portCHAR *)"up", 256, NULL, 2, NULL);
    xTaskCreate(TaskDecrement, (const portCHAR *)"down", 256, NULL, 1, NULL);
    vTaskStartScheduler();
}
Tasks with vTaskDelay
static void TaskDecrement(void *param)
{
    while(1)
    {
        for(unsigned long i=0; i < 123; i++) {
            //semaphore take
            value--;
            //semaphore give
        }
        vTaskDelay(100);
    }
}

static void TaskIncrement(void *param)
{
    while(1)
    {
        for(unsigned long i=0; i < 123; i++) {
            //semaphore take
            value++;
            //semaphore give
        }
        vTaskDelay(100);
    }
}
Tasks with _delay_ms
static void TaskDecrement(void *param)
{
    while(1)
    {
        for(unsigned long i=0; i < 123; i++) {
            //semaphore take
            value--;
            //semaphore give
        }
        _delay_ms(100);
    }
}

static void TaskIncrement(void *param)
{
    while(1)
    {
        for(unsigned long i=0; i < 123; i++) {
            //semaphore take
            value++;
            //semaphore give
        }
        _delay_ms(100);
    }
}
3. Question
What happens with the flow of the program when the tasks are provided with a vTaskDelay versus _delay_ms?
Note: the two given example tasks have different priorities.

From https://www.freertos.org/Documentation/FreeRTOS_Reference_Manual_V10.0.0.pdf:
Places the task that calls vTaskDelay() into the Blocked state for a fixed number of tick interrupts. Specifying a delay period of zero ticks will not result in the calling task being placed into the Blocked state, but will result in the calling task yielding to any Ready state tasks that share its priority. Calling vTaskDelay(0) is equivalent to calling taskYIELD().
I think you get the idea already, but if you have multiple tasks created, then vTaskDelay() will put the running task into the Blocked state for the specified number of tick interrupts (not milliseconds!) and allow the next-highest-priority Ready task to run until it yields control (or gets preempted, depending on your FreeRTOS configuration).
_delay_ms() is not part of the FreeRTOS library; it is a platform-specific function (on AVR it comes from avr-libc's <util/delay.h> and is a pure busy-wait). If the highest-priority task calls _delay_ms(), the result is a busy wait during which no lower-priority task can run. Otherwise, a task with a higher priority may preempt the task that calls _delay_ms() while it is delaying (that is, _delay_ms() will not yield control).
Perhaps a better summary of the above: in a multitasking application, _delay_ms() is not deterministic.
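As a side note, because the vTaskDelay() parameter is in ticks, the portable way to express a 100 ms delay is the stock pdMS_TO_TICKS() conversion macro. A minimal sketch, assuming a standard FreeRTOS setup:
#include "FreeRTOS.h"
#include "task.h"

static void TaskIncrement(void *param)
{
    for (;;)
    {
        /* ... increment work ... */

        /* Block for 100 ms independent of configTICK_RATE_HZ.
           vTaskDelay(100) means 100 ticks, which is 100 ms only
           when the tick rate happens to be 1000 Hz. */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}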

Related

Is there a way to create a thread that will check another thread in C, dining philosophers implementation

I have a question regarding threads in C. I know that to create a thread the function pthread_create is needed, and I'm currently working on the dining philosophers problem. In this implementation I have to check whether a philosopher has died of starvation.
I tested my program and it works well, but to check whether a philosopher has died I create another thread that always runs and checks whether a philosopher has died.
A philosopher dies of starvation if he has not eaten for a certain amount of time since his last meal.
Defining the general structure of the program and headers.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <string.h>
#include <sys/time.h>

struct t_phil;
typedef struct t_info t_info;
typedef struct t_phil t_phil;

typedef struct t_info
{
    int num_of_phil;
    t_phil *philo;
    int min_dinner;
    pthread_mutex_t *mutex;
    int plate_eaten;
    int num_of_dead_phil;
    int time_to_die;
    int time_to_sleep;
    int time_to_eat;
    pthread_mutex_t t_mut;
} t_info;

typedef struct t_phil
{
    int number;
    int has_eaten_all;
    int finished_meal;
    int is_dead;
    t_info *data;
    pthread_mutex_t *right;
    pthread_mutex_t *left;
    struct timeval last_dinner;
    pthread_t thread;
} t_phil;

int count = 0;
Here is the function that simulates the dinner. This one works as intended, but I am open to possible errors or improvements.
void *routine(void *args)
{
    t_phil *philo = (t_phil *)(args);
    int i = philo->number;

    if ((philo->number % 2))
        sleep(1);
    gettimeofday(&philo->last_dinner, 0);
    while (!philo->is_dead)
    {
        pthread_mutex_lock(philo->left);
        printf("Philosopher : %i has take left fork\n", philo->number + 1);
        pthread_mutex_lock(philo->right);
        printf("Philosopher : %i has take right fork\n", philo->number + 1);
        gettimeofday(&philo->last_dinner, 0);
        printf("Philosopher :%i is eating in at %li\n", philo->number + 1, philo->last_dinner.tv_sec * 1000);
        pthread_mutex_lock(&philo->data->t_mut);
        // if (philo->data->num_of_dead_phil && !philo->data->min_dinner)
        //     break;
        if (philo->is_dead)
            break;
        gettimeofday(&philo->last_dinner, NULL);
        philo->finished_meal++;
        if (philo->finished_meal == philo->data->min_dinner)
        {
            philo->data->plate_eaten++;
            philo->has_eaten_all = 1;
        }
        sleep(philo->data->time_to_eat);
        pthread_mutex_unlock(&philo->data->t_mut);
        pthread_mutex_unlock(philo->left);
        pthread_mutex_unlock(philo->right);
        if (philo->has_eaten_all)
            break;
        printf("Philosopher : %i is now sleeping at %li\n", philo->number + 1, philo->last_dinner.tv_sec * 1000);
        sleep(philo->data->time_to_sleep);
        printf("Philosopher : %i is now thinking at %li\n", philo->number + 1, philo->last_dinner.tv_sec * 1000);
    }
    return (NULL);
}
This function is the one not working as intended, and I don't know why. My if statement seems to have the right condition, but I never enter the if, meaning the condition is never met while it should be.
I tested a lot of values and the same result happens each time.
void *watchers_phil(void *args)
{
    t_info *data = (t_info *)args;
    t_phil *phil = data->philo;
    int i = 0;
    struct timeval now;

    while (1)
    {
        if (data->plate_eaten == data->num_of_phil)
            break;
        while (i < data->num_of_phil)
        {
            if ((phil[i].last_dinner.tv_sec) >= ((phil[i].last_dinner.tv_sec) + (long int)data->time_to_die))
            {
                gettimeofday(&now, NULL);
                printf("Unfortunately Philosopher : %i, is dead because of starvation at %li....", phil[i].number, (now.tv_sec * 1000));
                phil[i].is_dead = 1;
            }
            i++;
        }
        i = 0;
    }
    return (NULL);
}
int main(int argc, char *argv[])
{
    t_info data;
    pthread_t watchers;

    memset(&data, 0, sizeof(t_info));
    data.num_of_phil = atoi(argv[1]);
    data.min_dinner = atoi(argv[2]);
    data.time_to_eat = atoi(argv[3]);
    data.time_to_die = atoi(argv[4]);
    data.time_to_sleep = atoi(argv[5]);
    t_phil *philo = malloc(sizeof(t_phil) * data.num_of_phil);
    if (!philo)
        return (1);
    pthread_mutex_t *mutex = malloc(sizeof(pthread_mutex_t) * data.num_of_phil);
    data.mutex = mutex;
    if (!mutex)
    {
        free(philo);
        return (1);
    }
    int i = 0;
    while (i < data.num_of_phil)
    {
        pthread_mutex_init(&data.mutex[i], NULL);
        i++;
    }
    printf("Number : %i\n", data.num_of_phil);
    pthread_mutex_init(&data.t_mut, NULL);
    i = 0;
    while (i < data.num_of_phil)
    {
        philo[i].number = i;
        philo[i].has_eaten_all = 0;
        philo[i].data = &data;
        philo[i].is_dead = 0;
        philo[i].right = &data.mutex[i];
        if (i == (data.num_of_phil - 1))
            philo[i].left = &data.mutex[0];
        else
            philo[i].left = &data.mutex[i + 1];
        i++;
    }
    data.philo = philo;
    i = 0;
    while (i < data.num_of_phil)
    {
        pthread_create(&data.philo[i].thread, NULL, routine, &data.philo[i]);
        i++;
    }
    pthread_create(&watchers, NULL, watchers_phil, &data);
    i = 0;
    while (i < data.num_of_phil)
    {
        pthread_join(data.philo[i].thread, NULL);
        i++;
    }
    pthread_join(watchers, NULL);
    printf("Dinner eaten : %i\n", data.plate_eaten);
    i = 0;
    while (i < data.num_of_phil)
    {
        pthread_mutex_destroy(&data.mutex[i]);
        i++;
    }
    pthread_mutex_destroy(&data.t_mut);
}
I recommend the following approach:
Have an array containing a timestamp for each philosopher thread.
Each philosopher thread updates its respective timestamp either on beginning or on ending eating. To determine its own death it can use this timestamp as well. You could store the eating time or the time when starvation occurs; I personally would rather do the latter, as it requires calculating the starvation time just once (and you don't need the eating time anywhere else), while the former would need to do it on every test for starvation.
The monitoring thread iterates over all these timestamps, doing the following (see the sketch after this list):
Determine if a thread has starved (current time – just get it once in front of the loop! – being after the timestamp stored, if you follow my recommendation above). Flag the thread as starved in some suitable way so that you can ignore it next time.
From the non-starved threads, remember the minimum value.
After the iteration, sleep until this minimum time is reached (maybe waking up minimally earlier for better precision). If a philosopher updates the timestamp corresponding to this minimum – never mind, the monitor will wake up in vain, just doing nothing; that won't hurt apart from consuming a bit of CPU time.
Updating the timestamps: You need to be aware that – unless you can guarantee atomic timestamp update – you need to protect them against race conditions between philosopher threads and monitoring thread.
I see two options:
One single mutex for the entire array. Easy to implement, but any of the involved threads might block another (two philosophers if both try to update at the same time, a philosopher might block the monitor, or the monitor might block one or more philosophers). If one thread is preempted right while holding this mutex, the blocked one(s) need(s) to wait until this thread is scheduled again.
One mutex for each timestamp. The advantage is that blocking can only occur between the monitor and one single philosopher while all other philosophers can go on as usual. The implementation is more complex, though, and the monitor thread will run its tests a bit slower (you won't notice...) due to having to lock and unlock mutexes again and again (system calls involved, these are rather expensive).
Philosophers should hold the mutex over the entire time from reading the timestamp (to check their own starvation) until having written the updated starvation time. This ensures a philosopher cannot starve right while eating, though in between you should do as little work as possible, especially if you opt for one single mutex (if you cannot avoid that, then rather opt for multiple ones).
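A minimal sketch of that monitoring scheme, assuming the single-mutex option; NUM_PHIL, deadline[], starved[], refresh_deadline(), and monitor() are illustrative names, not part of the question's code:
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NUM_PHIL 5 /* illustrative; the question reads it from argv */

pthread_mutex_t deadline_mutex = PTHREAD_MUTEX_INITIALIZER;
struct timespec deadline[NUM_PHIL]; /* when each philosopher starves */
int starved[NUM_PHIL];              /* flagged once, then ignored */

/* Philosopher side: push its own starvation deadline into the future
   (call when starting to eat). */
void refresh_deadline(int i, int time_to_die_sec)
{
    pthread_mutex_lock(&deadline_mutex);
    clock_gettime(CLOCK_MONOTONIC, &deadline[i]);
    deadline[i].tv_sec += time_to_die_sec;
    pthread_mutex_unlock(&deadline_mutex);
}

/* Monitor side: flag newly starved philosophers, then sleep until the
   earliest remaining deadline instead of spinning. */
void *monitor(void *arg)
{
    for (;;)
    {
        struct timespec now, next;
        int have_next = 0;

        clock_gettime(CLOCK_MONOTONIC, &now); /* once, before the loop */
        pthread_mutex_lock(&deadline_mutex);
        for (int i = 0; i < NUM_PHIL; i++)
        {
            if (starved[i])
                continue; /* already flagged, ignore from now on */
            if (now.tv_sec > deadline[i].tv_sec ||
                (now.tv_sec == deadline[i].tv_sec &&
                 now.tv_nsec >= deadline[i].tv_nsec))
            {
                starved[i] = 1;
                printf("Philosopher %i starved\n", i + 1);
            }
            else if (!have_next || deadline[i].tv_sec < next.tv_sec ||
                     (deadline[i].tv_sec == next.tv_sec &&
                      deadline[i].tv_nsec < next.tv_nsec))
            {
                next = deadline[i]; /* remember the minimum deadline */
                have_next = 1;
            }
        }
        pthread_mutex_unlock(&deadline_mutex);
        if (!have_next)
            break; /* everyone has starved (or eaten enough) */
        /* Sleep until the earliest deadline; waking in vain because a
           philosopher refreshed it in the meantime is harmless. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}
A philosopher would call refresh_deadline() from routine() each time it starts eating, and the is_dead checks would become lookups in starved[].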

Noticing if all threads are at pthread_cond_wait

I'm currently playing around with the POSIX library and trying out condition variables.
At the moment I'm using a queue to schedule tasks; if there is a task, one thread uses pthread_cond_signal to wake up a thread that is waiting at pthread_cond_wait.
However, there might be a point where no new tasks are created, so every thread is waiting at pthread_cond_wait and the program is stuck there.
Is there any way for me to notice if all my threads are waiting at pthread_cond_wait? I've tried using a counter and a leave variable, but I did not get it working.
My code looks similar to this code here: https://code-vault.net/lesson/j62v2novkv:1609958966824
except each thread is adding new tasks (in executeTask) instead of only the main method adding them.
EDIT:
Here is my version of the code from the website above (which is not loading for me :/)
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

#define THREAD_NUM 4

typedef struct Task
{
    int a, b;
} Task;

Task taskQueue[256];
int taskCount = 0;

// THIS VARIABLE IS ADDED
int moreTasks = 10;

pthread_mutex_t mutexQueue;
pthread_cond_t condQueue;

void submitTask(Task task); // forward declaration: executeTask calls it below

void executeTask(Task *task)
{
    usleep(50000);
    int result = task->a + task->b;
    printf("The sum of %d and %d is %d\n", task->a, task->b, result);
    // THIS PART IS ADDED
    if (moreTasks > 0)
    {
        pthread_mutex_lock(&mutexQueue);
        moreTasks--;
        pthread_mutex_unlock(&mutexQueue);
        Task t = {
            .a = rand() % 100,
            .b = rand() % 100};
        submitTask(t);
    }
}

void submitTask(Task task)
{
    pthread_mutex_lock(&mutexQueue);
    taskQueue[taskCount] = task;
    taskCount++;
    pthread_mutex_unlock(&mutexQueue);
    pthread_cond_signal(&condQueue);
}

void *startThread(void *args)
{
    while (1)
    {
        Task task;
        pthread_mutex_lock(&mutexQueue);
        while (taskCount == 0)
        {
            pthread_cond_wait(&condQueue, &mutexQueue);
        }
        task = taskQueue[0];
        int i;
        for (i = 0; i < taskCount - 1; i++)
        {
            taskQueue[i] = taskQueue[i + 1];
        }
        taskCount--;
        pthread_mutex_unlock(&mutexQueue);
        executeTask(&task);
    }
}
int main(int argc, char *argv[])
{
    pthread_t th[THREAD_NUM];
    pthread_mutex_init(&mutexQueue, NULL);
    pthread_cond_init(&condQueue, NULL);
    int i;
    for (i = 0; i < THREAD_NUM; i++)
    {
        if (pthread_create(&th[i], NULL, &startThread, NULL) != 0)
        {
            perror("Failed to create the thread");
        }
    }
    srand(time(NULL));
    for (i = 0; i < 100; i++)
    {
        Task t = {
            .a = rand() % 100,
            .b = rand() % 100};
        submitTask(t);
    }
    for (i = 0; i < THREAD_NUM; i++)
    {
        if (pthread_join(th[i], NULL) != 0)
        {
            perror("Failed to join the thread");
        }
    }
    pthread_mutex_destroy(&mutexQueue);
    pthread_cond_destroy(&condQueue);
    return 0;
}
My problem now is that after a while taskCount is always zero, and therefore all my threads are at pthread_cond_wait(&condQueue, &mutexQueue); in the startThread method.
Is there any way of noticing when all threads are at this place? As stated above, I already tried using a counter for the threads that are currently waiting (and also a version where I counted the threads that are working), but I could not figure out how to stop all the threads once every task is done.
If you want to know how many threads are waiting on a condition var, just increase a global counter before you go into wait mode and decrease it on wake up. With that you'll know how many threads are currently waiting.
e.g.
//global scope
int num_waiting_threads = 0;

//in function startThread
while (taskCount == 0)
{
    //if num_waiting_threads == THREAD_NUM, all threads are waiting
    ++num_waiting_threads;
    pthread_cond_wait(&condQueue, &mutexQueue);
    --num_waiting_threads;
}
Since you have a fixed number of tasks, after processing every task, every thread will sooner or later just be waiting around for newly submitted tasks. Your execute function will only add 10 new tasks in total (moreTasks is initialized with 10 and decreased continuously; once that counter hits zero, no more tasks will be submitted).
Therefore, you could/should use pthread_cond_timedwait and wake up by yourself to check the num_waiting_threads state. If all threads are waiting, break out of the loop and exit the thread.
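A rough sketch of that idea, reusing the question's globals (THREAD_NUM, taskCount, mutexQueue, condQueue); num_waiting_threads and shutting_down are my additions. It also assumes main submits the initial batch of tasks before creating the workers, otherwise the pool could declare itself idle before the first task arrives:
int num_waiting_threads = 0;
int shutting_down = 0;

void *startThread(void *args)
{
    while (1)
    {
        pthread_mutex_lock(&mutexQueue);
        while (taskCount == 0 && !shutting_down)
        {
            ++num_waiting_threads;
            if (num_waiting_threads == THREAD_NUM)
            {
                /* Queue empty and every worker idle: nobody can ever
                   submit again, so tell the others and stop waiting. */
                shutting_down = 1;
                pthread_cond_broadcast(&condQueue);
            }
            else
            {
                struct timespec ts;
                clock_gettime(CLOCK_REALTIME, &ts);
                ts.tv_sec += 1; /* wake up by ourselves to re-check */
                pthread_cond_timedwait(&condQueue, &mutexQueue, &ts);
            }
            --num_waiting_threads;
        }
        if (shutting_down)
        {
            pthread_mutex_unlock(&mutexQueue);
            return NULL; /* lets the pthread_join calls in main return */
        }
        /* ... dequeue the task and unlock, as in the original code ... */
        pthread_mutex_unlock(&mutexQueue);
    }
}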

How do you setup callback in reader/writer multi-threading program

I have written a reader/writer lock implementation, and what I plan to do is to set up a callback for each thread. Let's say we have 3 reader threads and all 3 of them have read a value X. Now a writer thread updates the value X to X+100. This should send a callback to all 3 reader threads telling them that the value has changed. How do we implement such a callback in a multi-threaded environment in the C programming language?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>

sem_t dbAccess;
sem_t readCountAccess;
int readCount=0;

void *Reader(void *arg);
void *Writer(void *arg);

int main(int argc, char* argv[])
{
    int i=0, num_of_readers = 0, num_of_writers = 0;
    //initializing semaphores
    sem_init(&readCountAccess,0,1);
    sem_init(&dbAccess,0,1);
    pthread_t readers_tid[100], writer_tid[100];
    num_of_readers = atoi(argv[1]);
    num_of_writers = atoi(argv[2]);
    for(i = 0; i < num_of_readers; i++)
    {
        pthread_create(&readers_tid[i], NULL, Reader, (void *)(intptr_t) i);
    }
    for(i = 0; i < num_of_writers; i++)
    {
        pthread_create(&writer_tid[i], NULL, Writer, (void *)(intptr_t) i);
    }
    for(i = 0; i < num_of_writers; i++)
    {
        pthread_join(writer_tid[i], NULL);
    }
    for(i = 0; i < num_of_readers; i++)
    {
        pthread_join(readers_tid[i], NULL);
    }
    sem_destroy(&dbAccess);
    sem_destroy(&readCountAccess);
    return 0;
}
void *Writer(void *arg)
{
    sleep(1);
    int temp=(intptr_t) arg;
    printf("Writer %d is trying to enter into database for modifying the data\n",temp);
    sem_wait(&dbAccess);
    printf("Writer %d is writing into the database\n",temp);
    printf("Writer %d is leaving the database\n",temp); /* was missing the temp argument */
    sem_post(&dbAccess);
    return NULL;
}

void *Reader(void *arg)
{
    sleep(1);
    int temp=(intptr_t) arg;
    printf("Reader %d is trying to enter into the Database for reading the data\n",temp);
    sem_wait(&readCountAccess);
    readCount++;
    if(readCount==1)
    {
        sem_wait(&dbAccess);
        printf("Reader %d is reading the database\n",temp);
    }
    sem_post(&readCountAccess);
    sem_wait(&readCountAccess);
    readCount--;
    if(readCount==0)
    {
        printf("Reader %d is leaving the database\n",temp);
        sem_post(&dbAccess);
    }
    sem_post(&readCountAccess);
    return NULL;
}
I can see 2 possible approaches:
1. Let the readers call a function when they'd like to be notified. The writer would have to mark something per reader thread to say they have to call your CB on the shared variable. This would be my preferred approach, as I'd design readers to know when they need these updates.
2. Signal the readers to have them execute some code from a signal handler. You can use SIGUSR1/SIGUSR2 for that, and add some additional marking somewhere to know what changed and what to call. This is less clean, obviously, but I imagine it would be more handy when these threads are created by a 3rd party and you cannot control what they're doing.
HTH
EDIT
I forgot another important thing: if you're only updating a POD variable (int, etc.) you may use method 1 with a seqlock and a local copy per reader thread. This way every reader would know whether the value changed.
Another consideration is whether you need the readers to know only the last value or to respond to every single change. If you need to respond to every change, you can use a lock-free queue, one per reader, and have the writer push into every one of them.
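A minimal sketch of approach 1, assuming a fixed reader count and polling readers; cb_lock, pending[], writer_update(), and reader_poll() are hypothetical names, not part of the question's code:
#include <pthread.h>

#define NUM_READERS 3

pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
int shared_x = 0;
int pending[NUM_READERS]; /* set by the writer, cleared by each reader */

void writer_update(int new_x)
{
    pthread_mutex_lock(&cb_lock);
    shared_x = new_x;
    for (int i = 0; i < NUM_READERS; i++)
        pending[i] = 1; /* every reader must be notified once */
    pthread_mutex_unlock(&cb_lock);
}

/* Each reader calls this at points where it can tolerate a notification. */
void reader_poll(int id, void (*on_change)(int))
{
    int x = 0, changed = 0;
    pthread_mutex_lock(&cb_lock);
    if (pending[id]) {
        pending[id] = 0;
        x = shared_x;
        changed = 1;
    }
    pthread_mutex_unlock(&cb_lock);
    if (changed)
        on_change(x); /* run the callback outside the lock */
}
The callback deliberately runs outside the lock so that a slow callback cannot block the writer or the other readers.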

Measuring efficiency of Mutex and Busy waiting

The program is to create several threads where each thread increments a shared variable by 10000, using a for loop that increments it by 1 in every iteration. Both mutex lock and spin lock (busy waiting) versions are required. According to what I've learned, the mutex version should be faster than the spin lock, but my implementation gave me the opposite answer...
This is the implementation of each thread in the mutex version:
void *incr(void *tid)
{
    int i;
    for(i = 0; i < 10000; i++)
    {
        pthread_mutex_lock(&the_mutex);   //Grab the lock
        sharedVar++;                      //Increment the shared variable
        pthread_mutex_unlock(&the_mutex); //Release the lock
    }
    pthread_exit(0);
}
And this is the implementation in the spin lock version:
void *incr(void *tid)
{
    int i;
    for(i = 0; i < 10000; i++)
    {
        enter_region((int)tid); //Grab the lock
        sharedVar++;            //Increment the shared variable
        leave_region((int)tid); //Release the lock
    }
    pthread_exit(0);
}

void enter_region(int tid)
{
    interested[tid] = true; //Show this thread is interested
    turn = tid;             //Set flag
    while(turn == tid && other_interested(tid)); //Busy waiting
}

bool other_interested(int tid) //interested[] is initialized to all false
{
    int i;
    for(i = 0; i < tNumber; i++)
        if(i != tid)
            if(interested[i] == true) //There are other threads that are interested
                return true;
    return false;
}

void leave_region(int tid)
{
    interested[tid] = false; //Depart from critical region
}
I also iterated the process of creating and running the threads hundreds of times to make sure the execution times could be distinguished.
For example, with tNumber set to 4 and 1000 iterations of the program, the mutex version takes 2.22 sec and the spin lock version takes 1.35 sec. The difference grows as tNumber increases. Why is this happening? Is my code wrong?
The code between enter_region and leave_region is not protected.
You can prove this by making it more complicated, to ensure that it will trip itself up.
Create an array of bools (check) of length 10000, initialized to all false. Make the code between enter and leave:
if (check[sharedVar]) printf("ERROR\n");
else check[sharedVar++] = true;
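A C rendering of that detector, with the array sized so the index stays in bounds (sharedVar ultimately reaches tNumber * 10000, so a length of exactly 10000 would overflow with more than one thread); critical_body() and T_NUMBER are illustrative names:
#include <stdbool.h>
#include <stdio.h>

#define N_INCREMENTS 10000
#define T_NUMBER 4 /* illustrative; matches the tNumber example above */

extern int sharedVar;                /* the question's shared counter */
bool check[T_NUMBER * N_INCREMENTS]; /* zero-initialized, i.e. all false */

/* Put this between enter_region and leave_region instead of sharedVar++.
   If two threads race inside the "critical" section, some value of
   sharedVar is produced twice and the duplicate is reported. */
void critical_body(void)
{
    if (check[sharedVar])
        printf("ERROR: value %d produced twice\n", sharedVar);
    else
        check[sharedVar++] = true;
}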
The "difference" in speed is that you are implementing synchronization using
interested[tid] = true; //Show this thread is interested
turn = tid; //Set flag
while(turn == tid && other_interested(tid));
which are sequential operations. Any thread can be preempted while he is doing that, and the next thread read an erroneous state.
It need to be done atomically, by implementing either a compare-and-swap or a test-and-set. These instructions are usually provided by the hardware.
For eg on x86 you have xchg, cmpxchg/cmpxchg8b, xadd
Your test could be rewritten as
while( compare_and_swap_atomic(myid,id_meaning_it_is_free) == false);
The issue is that atomicity is expensive .
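As a concrete illustration (compare_and_swap_atomic above is pseudocode), here is a minimal test-and-set spinlock using C11 <stdatomic.h> that could replace the question's enter_region/leave_region:
#include <stdatomic.h>

atomic_flag region_lock = ATOMIC_FLAG_INIT;

void enter_region(void)
{
    /* atomic_flag_test_and_set is a hardware-backed test-and-set:
       it sets the flag and returns its previous value in one step. */
    while (atomic_flag_test_and_set_explicit(&region_lock, memory_order_acquire))
        ; /* busy-wait until the previous holder clears the flag */
}

void leave_region(void)
{
    atomic_flag_clear_explicit(&region_lock, memory_order_release);
}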

Wrong implementation of Peterson's algorithm?

I was trying to learn something about parallel programming, so I tried to implement Peterson's algorithm for an easy example where one shared counter is incremented by 2 threads. I know that Peterson's isn't optimal due to busy waiting, but I tried it only for study reasons.
I assumed that the critical section of this code is in the thread function add, where the shared counter is incremented. So I call the enter_section function before the counter incrementation and the leave_section function after it. Is this part wrong? Did I assess the critical section wrongly? The problem is that the counter sometimes has an unexpected value when these 2 threads are done. It has to be a synchronization problem between the threads, but I just don't see it... Thanks for any help.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

int counter;          /* global shared counter */
int flag[2] = {0, 0}; /* Variables for Peterson's algorithm */
int turn = 0;

typedef struct threadArgs
{
    pthread_t thr_ID;
    int num_of_repeats;
    int id;
} THREADARGS;

void enter_section (int thread) {
    int other = 1 - thread;
    flag[thread] = 1;
    turn = thread;
    while ((turn == thread) && (flag[other] == 1));
    return;
}

void leave_section (int thread) {
    flag[thread] = 0;
    return;
}

void * add (void * arg) {
    int i;
    THREADARGS * a = (THREADARGS *) arg;
    for (i = 0; i < a->num_of_repeats; i++) {
        enter_section(a->id);
        counter++;
        leave_section(a->id);
    }
    return NULL;
}

int main () {
    int i = 1;
    pthread_attr_t thrAttr;
    THREADARGS threadargs_array[2];

    pthread_attr_init (&thrAttr);
    pthread_attr_setdetachstate (&thrAttr, PTHREAD_CREATE_JOINABLE);
    /* creating 1st thread */
    threadargs_array[0].id = 0;
    threadargs_array[0].num_of_repeats = 1000000;
    pthread_create(&threadargs_array[0].thr_ID, &thrAttr, add, &threadargs_array[0]);
    /* creating 2nd thread */
    threadargs_array[1].id = 1;
    threadargs_array[1].num_of_repeats = 2000000;
    pthread_create(&threadargs_array[1].thr_ID, &thrAttr, add, &threadargs_array[1]);
    /* free resources for thread attributes */
    pthread_attr_destroy (&thrAttr);
    /* waiting for 1st thread */
    pthread_join (threadargs_array[0].thr_ID, NULL);
    printf("First thread is done.\n");
    /* waiting for 2nd thread */
    pthread_join (threadargs_array[1].thr_ID, NULL);
    printf("Second thread is done.\n");
    printf("Counter value is: %d \n", counter);
    return (EXIT_SUCCESS);
}
You have several problems here:
The access to your variables will be subject to optimization, so you'd have to declare them volatile, at least.
Algorithms like this that access data between threads without any of the lock data structures provided by POSIX can only work if your primitive operations are guaranteed to be atomic, which they usually aren't on modern processors. In particular, the ++ operator is not atomic.
There are several ways around this; in particular, the new C standard, C11, offers atomic primitives (see the sketch below). But if this is really meant as a start for learning parallel programming, I'd strongly suggest that you first look into mutexes, condition variables etc., to learn how POSIX is intended to work.
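For reference, a minimal sketch of the C11 route: Peterson's entry/exit protocol written with <stdatomic.h>. This is my illustration, not part of the answer; the default sequentially consistent atomic operations restore the ordering guarantees that plain int variables don't have:
#include <stdatomic.h>

atomic_int flag[2];
atomic_int turn;

void enter_section(int thread)
{
    int other = 1 - thread;
    atomic_store(&flag[thread], 1); /* announce interest */
    atomic_store(&turn, thread);    /* give way to the other thread */
    /* seq_cst atomics prevent the store/load reordering (by both the
       compiler and the CPU) that breaks Peterson's algorithm */
    while (atomic_load(&turn) == thread && atomic_load(&flag[other]) == 1)
        ; /* busy wait */
}

void leave_section(int thread)
{
    atomic_store(&flag[thread], 0);
}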
