I am learning about FreeRTOS and would like to understand more about queues and semaphores, and when it is better to use one or the other.
For example, I have multiple tasks trying to talk to my I2C GPIO expander (toggling pins ON and OFF).
For this to work properly, I have decided to implement a semaphore. When an I2C communication is already in progress, the semaphore is taken, and it is given back only when the I2C communication is fully complete. Example functions:
void write_led1_state(bool state, bool current_state) {
    if (state != current_state) {
        if (xSemaphoreTake(i2c_mutex, 1000) == pdTRUE) {
            i2c_expander(led1, state);
            xSemaphoreGive(i2c_mutex);
        } else {
            printf("semaphore is busy, write LED1 state skipped\n");
        }
    }
}

void write_led2_state(bool state, bool current_state) {
    if (state != current_state) {
        if (xSemaphoreTake(i2c_mutex, 1000) == pdTRUE) {
            i2c_expander(led2, state);
            xSemaphoreGive(i2c_mutex);
        } else {
            printf("semaphore is busy, write LED2 state skipped\n");
        }
    }
}
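For reference, i2c_mutex is created during initialisation, before any task calls these functions; a minimal sketch:

SemaphoreHandle_t i2c_mutex;   // shared by every task that touches the expander

void i2c_init(void)            // hypothetical init function
{
    i2c_mutex = xSemaphoreCreateMutex();
    configASSERT(i2c_mutex != NULL);   // creation fails if the heap is exhausted
}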
Let's imagine a scenario where I have 2 FreeRTOS tasks running. One task is trying to write_led1_state and the other is trying to write_led2_state at the same time. The potential issue is that one of those functions will fail because the semaphore is held by the other task. How can I ensure that, even though the semaphore is not free at that particular moment, the LED will still be set once the semaphore frees?
For this particular example, is it better to implement a queue instead? For example, write_led1_state and write_led2_state would both write a value to the queue: value = 1 would mean that led1 needs to be turned ON, and value = 2 would mean that led2 needs turning ON. This way, both functions can be executed at once, and the queue handler task ensures that the required LEDs are turned ON based on the values received.
These are the 2 different approaches I can think of. Could someone suggest which one is more appropriate?
UPDATE
Method 2 (Using queues)
From what I understand, the solution using queues could look something like this (keep in mind that this is just pseudo code):
// The task below is created when we initialise the I2C interface. It runs in the background, continuously waiting for messages.
void I2C_queue_handler(void *parameters) {
    uint8_t instruction;
    while (1) {
        // Block until something arrives on the queue (no busy-waiting)
        if (xQueueReceive(i2c_queue, &instruction, portMAX_DELAY) == pdTRUE) {
            // Something has been received; determine what to do based on the instruction value
            if (instruction == 0) {
                i2c_expander(led1, 1); // this toggles led1
            } else if (instruction == 1) {
                i2c_expander(led2, 1); // this toggles led2
            }
        }
    }
}
And then my 2 functions from above can be modified to use a queue instead of a semaphore:
void write_led1_state(bool state, bool current_state) {
    if (state != current_state) {
        uint8_t instruction = 0;
        xQueueSend(i2c_queue, &instruction, 10);
    }
}

void write_led2_state(bool state, bool current_state) {
    if (state != current_state) {
        uint8_t instruction = 1;
        xQueueSend(i2c_queue, &instruction, 10);
    }
}
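For completeness, the queue and the handler task would be created during I2C initialisation, presumably something like:

QueueHandle_t i2c_queue;

void i2c_interface_init(void)   // hypothetical init function
{
    // queue of up to 8 single-byte instructions
    i2c_queue = xQueueCreate(8, sizeof(uint8_t));
    xTaskCreate(I2C_queue_handler, "i2c_q", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);
}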
Is my queue implementation above correct? Some programmer dude mentioned that I will need some sort of synchronization, but I am not sure why that is required, since my I2C_queue_handler will ensure that even when multiple tasks try to toggle the LEDs, only one toggle happens at a time. If both tasks try to toggle an LED, the first instruction sent to the queue will be acted on, while the other is saved in the queue and waits its turn. Is my understanding correct?
I want to have multiple threads execute sequentially. I need to do some work on the CPU cache, so I need two threads running on different CPU cores; the two physical cores share the last-level cache, which is what I am operating on. Pinning the threads to cores can be achieved with pthread_attr_setaffinity_np, and the sequencing can be achieved by locking. With two threads, I found it easy to make them execute sequentially:
void *trasmitter_func(void *addr) {
    while (1) {
        pthread_mutex_lock(&lock1);
        while (sequence == 0) {   // 'while', not 'if': protects against spurious wakeups
            pthread_cond_wait(&cond1, &lock1);
        }
        if (sequence == -1) {     // receiver finished: pass the signal on and exit
            pthread_cond_signal(&cond1);
            pthread_mutex_unlock(&lock1);
            break;
        }
        printf("trasmitter start\n");
        sequence = 0;
        pthread_cond_signal(&cond1);
        pthread_mutex_unlock(&lock1);
    }
    return NULL;
}

void *receiver_func(void *addr) {
    for (int i = 0; i < SAMPLE_NUMBER; i++) {
        pthread_mutex_lock(&lock1);
        while (sequence == 1) {   // see the note above about spurious wakeups
            pthread_cond_wait(&cond1, &lock1);
        }
        // printf("receiver start\n");
        sequence = 1;
        pthread_cond_signal(&cond1);
        pthread_mutex_unlock(&lock1);
    }
    // Set the exit flag under the lock and signal, so the transmitter cannot miss it
    pthread_mutex_lock(&lock1);
    sequence = -1;
    pthread_cond_signal(&cond1);
    pthread_mutex_unlock(&lock1);
    return NULL;
}
But when implementing three threads, I need two locks and two pthread_cond_signal calls, and the control flow gets complicated. I'm curious: is there some simple implementation that would allow three or more threads to execute sequentially?
Ideally I could simply pass a tag and a function to control their execution order:
execute_func_sequence(0, func_1);
execute_func_sequence(1, func_2);
execute_func_sequence(2, func_3);
execute_func_sequence(3, func_4);
Then func_1 to func_4 would be executed sequentially.
This seems to be difficult to implement using locks and signals.
What I want to emphasize is: each thread does not run just once; they keep alternating:
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
receiver start
trasmitter start
The output above is the execution result of the sample code. I need the two threads to keep executing sequentially, non-stop.
I want to run multiple threads in this form.
func_1() {
    for (int i = 0; i < SAMPLE_NUMBER; i++) {
        do work1;
        // block and enter func_2;
    }
    end = true;
}

func_2() {
    while (1) {
        do work2;
        if (end)
            break;
        // block and enter func_3;
    }
}

func_3() {
    while (1) {
        do work3;
        if (end)
            break;
        // block and enter func_1;
    }
}
Finally, I got this working:
work1,
work2,
work3,
work1,
work2,
work3,
work1,
work2,
work3,
...
work1,
work2,
work3,
A possible way of implementing what you want is:
#include <semaphore.h>

sem_t thread2_sem, thread3_sem, thread4_sem;

void func_1() {
    // do work...
    sem_post(&thread2_sem);
}

void func_2() {
    sem_wait(&thread2_sem);
    // do work...
    sem_post(&thread3_sem);
}

void func_3() {
    sem_wait(&thread3_sem);
    // do work...
    sem_post(&thread4_sem);
}

void func_4() {
    sem_wait(&thread4_sem);
    // do work...
}

int main() {
    sem_init(&thread2_sem, 0, 0);
    sem_init(&thread3_sem, 0, 0);
    sem_init(&thread4_sem, 0, 0);
    NEW_THREAD(func_1);  // placeholder for pthread_create(...)
    NEW_THREAD(func_2);
    NEW_THREAD(func_3);
    NEW_THREAD(func_4);
}
As you can see, every thread is blocked on some semaphore, and the previous thread is responsible for unblocking it.
The program starts with all the semaphores initialized to zero and, on each step, one semaphore gets incremented in order to activate the next thread, which immediately decrements it.
All semaphores go back to zero in case there's "another batch", i.e. the threads are ready to process another order (whatever that means).
Note the example functions don't have a loop; I assume you would want something like:
void func_3() {
    while (g_running) {
        sem_wait(&thread3_sem);
        // do work...
        sem_post(&thread4_sem);
    }
}
In this case, you will need something similar to wake up func_1 again.
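For example, closing the ring with one more semaphore (thread1_sem is an assumed name) so the last thread hands the turn back to func_1 for the next round:

sem_t thread1_sem;   // post it once at startup so func_1 takes the first turn

void func_1() {
    while (g_running) {
        sem_wait(&thread1_sem);
        // do work...
        sem_post(&thread2_sem);
    }
}

void func_4() {
    while (g_running) {
        sem_wait(&thread4_sem);
        // do work...
        sem_post(&thread1_sem);   // wake func_1 for the next iteration
    }
}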
I am currently trying to develop a proxy program that moves data from an SPI bus to TCP and vice versa. I would like to know if the method I intend to use is a good/intended way of utilising the FreeRTOS library. The program runs as an SPI master, with a GPIO pin the slave uses to signal that it wants to send data, since an SPI transaction can only be initiated by the master.
#include <stdatomic.h>

char buffer1[128];
char buffer2[128];
static SemaphoreHandle_t rdySem1;        // semaphore
static SemaphoreHandle_t rdySem2;        // semaphore
volatile atomic_int GPIO_interrupt_pin;  // set by the GPIO ISR
void SPI_task(void *arg)
{
    while (1)
    {
        if (GPIO_interrupt_pin)
        {
            //TODO: read data from SPI bus and place in buffer1
            xSemaphoreGive(rdySem1);
            GPIO_interrupt_pin = 0;
        }
        xSemaphoreTake(rdySem2, portMAX_DELAY);
        //TODO: send data from buffer2[] to SPI bus
    }
}

void tcp_task(void *arg)
{
    while (1)
    {
        int len;
        char rx_buffer[128];
        len = recv(sock, rx_buffer, sizeof(rx_buffer) - 1, 0);
        if (len > 0)
        {
            //TODO: process data from TCP socket and place in buffer2
            xSemaphoreGive(rdySem2);
        }
        xSemaphoreTake(rdySem1, portMAX_DELAY);
        //TODO: send data from buffer1[] to TCP
    }
}

// only runs when the GPIO pin interrupt triggers
static void isr_handler(void *arg)
{
    GPIO_interrupt_pin = 1;
}
Also, I am not very familiar with how FreeRTOS works, but I believe xSemaphoreTake is a blocking call, and it would not work in this context unless I use a non-blocking version of xSemaphoreTake(). Any kind soul that can point me in the right direction? Much appreciated.
Your current pattern has a few problems, but the fundamental idea of using semaphores can solve part of this. However, I think it's worth restructuring your code so that each task only waits on its respective receive and performs the complementary transmit upon reception, instead of trying to pass the data off to the other task. Having tcp_task wait on both TCP receiving and SPI-packet-to-TCP sending doesn't work well unless you are guaranteed a strict request/response ordering; truly asynchronous communication requires being ready to wake on either event (i.e., tcp_task can't be blocked in recv when an SPI packet comes in, or it may never forward that SPI packet until something is received over TCP).
Instead, let the tasks wait only on their respective receiving functions and send the data to the other side immediately. If there are mutual-exclusion concerns, use a mutex to guard the actual transactions. Also note that even though GPIO_interrupt_pin is atomic, without a test-and-set there is a risk it gets cleared incorrectly if an interrupt fires between the test and the clearing of the variable. Fortunately, FreeRTOS provides a nicer mechanism in the form of task notifications (and the API I am using here behaves very much like a semaphore).
void SPI_task(void *arg)
{
    while (1) {
        // Wait on the SPI data-ready pin (notification sent from the ISR);
        // 0 = decrement the notification count rather than clearing it
        ulTaskNotifyTake(0, portMAX_DELAY);
        // There is SPI data; grab a mutex to avoid conflicting SPI transactions
        xSemaphoreTake(spi_mutex, portMAX_DELAY);
        char data[128];
        spi_recv(data, sizeof(data)); // whatever the SPI function is, maybe get length out of it
        xSemaphoreGive(spi_mutex);
        // Send the data from this task; no need to have the other task handle it (probably)
        send(sock, data, sizeof(data), 0);
    }
}

void tcp_task(void *arg)
{
    while (1) {
        char data[128];
        int len = recv(sock, data, sizeof(data), 0);
        if (len > 0) {
            // Just grab the SPI mutex and do the transfer here
            xSemaphoreTake(spi_mutex, portMAX_DELAY);
            spi_send(data, len);
            xSemaphoreGive(spi_mutex);
        }
    }
}

static void isr_handler(void *arg)
{
    vTaskNotifyGiveFromISR(SPI_task_handle, NULL);
}
The above is a simplified example, and there's a bit more depth to go into for task notifications, which you can read about here:
https://www.freertos.org/RTOS-task-notifications.html
I'm trying to build a system with multiple tasks on a CC3200 Wi-Fi (TI) LaunchPad board with FreeRTOS. I created three tasks in my main:
// Create task1
osi_TaskCreate( task1, ( signed portCHAR * ) "task1",
                OSI_STACK_SIZE, NULL, 1, NULL );
// Two more tasks are created the same way
All three tasks have the same priority (1), and therefore I expect that all three tasks will get the same amount of processor time.
Each task is only responsible for printing its name on the UART port:
void task1( void *pvParameters )
{
    while (1)
    {
        Report("task1");
    }
}
Unfortunately, I only see task 1 printing its name all the time. What should I do to fix this?
As far as my memory of FreeRTOS goes, if you create all of your tasks with equal priority, then you only get the equal sharing you'd like if you either don't define configUSE_TIME_SLICING (it defaults to 1) or define it and set it to 1.
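In FreeRTOSConfig.h terms, that means something like:

/* FreeRTOSConfig.h: both options are needed for round-robin
   sharing between ready tasks of equal priority */
#define configUSE_PREEMPTION    1
#define configUSE_TIME_SLICING  1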
When it comes to multiple threads competing for access to a hardware resource (or a shared memory resource), you always want to control access to it somehow. In this case the simplest (though not the fastest) option is to use a mutex; FreeRTOS also has binary semaphores, which accomplish the same thing and could be slightly faster. Generally, though, a mutex and a binary semaphore are interchangeable here. For the details of the two, I'd go and read the FreeRTOS docs on them; that should clear things up.
If you'll forgive the pseudo code, you want each thread to be doing something along the lines of the following:
createMutex(UART_Lock)

void task1
{
    while(1)
    {
        if(GetLockOnMutex(UART_Lock))
        {
            PrintToUART();
            ReleaseMutex();
        }
    }
}

void task2
{
    while(1)
    {
        if(GetLockOnMutex(UART_Lock))
        {
            PrintToUART();
            ReleaseMutex();
        }
    }
}

void task3
{
    while(1)
    {
        if(GetLockOnMutex(UART_Lock))
        {
            PrintToUART();
            ReleaseMutex();
        }
    }
}
So when each thread is brought into context, it will try to get a lock on the mutex that limits access to the UART. If it succeeds, it will send something, and only when the printing function returns (which could take multiple time slices) will it release the lock for another thread to try to get. If a thread can't get the lock, it just tries again until its time slice is up. You could have a thread that fails to get the lock put itself back to sleep until it's next brought into context, but that's only really important if your CPU is quite busy and you're having to think about whether your tasks are actually schedulable.
Basically, if you don't control access to the UART, and there is no guarantee that a thread completes its UART access within its time slice, then the scheduler can pre-empt the unfinished thread and others can attempt to use the UART.
It would be logical to assume the UART's send buffer might sort this out in your case, but you really don't want to rely on it: it's only so big, and there's nothing to stop one thread filling it up completely.
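In actual FreeRTOS API terms, the mutex would be created once before the tasks start; a sketch (handle name assumed):

SemaphoreHandle_t uart_lock;

void init(void)
{
    uart_lock = xSemaphoreCreateMutex();
    configASSERT(uart_lock != NULL);   // creation fails if the heap is exhausted
}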
Thanks! I implemented this as follows:
void vTestTask1( void *pvParameters )
{
    while (1) {
        if (xSemaphoreTake(uart_lock, 1000)) {
            // Use guarded resource
            Report("1");
            // Give semaphore back
            xSemaphoreGive(uart_lock);
        }
        vTaskDelay(1000);
    }
}

void vTestTask2( void *pvParameters )
{
    while (1) {
        if (xSemaphoreTake(uart_lock, 1000)) {
            // Use guarded resource
            Report("2");
            // Give semaphore back
            xSemaphoreGive(uart_lock);
        }
        vTaskDelay(1000);
    }
}
It works perfectly, printing 121212 etc. via the UART.
I have the following:
f1()
{
    while(1)
    {
        call f2() if hardware interrupt pin goes high
    }
}

f2()
{
    if( th() not started )
    {
        start thread th()
    }
    else
    {
        return thread th() status
    }
}

th()
{
    time-consuming operation
}
At the moment, I use the following to initialize a struct in f2():
static struct SharedData *shared = NULL;
if (shared == NULL)
{
    // allocate and initialize shared
}
Then I pass a pointer to shared to the thread. The thread then updates shared periodically. f2() will know whether th() has been started based on elements of shared, and it will check the status of th() by reading from shared.
Let's assume one of the elements of shared is a mutex to provide thread safety. Is this a good solution? Is there a more elegant way of doing this? I have tested the code and it works; I just need some expert advice here.
Thanks,
Assuming that f2() locks the same mutex in the shared structure before reading the data that the thread th modifies, I don't see any issues.
If you have more than one thread calling f2(), you may want to use a read-write lock for reading and writing the thread status of th. The mutex could still be used to serialize the thread-creation check. You could also use pthread_rwlock_wrlock() to serialize th creation, but the code is arguably less clear.
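Both snippets below assume a shared structure along these lines (field names inferred from the code):

struct SharedData {
    pthread_rwlock_t rwlock;   /* guards reads/writes of th_status */
    pthread_mutex_t  mutex;    /* serializes the creation of th */
    int th_created;            /* set once th() has been started */
    int th_status;             /* updated periodically by th() */
};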
Using a mutex to serialize th creation in f2():
pthread_rwlock_rdlock(&shared.rwlock);
result = shared.th_status;
if (!shared.th_created) {
    pthread_mutex_lock(&shared.mutex);
    if (!shared.th_created) {
        pthread_create(...);
        shared.th_created = 1;
    }
    pthread_mutex_unlock(&shared.mutex);
}
pthread_rwlock_unlock(&shared.rwlock);
return result;
Using the read-write lock to serialize th creation in f2():
pthread_rwlock_rdlock(&shared.rwlock);
result = shared.th_status;
if (!shared.th_created) {
    pthread_rwlock_unlock(&shared.rwlock);
    pthread_rwlock_wrlock(&shared.rwlock);
    if (!shared.th_created) {
        pthread_create(...);
        shared.th_created = 1;
    }
}
pthread_rwlock_unlock(&shared.rwlock);
return result;
So, I am trying to implement a concurrent queue in C. I have split the methods into "read methods" and "write methods": when accessing the write methods, like push() and pop(), I acquire a writer lock, and likewise for the read methods. We can have several readers but only one writer.
To get this to work in code, I have a mutex lock for the entire queue and two condition variables, one for the writers and one for the readers. I also have two integers keeping track of the number of readers and writers currently using the queue.
So my main question is: how do I let several readers access the read methods at the same time?
At the moment this is my general read-method code (in pseudo code, not C; I am actually using pthreads):
mutex.lock();
while (nwriter > 0) {
    wait(&reader);
    mutex.unlock();
}
nreader++;
// Critical code
nreader--;
if (nreader == 0) {
    signal(&writer);
}
mutex.unlock();
So, imagine we have a reader which holds the mutex. Any other reader that comes along and tries to get the mutex would not be able to; wouldn't it block? Then how can many readers access the read methods at the same time?
Is my reasoning correct? If so, how do I solve the problem?
If this is not for an exercise, use a read-write lock from pthreads (the pthread_rwlock_* functions).
Also note that protecting individual calls with a lock still might not provide the necessary correctness guarantees. For example, typical code for popping an element from an STL queue is:
if (!queue.empty()) {
    data = queue.front();
    queue.pop();
}
And this will fail in concurrent code even if locks are used inside the queue methods, because conceptually this code must be one atomic transaction, but the implementation does not provide such a guarantee. A thread may pop a different element than the one it read with front(), or attempt to pop from an empty queue, etc.
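The fix is to make the whole test-read-pop sequence one critical section. A sketch in C/pthreads terms, where queue_empty/queue_front/queue_pop are hypothetical stand-ins for the queue's methods:

pthread_mutex_lock(&queue_mutex);
if (!queue_empty(&queue)) {         /* test ...                        */
    data = queue_front(&queue);     /* ... read ...                    */
    queue_pop(&queue);              /* ... and pop, all under one lock */
}
pthread_mutex_unlock(&queue_mutex);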
Please find the following read/write functions.
In my functions, I use canRead and canWrite mutexes and nReaders for the number of readers:
Write function:
lock(canWrite)      // wait if the mutex is not free
// Write
unlock(canWrite)
Read function:
lock(canRead)       // this mutex protects nReaders
nReaders++          // initial value should be 0 (no readers)
if (nReaders == 1)  // first reader in
{
    lock(canWrite)  // no writers can enter the critical section
}
unlock(canRead)

// Read

lock(canRead)
nReaders--
if (nReaders == 0)  // no more readers
{
    unlock(canWrite)  // a writer can enter the critical section
}
unlock(canRead)
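This is the classic readers-preference pattern: the first reader in locks writers out, and the last reader out lets them back in. One caveat: a continuous stream of readers can starve a writer indefinitely with this scheme.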
A classic solution is multiple-readers, single-writer.
A data structure begins with no readers and no writers.
You permit any number of concurrent readers.
When a writer comes along, you block him until all current readers complete; then you let him go. Any new readers and writers that come along while the writer is blocked queue up behind him, in order.
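With pthreads this maps directly onto pthread_rwlock_t; a minimal sketch:

#include <pthread.h>

static pthread_rwlock_t qlock = PTHREAD_RWLOCK_INITIALIZER;

void reader(void) {
    pthread_rwlock_rdlock(&qlock);   /* any number of readers may hold this concurrently */
    /* read-only access to the data structure */
    pthread_rwlock_unlock(&qlock);
}

void writer(void) {
    pthread_rwlock_wrlock(&qlock);   /* blocks until all readers have released the lock */
    /* modify the data structure */
    pthread_rwlock_unlock(&qlock);
}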
You may try the lfqueue library: it is built in native C, lock-free, and suitable for cross-platform use.
For example:
int *int_data;
lfqueue_t my_queue;

if (lfqueue_init(&my_queue) == -1)
    return -1;

/** Wrap this scope in other threads **/
int_data = (int *) malloc(sizeof(int));
assert(int_data != NULL);
*int_data = i++;  /* 'i' is a counter assumed to be defined by the caller */
/* Enqueue */
while (lfqueue_enq(&my_queue, int_data) == -1) {
    printf("ENQ Full ?\n");
}

/** Wrap this scope in other threads **/
/* Dequeue */
while ((int_data = lfqueue_deq(&my_queue)) == NULL) {
    printf("DEQ EMPTY ..\n");
}
// printf("%d\n", *(int *) int_data);
free(int_data);
/** End **/

lfqueue_destroy(&my_queue);
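Note that lfqueue_deq returns NULL when the queue is empty, so the dequeue loop above spins; in a real consumer you would typically sleep or yield between attempts rather than busy-wait.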