I'm trying to build a system with multiple tasks on a TI CC3200 Wi-Fi LaunchPad board with FreeRTOS. I created three tasks in my main:
// Create task1
osi_TaskCreate( task1, ( signed portCHAR * ) "task1",
                OSI_STACK_SIZE, NULL, 1, NULL );
//Two more tasks
All three tasks have the same priority (1), so I expect each to get an equal share of processor time.
Each task is only responsible for printing its name on the uart port:
void task1( void *pvParameters )
{
    while(1)
    {
        Report("task1");
    }
}
Unfortunately, I only ever see task 1 printing its name. What should I do to fix this?
As far as my memory of FreeRTOS goes, if you create all of your threads with equal priority, you only get the equal sharing you'd like if `configUSE_TIME_SLICING` is either left undefined (it defaults to 1) or explicitly defined as 1.
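Both of the relevant options live in FreeRTOSConfig.h. A minimal fragment using the stock option names (worth checking against the header shipped with the CC3200 SDK, since ports sometimes pre-set these):

```c
/* FreeRTOSConfig.h -- stock option names; verify against your port */
#define configUSE_PREEMPTION    1   /* scheduler may pre-empt running tasks */
#define configUSE_TIME_SLICING  1   /* round-robin equal-priority tasks on each
                                       tick; 1 is also the default if undefined */
```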
When multiple threads compete for access to a hardware resource (or shared memory), you always want to control access to it somehow. In this case the simplest (though not fastest) option is a mutex. FreeRTOS also has binary semaphores, which accomplish the same thing and could be slightly faster; generally a mutex and a binary semaphore are interchangeable for this. For the details of the two, I'd go and read the FreeRTOS docs on them, which should clear things up.
If you'll forgive the pseudocode, you want each thread to be doing something along the lines of the following:
createMutex(UART_Lock)

void task1
{
    while(1)
    {
        if(GetLockOnMutex(UART_Lock))
        {
            PrintToUART();
            ReleaseMutex();
        }
    }
}

void task2
{
    while(1)
    {
        if(GetLockOnMutex(UART_Lock))
        {
            PrintToUART();
            ReleaseMutex();
        }
    }
}

void task3
{
    while(1)
    {
        if(GetLockOnMutex(UART_Lock))
        {
            PrintToUART();
            ReleaseMutex();
        }
    }
}
So when each thread is brought into context, it will try to get a lock on the mutex that limits access to the UART. If it succeeds, it will send something, and only when the printing function returns (which could take multiple time slices) will it release the lock for another thread to try to get. If a thread can't get a lock, it just tries again until its time slice is up. You could have a thread that fails to get a lock put itself back to sleep until it's next brought into context, but that only really matters if your CPU is quite busy and you're having to think about whether your tasks are actually schedulable.
Basically, if you don't control access to the UART, there is no guarantee that a thread completes its access to the UART within its time slice, so the scheduler could pre-empt the unfinished thread and others could attempt to use the UART.
It would be logical to assume the UART's send buffer might sort it out in your case, but you really don't want to rely on that: the buffer is only so big, and there's nothing to stop one thread filling it up completely.
Thanks!
I implemented this as follows:
void vTestTask1( void *pvParameters )
{
    while(1) {
        if(xSemaphoreTake(uart_lock, 1000)) {
            // Use guarded resource
            Report("1");
            // Give the semaphore back
            xSemaphoreGive(uart_lock);
        }
        vTaskDelay(1000);
    }
}

void vTestTask2( void *pvParameters )
{
    while(1) {
        if(xSemaphoreTake(uart_lock, 1000)) {
            // Use guarded resource
            Report("2");
            // Give the semaphore back
            xSemaphoreGive(uart_lock);
        }
        vTaskDelay(1000);
    }
}
It worked perfectly: it prints 121212 and so on via the UART.
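For completeness, the semaphore itself has to be created before either task runs. A minimal sketch of the missing initialisation, assuming the handle name `uart_lock` from the tasks above (the task-creation calls and the rest of main are elided):

```c
#include "FreeRTOS.h"
#include "semphr.h"

SemaphoreHandle_t uart_lock;

void init_uart_lock(void)
{
    uart_lock = xSemaphoreCreateMutex();   /* returns NULL if the heap is exhausted */
    configASSERT(uart_lock != NULL);
    /* ...then create vTestTask1/vTestTask2 and start the scheduler... */
}
```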
Related
I am learning FreeRTOS and would like to understand more about queues and semaphores and when it is better to use one or the other.
For example, I have multiple tasks trying to talk to my I2C GPIO expander (toggling pins ON and OFF).
For this to work properly, I decided to implement a semaphore: while an I2C transaction is in progress the semaphore is taken, and it is given back only when the transaction is fully complete. An example function:
void write_led1_state(bool state, bool current_state){
    if(state != current_state){
        if(xSemaphoreTake(i2c_mutex, 1000) == pdTRUE){
            i2c_expander(led1, state);
            xSemaphoreGive(i2c_mutex);
        }
        else{
            printf("semaphore is busy, write LED1 state failed\n");
        }
    }
}

void write_led2_state(bool state, bool current_state){
    if(state != current_state){
        if(xSemaphoreTake(i2c_mutex, 1000) == pdTRUE){
            i2c_expander(led2, state);
            xSemaphoreGive(i2c_mutex);
        }
        else{
            printf("semaphore is busy, write LED2 state failed\n");
        }
    }
}
Let's imagine a scenario where I have two FreeRTOS tasks running. One task is trying to write_led1_state and the other is trying to write_led2_state at the same time. The potential issue is that one of those functions will fail because the semaphore is taken by the other task. How do I ensure that, even though the semaphore is not free at that particular moment, the LED will still be set once the semaphore frees?
For this particular example, is it better to implement a queue instead? For example, write_led1_state and write_led2_state would both write a value to the queue: value = 1 would mean that led1 needs to be turned ON, and value = 2 that led2 needs turning ON. This way both functions can be called at once, and the queue-handler task ensures that the required LEDs are turned ON based on the values received.
These are the two approaches I can think of. Could someone suggest which one is more appropriate?
UPDATE
Method 2 (Using queues)
From what I understand, the solution using queues could look something like this (keep in mind that this is just pseudocode):
// The task below will be created when we initialise the I2C interface. It runs
// in the background, continuously waiting for messages.
void I2C_queue_handler(void *parameters) {
    uint8_t instruction;
    while (1) {
        // Block until something arrives on the queue (portMAX_DELAY = wait
        // forever; a timeout of 0 would spin instead of waiting)
        if (xQueueReceive(i2c_queue, (void *)&instruction, portMAX_DELAY) == pdTRUE) {
            // Something has been received; decide what to do from the instruction value
            if (instruction == 0) {
                i2c_expander(led1, 1); // this toggles led1
            }
            else if (instruction == 1) {
                i2c_expander(led2, 1); // this toggles led2
            }
        }
    }
}
And then my two functions from above can be modified to use a queue instead of a semaphore:
void write_led1_state(bool state, bool current_state){
    if(state != current_state){
        uint8_t instruction = 0;
        xQueueSend(i2c_queue, (void *)&instruction, 10);
    }
}

void write_led2_state(bool state, bool current_state){
    if(state != current_state){
        uint8_t instruction = 1;
        xQueueSend(i2c_queue, (void *)&instruction, 10);
    }
}
Is my queue implementation above correct? @Some programmer dude mentioned that I will need some sort of synchronization, but I am not sure why that is required, since my I2C_queue_handler ensures that even when multiple tasks try to toggle the LEDs, only one toggle happens at a time. If both tasks try to toggle an LED, the first instruction sent to the queue is acted on while the other is saved in the queue and waits its turn. Is my understanding correct?
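One detail the pseudocode leaves out: the queue must be created before either the handler task or the writers touch it. A hedged sketch (the function name and the depth of 8 are my guesses, not from the original post):

```c
#include "FreeRTOS.h"
#include "queue.h"

QueueHandle_t i2c_queue;

void i2c_queue_init(void)
{
    /* room for 8 pending one-byte instructions; size to your worst-case burst */
    i2c_queue = xQueueCreate(8, sizeof(uint8_t));
    configASSERT(i2c_queue != NULL);
}
```

Note also that if xQueueSend's 10-tick timeout expires because the queue is full, the instruction is silently dropped, so check its return value if a missed LED update matters.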
I've been working on a project lately, and I need to manage a pair of thread pools.
What the worker threads in the pools do is basically execute a pop operation on their respective queue, wait on a condition variable (pthread_cond_t) if there is no available value in the queue, and once they get an item, parse it and execute operations accordingly.
What I'm concerned about is that I want no memory leaks, and to achieve that I noticed that calling pthread_cancel on each thread when the main process is exiting is definitely a bad idea, as it leaves a lot of garbage around.
My first thought was to use an exit flag which I set when the threads need to exit, so that they can free their memory and call pthread_exit...
I guess I should set this flag, then broadcast on the condition variable the threads are waiting on, and check the flag right after the pop operation...
Is this really the correct way to implement thread pool termination? I don't feel that confident about it...
I'm writing some pseudocode here to explain what I'm talking about.
Each pool thread will run some code structured like this:
/* Worker thread (which will run on each pool thread) */
{ /* Thread initialization */ ... }
loop {
    value = pop();
    { /* Mutex lock because of the shared flag */ ... }
    if (flag) { { /* Free memory and unlock mutex */ ... } pthread_exit(); }
    { /* Unlock the mutex */ ... }
    { /* Elaborate value */ ... }
}
return NULL;
And there will be some kind of pool_stopRunning() function which will look like:
/* pool_stopRunning() function code */
{ /* Acquire flag mutex */ ... }
setFlag(flag);
{ /* Unlock flag mutex */ ... }
{ /* Acquire queue mutex */ ... }
pthread_cond_broadcast(...);
{ /* Unlock queue mutex */ ... }
Thanks in advance; I just need to be sure that there isn't a fancier way to stop a thread pool... (or to learn a better way, by any chance).
As always, sorry for any typos; I'm not a native English speaker and it's kind of late right now.
What you are describing will work, but I would suggest a different approach...
You already have a mechanism for assigning tasks to threads, complete with all appropriate synchronization. So instead of complicating the design with some new parallel mechanism, just define a new type of task called "STOP". If there are N threads servicing a queue and you want to terminate them, push N STOP tasks onto the queue. Then just wait for all of the threads to terminate. (This last can be done via "join", so it should not require any new mechanism, either.)
No muss, no fuss.
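To make the shape concrete, here is a hedged, self-contained sketch of the poison-pill idea in plain pthreads. The queue is a toy ring buffer standing in for the pool's real queue, and all the names (run_pool_demo, TASK_STOP, and so on) are mine, not from the question:

```c
#include <pthread.h>
#include <stddef.h>

#define QCAP 256
enum { TASK_WORK, TASK_STOP };   /* TASK_STOP is the "poison pill" */

typedef struct {
    int items[QCAP];
    size_t head, tail, count;
    pthread_mutex_t mtx;
    pthread_cond_t nonempty;
} Queue;

void queue_init(Queue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->mtx, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

void queue_push(Queue *q, int task) {
    pthread_mutex_lock(&q->mtx);
    q->items[q->tail] = task;            /* demo assumes the queue never overflows */
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->mtx);
}

int queue_pop(Queue *q) {
    pthread_mutex_lock(&q->mtx);
    while (q->count == 0)                /* wait on the condvar while empty */
        pthread_cond_wait(&q->nonempty, &q->mtx);
    int task = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_mutex_unlock(&q->mtx);
    return task;
}

static int work_done;                    /* demo counter */
static pthread_mutex_t done_mtx = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    Queue *q = arg;
    for (;;) {
        int task = queue_pop(q);
        if (task == TASK_STOP)           /* poison pill: clean up and leave */
            return NULL;
        pthread_mutex_lock(&done_mtx);   /* stand-in for doing real work */
        work_done++;
        pthread_mutex_unlock(&done_mtx);
    }
}

int run_pool_demo(int nthreads, int ntasks) {
    Queue q;
    queue_init(&q);
    pthread_t tids[8];                   /* at most 8 workers in this demo */
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tids[i], NULL, worker, &q);
    for (int i = 0; i < ntasks; i++)
        queue_push(&q, TASK_WORK);
    for (int i = 0; i < nthreads; i++)   /* one STOP per worker */
        queue_push(&q, TASK_STOP);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);     /* join: nothing leaks */
    return work_done;
}
```

Because the queue is FIFO, every real task is drained before any worker sees a pill, and each worker consumes exactly one pill before exiting.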
With respect to symmetry in setting the flag and reducing serialization, this code:
{ /* Mutex lock because of the shared flag */ ... }
if (flag) { { /* Free memory and unlock mutex */ ... } pthread_exit(); }
{ /* Unlock the mutex */ ... }
should look like this:
{ /* Mutex lock because of the shared flag */ ... }
flagcopy = readFlag();
{ /* Unlock the mutex */ ... }
if (flagcopy) { { /* Free memory */ ... } pthread_exit(); }
Having said that, you can (should?) factor the mutex code into the setFlag and readFlag methods.
There is one more thing. If the flag is only a boolean and it is only changed once before the whole thing shuts down (i.e., it's never unset after being set), then I would argue that protecting the read with a mutex is not required.
I say this because if the above assumptions are true and if the loop's duration is very short and the loop iteration frequency is high, then you would be imposing undue serialization upon the business task and potentially increasing the response time unacceptably.
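The setFlag/readFlag factoring suggested above might look like this in real code; the struct wrapper and function names beyond those two are my additions:

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    bool value;
    pthread_mutex_t mtx;
} ExitFlag;

void flag_init(ExitFlag *f) {
    f->value = false;
    pthread_mutex_init(&f->mtx, NULL);
}

void setFlag(ExitFlag *f) {
    pthread_mutex_lock(&f->mtx);
    f->value = true;
    pthread_mutex_unlock(&f->mtx);
}

bool readFlag(ExitFlag *f) {
    pthread_mutex_lock(&f->mtx);    /* copy under the lock ...            */
    bool copy = f->value;
    pthread_mutex_unlock(&f->mtx);  /* ... then act on the copy outside it */
    return copy;
}
```

The worker then calls readFlag() once per loop iteration and acts on the returned copy, which keeps the critical section as short as possible.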
My problem is that I cannot reuse a cancelled pthread. Sample code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_t alg;
pthread_t stop_alg;
int thread_available;

void *stopAlgorithm() {
    while (1) {
        sleep(6);
        if (thread_available == 1) {
            pthread_cancel(alg);
            printf("Now it's dead!\n");
            thread_available = 0;
        }
    }
}

void *algorithm() {
    while (1) {
        printf("I'm here\n");
    }
}

int main() {
    thread_available = 0;
    pthread_create(&stop_alg, NULL, stopAlgorithm, 0);
    while (1) {
        sleep(1);
        if (thread_available == 0) {
            sleep(2);
            printf("Starting algorithm\n");
            pthread_create(&alg, NULL, algorithm, 0);
            thread_available = 1;
        }
    }
}
This sample should create two threads: the first is created at program start and tries to cancel the second as soon as it starts; the second should be re-run as soon as it is cancelled and say "I'm here". But once the algorithm thread is cancelled, it doesn't start again: the program says "Starting algorithm" and then does nothing, with no more "I'm here" messages. Could you please tell me how to start a cancelled (immediately stopped) thread again?
UPD: Thanks to your help I understood what the problem is. When I re-run the algorithm thread it throws error 11: "The system lacked the necessary resources to create another thread, or the system-imposed limit on the total number of threads in a process PTHREAD_THREADS_MAX would be exceeded." Actually I have 5 threads, but only one is cancelled; the others stop via pthread_exit. After the algorithm stopped and the program went to standby mode, I checked the status of all threads with pthread_join: all returned 0 (the cancelled one returned PTHREAD_CANCELED), which as far as I understand means all threads stopped successfully. But one more attempt to run the algorithm throws error 11 again. So I checked memory usage: in standby mode before the algorithm, 10428; during the algorithm, with all threads in use, 2026m; in standby mode after the algorithm stopped, 2019m. So even though the threads stopped they still use memory, and pthread_detach didn't help with this. Are there any other ways to clean up after threads?
Also, sometimes on pthread_cancel my program crashes with "libgcc_s.so.1 must be installed for pthread_cancel to work"
Several points:
First, this is not safe:
int thread_available;

void *stopAlgorithm() {
    while (1) {
        sleep(6);
        if (thread_available == 1) {
            pthread_cancel(alg);
            printf("Now it's dead!\n");
            thread_available = 0;
        }
    }
}
It's not safe for at least two reasons. Firstly, you've not marked thread_available as volatile, which means the compiler can optimise stopAlgorithm to read the variable once and never reread it. Secondly, you haven't ensured access to it is atomic, or protected it with a mutex. Either declare it:
volatile sig_atomic_t thread_available;
(or similar), or better, protect it by a mutex.
But for the general case of triggering one thread from another, you are better off using a condition variable (and a mutex): pthread_cond_wait or pthread_cond_timedwait in the listening thread, and pthread_cond_broadcast in the triggering thread.
Next, what's the point of the stopAlgorithm thread? All it does is cancel the algorithm thread after an unpredictable amount of time between 0 and 6 seconds. Why not just send the pthread_cancel from the main thread?
Next, do you care where your algorithm is when it is cancelled? If not, just pthread_cancel it. If so (and anyway, I think it's far nicer), regularly check a flag (either atomic and volatile as above, or protected by a mutex) and pthread_exit if it's set. If your algorithm does big chunks every second or so, check the flag then. If it does lots of tiny things, check it (say) every 1,000 operations, so taking the mutex doesn't introduce a performance penalty.
Lastly, if you cancel a thread (or if it pthread_exits), the way you start it again is simply to call pthread_create again. It's then a new thread running the same code.
I have a hash table implementation in C where each location in the table is a linked list (to handle collisions). These linked lists are inherently thread safe and so no additional thread-safe code needs to be written at the hash table level if the table is a constant size - the hash table is thread-safe.
However, I would like the hash table to dynamically expand as values were added so as to maintain a reasonable access time. For the table to expand though, it needs additional thread-safety.
For the purposes of this question, procedures which can safely occur concurrently are 'benign' and the table resizing procedure (which cannot occur concurrently) is 'critical'. Threads currently using the list are known as 'users'.
My first solution was to add 'preamble' and 'postamble' code to all the critical functions, which locks a mutex and then waits until there are no current users proceeding. Then I added preamble and postamble code to the benign functions to check whether a critical function was waiting, and if so to wait on the same mutex until the critical section is done.
In pseudocode the pre/post-amble functions SHOULD look like:
benignPreamble(table) {
    if (table->criticalIsRunning) {
        waitUntilSignal;
    }
    incrementUserCount(table);
}

benignPostamble(table) {
    decrementUserCount(table);
}

criticalPreamble(table) {
    table->criticalIsRunning = YES;
    waitUntilZero(table->users);
}

criticalPostamble(table) {
    table->criticalIsRunning = NO;
    signalCriticalDone();
}
My actual code is shown at the bottom of this question and uses (perhaps unnecessarily) caf's PriorityLock from this SO question. My implementation, quite frankly, smells awful. What is a better way to handle this situation? At the moment I'm looking for a way to signal to a mutex that it is irrelevant and 'unlock all waiting threads' simultaneously, but I keep thinking there must be a simpler way. I am trying to code it in such a way that any thread-safety mechanisms are 'ignored' if the critical process is not running.
Current Code
void startBenign(HashTable *table) {
    // Ignore if the critical process can't be running (users >= 1)
    if (table->users == 0) {
        // Blocks if the critical process is running
        PriorityLockLockLow(&(table->lock));
        PriorityLockUnlockLow(&(table->lock));
    }
    __sync_add_and_fetch(&(table->users), 1);
}

void endBenign(HashTable *table) {
    // Decrement user count (baseline is 1)
    __sync_sub_and_fetch(&(table->users), 1);
}

int startCritical(HashTable *table) {
    // Get the lock
    PriorityLockLockHigh(&(table->lock));
    // Decrement user count BELOW baseline (1) to hit zero eventually
    __sync_sub_and_fetch(&(table->users), 1);
    // Wait for all concurrent threads to finish
    while (table->users != 0) {
        usleep(1);
    }
    // Once we have zero users (any new ones will be
    // held at the lock) we can proceed.
    return 0;
}

void endCritical(HashTable *table) {
    // Increment back to baseline of 1
    __sync_add_and_fetch(&(table->users), 1);
    // Unlock
    PriorityLockUnlockHigh(&(table->lock));
}
It looks like you're trying to reinvent the reader-writer lock, which I believe pthreads provides as a primitive. Have you tried using that?
More specifically, your benign functions should be taking a "reader" lock, while your critical functions need a "writer" lock. The end result will be that as many benign functions can execute as desired, but when a critical function starts executing it will wait until no benign functions are in process, and will block additional benign functions until it has finished. I think this is what you want.
I have a function, say void *WorkerThread(void *ptr).
It has an infinite loop which continuously reads from and writes to the serial port. For example:
void *WorkerThread( void *ptr)
{
    while(1)
    {
        // READ AND WRITE from serial port using MUTEX_LOCK and MUTEX_UNLOCK
    } // while ends
}
The other function I wrote is ThreadTest. For example:
int ThreadTest()
{
    pthread_t Worker;
    int iret1;
    pthread_mutex_init(&stop_mutex, NULL);
    if( (iret1 = pthread_create(&Worker, NULL, WorkerThread, NULL)) == 0)
    {
        pthread_mutex_lock(&stop_mutex);
        stopThread = true;
        pthread_mutex_unlock(&stop_mutex);
    }
    if (stopThread != false)
        stopThread = false;
    pthread_mutex_destroy(&stop_mutex);
    return 0;
}
In my main function I have something like:
int main(int argc, char **argv)
{
    fd = OpenSerialPort();
    if( ConfigurePort(fd) < 0) return 0;
    while (true)
    {
        ThreadTest();
    }
    return 0;
}
Now, when I run this sort of code with debug statements, it runs fine for a few hours and then throws a message like "can't able to create thread", and the application terminates.
Does anyone have an idea where I am making a mistake?
Also, is there a way to run ThreadTest in main without using while(true), since I am already using while(1) in WorkerThread to read and write infinitely?
All comments and criticism are welcome.
Thanks & regards,
SamPrat.
You are creating threads continually and might be hitting the limit on the number of threads.
The pthread_create man page says:
EAGAIN Insufficient resources to create another thread, or a system-imposed
limit on the number of threads was encountered. The latter case may
occur in two ways: the RLIMIT_NPROC soft resource limit (set via
setrlimit(2)), which limits the number of process for a real user ID,
was reached; or the kernel's system-wide limit on the number of
threads, /proc/sys/kernel/threads-max, was reached.
You should rethink the design of your application. Creating an infinite number of threads is not a good design.
[UPDATE]
You are using a lock to set an integer variable:
pthread_mutex_lock(&stop_mutex);
stopThread = true;
pthread_mutex_unlock(&stop_mutex);
However, this is not required, as setting an int is atomic (on probably all architectures?). You should use a lock when you are doing non-atomic operations, e.g. test-and-set:
take_lock();
if (a != 1)
    a = 1;
release_lock();
You create a new thread each time ThreadTest is called, and never destroy these threads. So eventually you (or the OS) run out of thread handles (a limited resource).
Threads consume resources (memory and processing), and you're creating a thread each time your main loop calls ThreadTest(). Resources are finite but your loop is not, so this will eventually fail with a resource allocation error.
You should get rid of the main loop, make ThreadTest return the newly created thread handle (pthread_t), and have main wait for the thread's termination using pthread_join.
Your pthreads are zombies and consume system resources. On Linux you can use ulimit -a to check your active upper limits; they are not infinite either. Use pthread_join() to let a thread finish and release the resources it consumed.
Do you know that select() is able to read from multiple (device) handles? You can also define a user-defined source to stop select(), or a timeout. With this in mind you can start one thread and let it sleep if nothing occurs. If you intend to stop it, you can send an event (or let the timeout expire) to break out of the select() call.
An additional design concept to consider is message queues, to share information between your main application and the pthread. select() is compatible with this technique, so you can use one mechanism for all your data sources (devices and message queues).
Here is a reference to some good pthread reading, and the best pthread book available: Programming with POSIX(R) Threads, ISBN-13: 978-0201633924.
It looks like you've not called pthread_join(), which cleans up state after non-detached threads finish. I'd speculate that you've hit some per-process resource limit as a result.
As others have noted, this is not a great design though. Why not reuse the thread rather than creating a new one on every loop?