I've done my share of reading on condition variables, but I simply cannot comprehend how to use them.
I have a tree which, as of now, returns 0 when you try to insert a node that already exists, indicating failure.
I now want to add pthreads support: rather than returning 0 because the node already exists, the caller should be placed on a waiting queue, and once the requested node is deleted, the insert should proceed.
For example,
Suppose a tree has 3 nodes, with values 1, 5, and 10.
If I want to insert a new node with value 10, rather than returning 0 and reporting that the value already exists, it should wait for the node with value 10 to be deleted; once it is deleted, it should go ahead and insert.
This is my insert function's else block, which returns 0 after it has previously determined that a node with that value exists (you can be assured that the existence check works fine); I am now simply trying to add condition variable support so that it waits. The condition field is initialized on the first line of the insert, so that's done as well. I am hoping that when execution enters this block, the cond_wait is the only line of code that will run, and it will simply wait until the delete signals it. Is my approach right? If it is, how do I signal it from the delete? Please help me out here; I have spent hours reading and looking at examples trying to figure this out.
Code:
else
{
    // IF DUPLICATE EXISTS
    pthread_mutex_lock(&m);
    node->counter++;
    pthread_cond_wait(&node->condition, &m);
    _insert(string, strlen, ip4_address, node, NULL, NULL);
    pthread_mutex_unlock(&m);
    return 1; // I changed it from 0 to 1: if we were signalled and reached
              // this point, the insert was probably successful
}
Here are the assumptions:
struct tree
{
    ... // some other data (whatever)
    pthread_mutex_t mutex;
    pthread_cond_t cond;
};
Helper function:
int tree_contains_value(struct tree *t, int value)
{
    return ...; // returns 0 or 1
}
And here is an insertion:
void tree_insert(struct tree *t, int value)
{
    pthread_mutex_lock(&t->mutex);
    while (tree_contains_value(t, value))
    {
        pthread_cond_wait(&t->cond, &t->mutex);
    }
    ... // do actual insert
    pthread_mutex_unlock(&t->mutex);
}
And removal:
void tree_remove(struct tree *t, int val)
{
    pthread_mutex_lock(&t->mutex);
    ... // remove value
    pthread_cond_broadcast(&t->cond); // notify all waiting threads, if any
    pthread_mutex_unlock(&t->mutex);
}
A condition variable wait must be wrapped in a loop. The loop's guard tests a condition over the shared data protected by a mutex. It makes no sense to use a condition variable as you have it.
If it makes sense to wait for the node with value 10 to be deleted before inserting it, then it is done with logic like this:
lock(mutex)
while (tree.contains(key))
    wait(cond, mutex)
tree.insert(key, value)
unlock(mutex)
The other task does this:
lock(mutex)
tree.delete(key)
unlock(mutex)
broadcast(cond) // could be in the mutex, but better outside!
When C. A. R. Hoare invented monitors and condition variables, the original concept was a little different. Efficiency on multiple processors was not yet a concern, and so the following logic was supported:
enter(monitor);
if (tree.contains(key)) // no loop
    wait(cond, monitor)
tree.insert(key, value)
leave(monitor);
There was a guarantee that when the other task signals the condition, the waiting task will be atomically transferred back to the monitor without any other task being able to seize the monitor. So for instance when a task is in the monitor and deletes node 10, and signals the condition variable, the first task waiting on that condition variable is guaranteed to immediately get the monitor. It is not that way with POSIX mutexes and conditions (for good reasons).
Another difference between mutexes and monitors is that a thread does not have to hold the mutex to signal the condition variable. In fact, it is a good idea not to. Signaling a condition variable is potentially an expensive operation (a trip to the kernel). Mutexes should guard critical regions which are as short as possible (just a few instructions, ideally) to minimize contention.
Yet another difference is that POSIX conditions have a pthread_cond_broadcast function which wakes up all threads waiting on a condition. This is always the correct function to use by default. In situations in which it is obvious (or can be shown) that waking up a single thread is correct, the function pthread_cond_signal can be used to optimize the code.
Related
Because the built-in list provided by Contiki doesn't fit my needs (it uses too much memory), I have implemented my own list version that has been optimized for how I intend to use it.
At any one time there will be a single one of these lists that will be operated on (i.e. add/remove elements) by multiple processes/protothreads. However, additions/removals do not occur within a process/protothread block, but instead are called within a function that is called through a series of function calls initiated by a process/protothread.
For example,
void func2()
{
    // add or remove element from shared list
}
void func1()
{
    // some stuff
    func2();
    // more stuff
}
PROCESS_THREAD(temp1, ev, data)
{
    PROCESS_BEGIN();
    func1();
    PROCESS_END();
}
PROCESS_THREAD(temp2, ev, data)
{
    PROCESS_BEGIN();
    func1();
    PROCESS_END();
}
As a result, I can't use Contiki's built-in mechanisms for creating a mutex (via pt-sem.h), since they must be used within a process/protothread block. While I could take a lock within the actual process block (see below), this would block processes for much longer than necessary:
PROCESS_THREAD(temp2, ev, data)
{
    PROCESS_BEGIN();
    // get lock
    func1();
    // release lock
    PROCESS_END();
}
This is very problematic since adding and removing elements from my list is not atomic; if an interrupt occurs while an element is being removed or added, things will not behave properly. Is it possible to easily do what I want, namely atomically add and remove elements from a list within a function call, as opposed to within a process/protothread block?
You seem to be confused about what protothreads do. The protothread approach is a form of cooperative scheduling. A protothread can only yield at specific points, when asked to yield by the execution flow. An interrupt can never switch the execution flow to another protothread.
So, there are two distinct options:
1) If your list is accessed both from protothread and from interrupt handler contexts, you need to do all modifications in the list with interrupts disabled. So your lock/unlock code is disable/enable interrupts, respectively.
2) If your list is only accessed from protothreads, you don't need any locking. This is the recommended design.
Indeed, the main advantage of using protothreads is that in 99% of cases locks are not needed.
I have a problem that I can't solve.
I have to make a data structure shared by several threads. The problems are:
The threads execute simultaneously, and they insert data into a particular structure, but every object must be inserted in mutual exclusion, because if an object is already present it must not be re-inserted.
My idea was an array where each thread puts the key of the object it is working on; if another thread wants to use the same key, it has to wait for the current thread to finish.
So, in other words, every thread executes this function to lock an element:
void lock_element(key_t key){
    pthread_mutex_lock(&mtx_array);
    while(array_busy==1){
        pthread_cond_wait(&var_array,&mtx_array);
    }
    array_busy=1;
    if((search_insert((int)key))==-1){
        // the element is present in the array and I can't insert,
        // so I must wait for the array to be freed.
        // (I think the problem is here)
    }
    array_busy=0;
    pthread_cond_signal(&var_array);
    pthread_mutex_unlock(&mtx_array);
}
After I finish with the object, I free the key in the array with the following function:
void unlock_element(key_t key){
    pthread_mutex_lock(&mtx_array);
    while(array_busy==1){
        pthread_cond_wait(&var_array,&mtx_array);
    }
    array_busy=1;
    zeroed((int)key);
    array_busy=0;
    pthread_cond_signal(&var_array);
    pthread_mutex_unlock(&mtx_array);
}
In this way, the result changes on every execution (for example: one run inserts 300 objects, another run inserts 100).
Thanks for the help!
UPDATE:
@DavidSchwartz @Asthor I modified the code as follows:
void lock_element(key_t key){
    pthread_mutex_lock(&mtx_array);
    while((search_insert((int)key))==-1){
        // wait
        pthread_cond_wait(&var_array,&mtx_array);
    }
    pthread_mutex_unlock(&mtx_array);
}
and...
void unlock_element(key_t key){
    pthread_mutex_lock(&mtx_array);
    zeroed((int)key);
    pthread_cond_signal(&var_array);
    pthread_mutex_unlock(&mtx_array);
}
But it doesn't work; it behaves the same way as before.
I also noticed strange behavior in the function search_insert(key):
int search_insert(int key){
    int k=0;
    int found=0;
    int fre=-1;
    while(k<7 && found==0){
        if(array[k]==key){
            found=1;
        } else if(array[k]==-1) fre=k;
        k++;
    }
    if (found==1) {
        return -1; // we can't put the key in the array
    } else {
        if(fre==-1) exit(EXIT_FAILURE);
        array[fre]=key;
        return 0;
    }
}
It never goes into the if(found == 1) branch.
You have a couple of options.
The simplest option is just to hold the mutex during the entire operation. You should definitely choose this option unless you have strong evidence that you need greater concurrency.
Often, it's possible to just allow more than one thread to do the work. This pattern works like this:
1. Acquire the mutex.
2. Check if the object is in the collection. If so, use the object from the collection.
3. Otherwise, release the mutex.
4. Generate the object.
5. Acquire the mutex again.
6. Check if the object is in the collection. If not, add it and use the object you generated.
7. Otherwise, throw away the object you generated and use the one from the collection.
This may result in two threads doing the same work. That may be unacceptable in your use case either because it's impossible (some work can only be done once) or because the gain in concurrency isn't worth the cost of the duplicated work.
If nothing else works, you can go with the more complex solution:
1. Acquire the mutex.
2. Check if the object is in the collection. If so, use the object in the collection.
3. Check if any other thread is working on the object. If so, block on the condition variable and go to step 2.
4. Indicate that we are working on the object.
5. Release the mutex.
6. Generate the object.
7. Acquire the mutex.
8. Remove the indication that we are working on the object.
9. Add the object to the collection.
10. Broadcast the condition variable.
11. Release the mutex.
This can be implemented with a separate collection just to track which objects are in progress or you can add a special version of the object to the collection that contains a value that indicates that it's in progress.
The answer is based on assumptions as it is.
Consider this scenario. You have 2 threads trying to insert their objects. Thread 1 and thread 2 both get objects with index 0. We then present 2 possible scenarios.
A:
Thread 1 starts, grabs the mutex, and proceeds to insert its object. It finishes, letting the next thread, thread 2, through the mutex. Thread 1 tries to get the mutex again to release the index but is blocked because thread 2 holds it. Thread 2 tries to insert its object but fails because the index is taken, so the insert never happens. It releases the mutex, and thread 1 can grab it and release the index. However, thread 2 has already attempted and failed to insert its object, meaning we get only 1 insertion in total.
B:
Second scenario. Thread 1 starts, grabs the mutex, inserts the object, releases the mutex. Before thread 2 grabs it, thread 1 again grabs it, clearing the index and releases the mutex again. Thread 2 then successfully grabs the mutex, inserts the object it had before releasing the mutex. In this scenario we get 2 inserts.
In the end, the issue is that nothing happens inside the if statement when a thread fails to insert an object, so the thread does not do what it is meant to. That way you get fewer insertions than expected.
I'm trying to write a kernel module to detect a fork bomb, and to do this, I want to add a field int descendantCount to task_struct. This is my code so far:
struct task_struct *pTask;
for_each_process(pTask)
{
    struct task_struct *p;
    *p = *pTask;
    // trace back to every ancestor
    for(p = current; p != &init_task; p->parent)
    {
        // increment the descendant count of p's parent
        p->descendantCount = p->descendantCount + 1; // want to do something like this
    }
}
Basically, I'm trying to loop through every process, and for each process, go through all of its ancestors and increment each ancestor's descendantCount, which is the field that I want to add to task_struct.
I found this, and this, but I'm just still really confused on how I would go about doing this, as I'm new to kernel programming... Should I be going to include/linux/sched.h and adding a field there? Something like this:
struct task_struct {
    ......
    pid_t pid;
    ....
    int descendantCount;
};
Any help would be greatly appreciated, thank you!!
It is unclear what the actual idea is. Is this supposed to be executed on fork? Regardless, the idea is wrong, and the implementation is buggy no matter how pseudocode-like the pasted sample is meant to be.
First of all descendantCount is a name using camelCase, which makes it inconsistent with the rest of the code. A less bad name would be descendant_count.
Counter modification must use atomic operations to not lose writes or the entire thing needs to be using an exclusive lock.
The traversal uses ->parent which is subject to change with ptrace where it starts pointing to the tracer. The parent you want can be found in ->real_parent.
Except there is no RCU protection provided, thus processes can be freed as you traverse them making the loop a use-after-free.
With RCU or tasklist_lock the traversal will be safe, but nonsensical. Processes can be reparented to init when their original parent dies, rendering your hierarchical approach broken.
Processes would have to be grouped in some manner, but parent<->child relation is unsuitable for this purpose. Somewhat working examples of such grouping are cgroups and simple stuff like uids/gids.
All in all, this does not work.
Given that it looks like you are new not only to the kernel itself but also to C, I can only recommend focusing on userspace for the time being.
Alright, I honestly have tried looking up "Asynchronous Functions in C" (Results are for C# exclusively), but I get nothing for C. So I'm going to ask it here, but if there are better, already asked questions on StackExchange or what-have-you, please direct me to them.
So I'm teaching myself about concurrency and asynchronous functions and all that, so I'm attempting to create my own thread pool. So far, I'm still in the planning phase of it, and I'm trying to find a clear path to travel on, however I don't want a hand-out of code, I just want a nudge in the right direction (or else the exercise is pointless).
What would be the best way to asynchronously return from a function that isn't really "ready"? That is, it will return almost immediately, even if it is still processing the task given by the user. The "task" is going to be a callback and arguments to fit the necessary pthread_t arguments, although I'll work on attributes later. The function returns a struct called "Result", which contains the void * return value and a byte (unsigned char) called "ready" which holds 0 or 1. So while "Result" is not "ready", the user shouldn't attempt to process the item yet. Then again, the "item" can be NULL if the user returns NULL, but "ready" lets the user know it finished.
struct Result {
    /// Determines whether or not it has been processed.
    unsigned char ready;
    /// The return type, NULL until ready.
    void *item;
};
The struct isn't really complete, but it's a basic prototype embodying what I'm attempting to do. This isn't really the issue here though, although let me know if its the wrong approach.
Next I have to actually process the thing without blocking until everything is finished. As I said, the function will create the Result, then asynchronously process it and return immediately (by returning this result). The problem is the asynchronous processing. I was thinking of spawning another thread inside of the thread pool, but I feel that misses the point of a thread pool, as it is no longer simple.
Here's what I was thinking (which I have a feeling is grossly over-complicated). In the function add_task, spawn a new thread (Thread A) with a passed Sub_Process struct, then return the non-processed but initialized result. The spawned thread will also spawn another thread (see the problem? This is Thread B) with the original callback and arguments, then join with Thread B to capture its return value, which is then stored in the result's item member. Since the result points to the very same struct the user holds, it shouldn't be a problem.
My problem is that it spawns 2 threads instead of being able to do it in 1, so I'm wondering if I'm doing this wrong and complicating things. Is there a better way to do this? Does the pthreads library have a function which will do this asynchronously for me? Anyway, the prototype Sub_Process struct is below.
/// Makes it easier than having to retype everything.
typedef void *(*thread_callback)(void *args);
struct Sub_Process {
    /// Result to be processed.
    Result *result;
    /// Thread callback to be processed
    thread_callback cb;
    /// Arguments to be passed to the callback
    void *args;
};
Am I doing it wrong? I've a feeling I'm missing the whole point of a Thread_Pool. Another question is, is there a way to spawn a thread that is created, but waiting and not doing anything? I was thinking of handling this by creating all of the threads by having them just wait in a processing function until called, but I've a feeling this is the wrong way to go about this.
To further elaborate, I'll also post some pseudocode of what I'm attempting here
Notes: Was recommended I post this question here for an answer, so it's been copy and pasted, lemme know if there is any faulty editing.
Edit: No longer spawns another thread, instead calls callback directly, so the extra overhead of another thread shouldn't be a problem.
I presume your intention is that a thread will request the asynchronous work to be performed, then go on to perform some different work itself until the point where it requires the result of the asynchronous operation in order to proceed.
In this case, you need a way for the requesting thread to stop and wait for the Result to be ready. You can do this by embedding a mutex and condition variable pair inside the Result:
struct Result {
    /// Lock to protect contents of `Result`
    pthread_mutex_t lock;
    /// Condition variable to signal result being ready
    pthread_cond_t cond;
    /// Determines whether or not it has been processed.
    unsigned char ready;
    /// The return type, NULL until ready.
    void *item;
};
When the requesting thread reaches the point that it requires the asynchronous result, it uses the condition variable:
pthread_mutex_lock(&result->lock);
while (!result->ready)
    pthread_cond_wait(&result->cond, &result->lock);
pthread_mutex_unlock(&result->lock);
You can wrap this inside a function that waits for the result to be available, destroys the mutex and condition variable, frees the Result structure and returns the return value.
The corresponding code in the thread pool thread when the processing is finished would be:
pthread_mutex_lock(&result->lock);
result->item = item;
result->ready = 1;
pthread_cond_signal(&result->cond);
pthread_mutex_unlock(&result->lock);
"Another question is, is there a way to spawn a thread that is created, but waiting and not doing anything? I was thinking of handling this by creating all of the threads by having them just wait in a processing function until called, but I've a feeling this is the wrong way to go about this."
No, you're on the right track here. The mechanism to have the thread pool threads wait around for some work to be available is the same as the above - condition variables.
How would one go about creating a queue that can hold an array, specifically an array with a variable number of rows?
char data[n][2][50];
// Could be any non-zero n, e.g.:
n=1; data = {{"status","ok"}};
// or
n=3; {{"lat","180.00"},{"long","90.123"},{"status","ok"}};
// and so on
n would need to be added to the queue as well. Or is there an even better solution than what I'm asking for? A queue is easy enough to write (or find reusable examples of) for single data items, but I'm not sure what method I would use for the above. Maybe a struct? That would solve for the array and n... but would it solve for a variable array?
More broadly, the problem I'm trying to solve is this.
I need to communicate with a web server using POST. I have the code for this already written; however, I don't want to keep the main thread busy every time this task needs doing, especially since I need to make other checks, such as whether the connection is up. If it isn't, I need to back off and wait, or try to bring it back online.
My idea was to have a single separate thread dedicated to this task. I figured creating a queue would be the best way for the main thread to let the child thread know what to do.
The data will be a variable number of string pairs, like:
Main:
// Currently does
char data[MAX_MESSAGES_PER_POST][2][50];
...
assemble array
sendFunction(ptrToArray, n);
resume execution with large and unpredictable delay

// Hopefully will do
...
queue(what needs doing)
carry on executing with (almost) no delay
Child:
while(1)
{
    if(allOtherConditionsMet()) // Device online and so forth
    {
        if(!empty(myQueue))
        {
            // Do work and dequeue
        }
    }
    else
    {
        // Try to make conditions ok, e.g. reconnect the dongle.
    }
    // sleep/back off for a while
}
You could use an existing library, like GLib, which is cross-platform. If you used GLib's asynchronous queues, you'd do something like the following.
The first thread to create the queue executes:
GAsyncQueue *q = g_async_queue_new ();
Other threads can reference (show intent to use the queue) with:
g_async_queue_ref (q);
After this, any thread can 'push' items to the queue with:
struct queue_item *i = g_new (struct queue_item, 1);
g_async_queue_push (q, i);
(Allocate the item on the heap; pushing the address of a stack variable would leave a dangling pointer once the function returns.)
And any thread can 'pop' items from the queue with:
struct queue_item *d = g_async_queue_pop (q);
/* Blocks until item is available. */
Once a thread finishes using the queue and doesn't care any more about it, it calls:
g_async_queue_unref (q);
Even the thread which created the queue needs to do this.
There are a bunch of other useful functions, which you can all read about on the page documenting them. Synchronization (locking/consistency/atomicity of operations) is taken care of by the library itself.