Mutual exclusion within a function in Contiki (C)

Because the built-in list provided by Contiki doesn't fit my needs (it uses too much memory), I have implemented my own list, optimized for how I intend to use it.
At any one time, a single one of these lists will be operated on (i.e., have elements added or removed) by multiple processes/protothreads. However, additions and removals do not occur directly within a process/protothread block; they are made inside a function reached through a series of function calls initiated by a process/protothread.
For example,
void func2()
{
    // add or remove an element from the shared list
}
void func1()
{
    // some stuff
    func2();
    // more stuff
}
PROCESS_THREAD(temp1, ev, data)
{
    PROCESS_BEGIN();
    func1();
    PROCESS_END();
}
PROCESS_THREAD(temp2, ev, data)
{
    PROCESS_BEGIN();
    func1();
    PROCESS_END();
}
As a result, I can't use Contiki's built-in mechanisms to create a mutex (via pt-sem.h), since they must appear within a process/protothread block. While I could take a lock within the actual process block (see below), that would block other processes for much longer than necessary:
PROCESS_THREAD(temp2, ev, data)
{
    PROCESS_BEGIN();
    // get lock
    func1();
    // release lock
    PROCESS_END();
}
This is very problematic, since adding and removing elements from my list is not atomic; if an interrupt occurs while an element is being added or removed, things will not behave properly. Is there an easy way to do what I want, namely to atomically add and remove elements from a list within a function call, as opposed to within a process/protothread block?

You seem to be confused about what protothreads do. The protothread approach is a form of cooperative scheduling: a protothread can only yield at specific points, when the execution flow explicitly asks it to. An interrupt can never switch the execution flow to another protothread.
So, there are two distinct options:
1) If your list is accessed both from protothread and from interrupt handler contexts, you need to do all modifications of the list with interrupts disabled. So your lock/unlock code is disable/enable interrupts, respectively (a sketch follows below).
2) If your list is only accessed from protothreads, you don't need any locking. This is the recommended design.
Indeed, the main advantage of using protothreads is that in 99% of cases locks are not needed.
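For option 1), here is a minimal sketch of what the interrupt-masking lock could look like. The interrupt-control primitives are platform specific (Contiki's MSP430 ports use the dint()/eint() intrinsics; other ports have their own equivalents), and the list type and operation below are placeholders for your custom list, not Contiki APIs:
void list_add_protected(my_list_t *list, my_item_t *item)
{
    dint();                  /* disable interrupts: critical section begins */
    my_list_add(list, item); /* your custom, non-atomic list operation */
    eint();                  /* re-enable interrupts: critical section ends */
}
Because this can be called from any depth of the call chain, it does not need to appear inside a process/protothread block.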

Related

When does C code get interrupted in event-driven programming?

I'm new to C and event-driven programming.
We are using libevent for development.
How does an interrupt work, and when does it happen?
Will it interrupt in the middle of a function, or does it always interrupt at the end of a function?
for example,
extern int arr[100];

void some_func1() {
    int flag;
    // do something to change flag
    if (flag == 0) {
        update1(arr);
    } else if (flag == 1) {
        update2(arr);
    }
}

void some_func2() {
    // print something based on arr
}
some_func1 will be called when event1 happens, and some_func2 will be called when event2 happens.
Case 1:
event1 occurs, some_func1 is called and finishes, so arr is updated correctly; then event2 occurs, and the print is OK.
Case 2:
event1 occurs and some_func1 is called; in the middle of it, another event1 arrives, and arr gets messed up.
Will case 2 happen? Is it possible to make some_func1 an atomic function?
From the doc:
Dispatching events.
Finally, you call event_base_dispatch() to loop and dispatch events. You can also use event_base_loop() for more fine-grained control.
Currently, only one thread can be dispatching a given event_base at a time. If you want to run events in multiple threads at once, you can either have a single event_base whose events add work to a work queue, or you can create multiple event_base objects.
So, if you've got one thread and one event_base then event_base_dispatch()/event_base_loop() in this thread call handler functions one by one.
If you've got two threads and two event_base (one in each thread) then they work independently. The first event_base handles its events one by one in the first thread; the second event_base handles its events one by one in the second thread.
(I haven't used libevent, but that's generally how event loops work.)
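In other words, for the single-threaded case in the question, case 2 cannot happen: each callback runs to completion before the next one is dispatched. A minimal sketch of that setup (event creation details elided; on_event1 is an assumed wrapper name, not libevent API):
#include <event2/event.h>

static void on_event1(evutil_socket_t fd, short what, void *arg)
{
    some_func1();   /* runs to completion; cannot be re-entered by another callback */
}

int main(void)
{
    struct event_base *base = event_base_new();
    /* ... create events bound to on_event1/on_event2 and add them ... */
    event_base_dispatch(base);  /* loops, dispatching handlers one at a time */
    event_base_free(base);
    return 0;
}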

NSRecursiveLock vs NSNotification performance

So I was using an array to hold objects, then iterating over them to call a particular method. As the objects in this array could be enumerated/mutated on different threads, I was using an NSRecursiveLock.
I then realised that as this method is always called on every object in this array, I could just use an NSNotification to trigger it, and do away with the array and lock.
I am just double checking that this is a good idea: NSNotification will be faster than a lock, right?
Thanks
First, it seems that the locking should be done on the objects, not on the array. You are not mutating the array, but the objects, so you need one mutex per object. That gives you finer granularity and allows concurrent updates to different objects to proceed in parallel (they wouldn't with a global lock); a sketch of this idea appears below.
Second, a recursive lock is complete overkill. If you wish to have mutual exclusion on the objects, each object should have a standard mutex. If what you are doing inside the critical section is CPU-bound and really short, you might consider using a spinlock (it won't even trap into the OS; only use it for short, CPU-bound critical sections though). Recursive mutexes are meant for cases where a thread can (because of its logic) acquire another lock on an object it has already locked itself (hence the name).
Third, an NSNotification (https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSNotification_Class/) will allocate memory, drop the notification into a notification center, take locks to implement that (the adding to the center), and finally dispatch the notifications in the center and deallocate them. So it is heavier than plain simple locking. It "hides" the synchronization APIs inside the center, but it does not eliminate them, and you pay the price of the memory allocations/deallocations.
If you wish to modify the array (add/remove), then you should also synchronize on these operations, but that would be unfortunate as access to independent entries of the array would now collide.
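The per-object locking idea from the first point, sketched in plain C with pthreads (the question is Cocoa, but the granularity argument is language-independent; the struct and field names here are illustrative):
#include <pthread.h>

struct object {
    pthread_mutex_t lock;   /* one mutex per object, not one global lock */
    int state;              /* whatever the iterated method mutates */
};

void object_update(struct object *o)
{
    pthread_mutex_lock(&o->lock);
    o->state++;             /* short critical section, scoped to this object only */
    pthread_mutex_unlock(&o->lock);
}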
Hope that helps.

Advantages of a separate thread in a C program

I have a capture program which, in addition to capturing data and writing it to a file, also prints some statistics. The function that prints the statistics
static void report(void)
{
    /* print statistics */
}
is called roughly every second, using an alarm that expires every second. So the program looks like:
void capture_program()
{
    while (1)
    {
        /* main capture loop */
        if (doreport)
            report();
    }
}
The expiry of the timer sets the doreport flag. If this flag is set, report() is called, which clears the flag.
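For reference, a minimal sketch of the flag mechanism just described, assuming a SIGALRM-based alarm (the handler name is illustrative):
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t doreport;

static void alarm_handler(int sig)
{
    (void)sig;
    doreport = 1;   /* polled by the capture loop above */
    alarm(1);       /* re-arm the alarm for the next second */
}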
Now my questions are:
1) What are the advantages of moving the reporting function to a separate thread? My main motivation is that the reporting function executes some code under a lock; if another process is holding that lock, the call blocks, causing the capture process to drop packets, so I think it might be better to move the reporting to a thread.
2) If I were to implement the reporting in a separate thread, would I still have to use timers inside the thread to report every second? Or is there a better way, by making the thread wake up at one-second intervals?
If your reporting function is trivial (for example, you just need to print something), I don't think a separate thread will help a lot.
If I were to implementing the reporting in a separate thread, should I still have to use timers inside the thread to do reporting every second?
You don't need timers; you can just sleep for a second on each iteration, like this:
static void *report_thread(void *arg)
{
    while (1) {
        /* print statistics */
        sleep(1);
    }
    return NULL;
}
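A sketch of starting that thread from the capture program (assuming POSIX threads; error handling elided):
#include <pthread.h>

/* in capture_program(), before entering the main loop: */
pthread_t reporter;
pthread_create(&reporter, NULL, report_thread, NULL);
The capture loop then never blocks on the reporting lock; only the reporter thread does.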

POSIX threading in C: condition variables

I've done my share of reading on condition variables, and I am simply stuck, unable to comprehend how to use them.
I have a tree which, as of now, returns 0 when you try to insert a node that already exists, implying failure because the value is already present.
I now want to add pthreads support: rather than noting that the insertion cannot be done because the node already exists and returning 0, I want the caller to wait, and once the requested node is deleted, go ahead and insert.
For example,
Suppose a tree has 3 nodes, with values 1, 5, and 10.
If I want to insert a new node with value 10, rather than returning 0 and throwing an error that the value already exists, it should wait for the node with value 10 to be deleted; once it is deleted, it should go ahead and insert.
Below is the else block of my insert function, which returns 0 after having previously determined that a node with that value exists (you can be assured that the existence check works fine); I am simply trying to add condition variable support so that it waits. The condition field is initialized on the first line of the insert, so that's done as well. I am hoping that when execution enters this block, the cond_wait is the only line of code that executes, and that it will simply wait until the delete signals it. Is my approach here right? If it is, how do I signal it in the delete? Please help me out here; I have spent hours reading and looking at examples trying to figure this out.
Code:
else
{
    // a duplicate exists
    pthread_mutex_lock(&m);
    node->counter++;
    pthread_cond_wait(&node->condition, &m);
    _insert(string, strlen, ip4_address, node, NULL, NULL);
    pthread_mutex_unlock(&m);
    return 1; // changed from 0 to 1: if we were signalled and reached
              // this point, the insert was probably successful
}
Here are the assumptions:
struct tree
{
    ...                    // some other data (whatever)
    pthread_mutex_t mutex;
    pthread_cond_t cond;
};
Helper function:
int tree_contains_value(struct tree *t, int value)
{
    return ...; // returns 0 or 1
}
And here is an insertion:
void tree_insert(struct tree *t, int value)
{
    pthread_mutex_lock(&t->mutex);
    while (tree_contains_value(t, value))
    {
        pthread_cond_wait(&t->cond, &t->mutex);
    }
    ... // do the actual insert
    pthread_mutex_unlock(&t->mutex);
}
And removal:
void tree_remove(struct tree *t, int value)
{
    pthread_mutex_lock(&t->mutex);
    ... // remove the value
    pthread_cond_broadcast(&t->cond); // notify all waiting threads, if any
    pthread_mutex_unlock(&t->mutex);
}
A condition variable wait must be wrapped in a loop. The loop's guard tests a condition over the shared data protected by a mutex. It makes no sense to use a condition variable as you have it.
If it makes sense to wait for the node with value 10 to be deleted before inserting it, then it is done with logic like this:
lock(mutex)
while (tree.contains(key))
    wait(cond, mutex)
tree.insert(key, value)
unlock(mutex)
The other task does this:
lock(mutex)
tree.delete(key)
unlock(mutex)
broadcast(cond) // could be in the mutex, but better outside!
When C. A. R. Hoare invented monitors and condition variables, the original concept was a little different. Efficiency on multiple processors wasn't a concern yet, and so the following logic was supported:
enter(monitor);
if (tree.contains(key)) // no loop
    wait(cond, monitor)
tree.insert(key, value)
leave(monitor);
There was a guarantee that when the other task signals the condition, the waiting task will be atomically transferred back to the monitor without any other task being able to seize the monitor. So for instance when a task is in the monitor and deletes node 10, and signals the condition variable, the first task waiting on that condition variable is guaranteed to immediately get the monitor. It is not that way with POSIX mutexes and conditions (for good reasons).
Another difference between mutexes and monitors is a thread does not have to hold the mutex to signal the condition variable. In fact, it is a good idea not to. Signaling a condition variable is potentially an expensive operation (trip to the kernel). Mutexes should guard critical regions which are as short as possible (just a few instructions, ideally) to minimize contention.
Yet another difference is that POSIX condition variables have a pthread_cond_broadcast function, which wakes up all threads waiting on a condition. This is the correct function to use by default. In situations where it is obvious (or can be shown) that waking up a single thread is correct, pthread_cond_signal can be used to optimize the code.
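One practical detail the snippets above leave out is initialization. A minimal sketch, using static initializers for the fields of struct tree as defined earlier:
#include <pthread.h>

struct tree t = {
    .mutex = PTHREAD_MUTEX_INITIALIZER,
    .cond  = PTHREAD_COND_INITIALIZER,
    /* pthread_mutex_init()/pthread_cond_init() are the dynamic
       alternatives if the tree is allocated at run time */
};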

Queue of a variable-length array or struct

How would one go about creating a queue that can hold an array, moreover an array with a variable number of rows?
char data[n][2][50];
// could be any non-zero n, e.g.:
n = 1; // data = {{"status", "ok"}};
// or
n = 3; // data = {{"lat", "180.00"}, {"long", "90.123"}, {"status", "ok"}};
// and so on
n also needs to be added to the queue. Or is there even a better solution than what I'm asking for? A queue is easy enough to write (or find reusable examples of) for single data items, but I'm not sure what method I would use for the above. Maybe a struct? That would solve for the array and n... but would it solve for the variable-length array?
More broadly, the problem I'm trying to solve is this.
I need to communicate with a web server using POST. I have the code for this already written; however, I don't want to keep the main thread busy every time this task needs doing, especially since I need to make other checks, such as whether the connection is up, and if it isn't, back off and wait or try to bring it back online.
My idea was to have a single separate thread dedicated to this task. I figured a queue would be the best way for the main thread to let the child thread know what to do.
The data will be a variable number of string pairs, like:
Main:
// currently does:
char data[MAX_MESSAGES_PER_POST][2][50];
...
// assemble the array
sendFunction(ptrToArray, n);
// resume execution after a large and unpredictable delay

// hopefully will do:
...
// queue(what needs doing)
// carry on executing with (almost) no delay
Child:
while (1)
{
    if (allOtherConditionsMet()) // device online and so forth
    {
        if (!empty(myQueue))
        {
            // do work and dequeue
        }
    }
    else
    {
        // try to make conditions OK, e.g. reconnect the dongle
    }
    // sleep/back off for a while
}
You could use an existing library like GLib, which is cross-platform. If you used GLib's asynchronous queues, you'd do something like this:
The first thread to create the queue executes:
GAsyncQueue *q = g_async_queue_new ();
Other threads can take a reference (signalling intent to use the queue) with:
g_async_queue_ref (q);
After this, any thread can 'push' items to the queue with:
struct queue_item *i = g_new (struct queue_item, 1);
g_async_queue_push (q, i); /* the item must remain valid until popped,
                              so allocate it on the heap rather than
                              pushing the address of a stack local */
And any thread can 'pop' items from the queue with:
struct queue_item *d = g_async_queue_pop (q);
/* Blocks until item is available. */
Once a thread finishes using the queue and doesn't care any more about it, it calls:
g_async_queue_unref (q);
Even the thread which created the queue needs to do this.
There are a bunch of other useful functions, which you can all read about on the page documenting them. Synchronization (locking/consistency/atomicity of operations) is taken care of by the library itself.
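To carry the variable number of string pairs from the question, one option (a sketch, not part of the original answer) is to give queue_item a C99 flexible array member sized at allocation time:
#include <glib.h>

struct queue_item {
    gsize n;              /* number of string pairs */
    char pairs[][2][50];  /* flexible array member: n rows of pairs */
};

/* allocate an item with room for n pairs */
static struct queue_item *queue_item_new(gsize n)
{
    struct queue_item *i = g_malloc(sizeof *i + n * sizeof i->pairs[0]);
    i->n = n;
    return i;
}
An item built this way can be filled in, pushed with g_async_queue_push(), and freed with g_free() after the worker pops and processes it.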
