I have a program that lets multiple threads insert entries into a hashtable and retrieve them. The hashtable itself is a very simple implementation: a struct defines each bucket entry, and a table (array) holds the buckets. I'm very new to concurrency and multithreading, but I understand that to keep data from being lost during insert and read operations, some kind of synchronization (such as mutex locking) is needed, so that one thread's operation on the data isn't preempted midway by another's.
In practice, though, I'm not really sure how to tell where a thread could be preempted during a read or write on the hashtable, or where exactly locks should be placed to avoid both lost data and deadlocks. As per this website, in the hashtable insert method I added a mutex lock before each key gets inserted into the table and unlock it at the end of the function. I do essentially the same thing in the function that reads data from the hash table. When I run the code, the keys seem to be inserted successfully at first, but the program hangs when the keys are supposed to be retrieved. Here is how I implemented the locking for each function:
// Inserts a key-value pair into the table
void insert(int key, int val) {
    pthread_mutex_lock(&lock);
    int i = key % NUM_BUCKETS;
    bucket_entry *e = (bucket_entry *) malloc(sizeof(bucket_entry));
    if (!e) panic("No memory to allocate bucket!");
    e->next = table[i];
    e->key = key;
    e->val = val;
    table[i] = e;
    pthread_mutex_unlock(&lock);
    pthread_exit(NULL);
}
// Retrieves an entry from the hash table by key
// Returns NULL if the key isn't found in the table
bucket_entry *retrieve(int key) {
    pthread_mutex_lock(&lock);
    bucket_entry *b;
    for (b = table[key % NUM_BUCKETS]; b != NULL; b = b->next) {
        if (b->key == key) return b;
    }
    pthread_mutex_unlock(&lock);
    pthread_exit(NULL);
    return NULL;
}
So the main problems here are:
How can I tell where data is being lost between thread operations?
What could cause the program to hang when the keys are being retrieved from the hashtable?
First, you should read more about pthreads; read also pthreads(7). Notice in particular that every locking call like pthread_mutex_lock should always be followed later by a call to pthread_mutex_unlock on the same mutex (and conventionally you should adopt the discipline that each lock and its unlock happen in the same block). Hence the return in the for loop of your retrieve is wrong: it leaves the function with the mutex still held, so the next thread that calls pthread_mutex_lock on it blocks forever, which is exactly the hang you are seeing. You should code:
bucket_entry *retrieve(int key) {
    bucket_entry *res = NULL;
    pthread_mutex_lock(&lock);
    for (bucket_entry *b = table[key % NUM_BUCKETS]; b != NULL; b = b->next) {
        if (b->key == key) {
            res = b;
            break;
        }
    }
    pthread_mutex_unlock(&lock);
    return res;
}
Then you could use valgrind and a recent GCC compiler (e.g. 5.2 in November 2015). Compile with all warnings and debug info (gcc -Wall -Wextra -g -pthread). Read about the sanitizer debugging options; in particular, consider using -fsanitize=thread.
There are few reasons to call pthread_exit (likewise, you rarely call exit in a program). When you do, the calling thread is terminated immediately, so nothing after it in insert or retrieve would ever run.
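Putting both fixes together, here is what a corrected insert could look like. This is a sketch assuming the same table, lock, and bucket_entry from the question; the question's panic helper is replaced with perror/exit so the snippet is self-contained:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_BUCKETS 16

typedef struct bucket_entry {
    int key;
    int val;
    struct bucket_entry *next;
} bucket_entry;

static bucket_entry *table[NUM_BUCKETS];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

// Corrected insert: lock and unlock are paired in the same block,
// and the function simply returns -- no pthread_exit.
void insert(int key, int val) {
    // Allocate and fill the entry outside the critical section,
    // keeping the locked region as short as possible.
    bucket_entry *e = malloc(sizeof(*e));
    if (!e) { perror("malloc"); exit(1); }   // stands in for panic()
    e->key = key;
    e->val = val;

    pthread_mutex_lock(&lock);
    e->next = table[key % NUM_BUCKETS];      // link in front of the bucket
    table[key % NUM_BUCKETS] = e;
    pthread_mutex_unlock(&lock);
}
```

Only the two lines that touch the shared table need the mutex; the malloc and field assignments operate on thread-private data.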
Related
I am trying to implement a mutex in C using the fetch-and-increment algorithm (sort of like the bakery algorithm). I have implemented the fetch-and-add part atomically: every thread obtains a ticket number and waits for its number to be "displayed". However, I have not found a good way to tackle the waiting itself. I have thought of using a queue to store your thread ID and deschedule/yield yourself until someone who has the lock wakes you up. However, I would need a lock for the queue as well! :(
Are there any recommendations on what I could do to make the queue insertion safe or perhaps a different approach to using a queue?
Here is some code of my initial implementation:
void mutex_lock( mutex_t *mp ) {
    while (compareAndSwap(&(mp->guard), 0, 1) == 1) {
        // This will loop for a short period of time, Need to change this <--
    }
    if ( mp->lock == 1 ) {
        queue_elem_t elem;
        elem.data.tid = gettid();
        enq( &(mp->queue), &(elem) );
        mp->guard = 0;
        deschedule();
    }
    else {
        mp->lock = 1;   // Lock the mutex
        mp->guard = 0;  // Allow others to enq themselves
    }
}
Also, let's for now ignore the potential race condition where someone can call make_runnable before you call deschedule; I can write another system call that says we are about to deschedule, so that make_runnable calls get queued.
I'm trying to pass a struct between threads in plain C using reference counting. I have pthreads and GCC atomics available. I can get it to work, but I'm looking for something bulletproof.
At first, I used a pthread mutex owned by the struct itself:
struct item {
    int ref;
    pthread_mutex_t mutex;
};

void ref(struct item *item) {
    pthread_mutex_lock(&item->mutex);
    item->ref++;
    pthread_mutex_unlock(&item->mutex);
}

void unref(struct item *item) {
    pthread_mutex_lock(&item->mutex);
    item->ref--;
    pthread_mutex_unlock(&item->mutex);
    if (item->ref <= 0)
        free(item);
}

struct item *alloc_item(void) {
    struct item *item = calloc(1, sizeof(*item));
    return item;
}
But then I realized the mutex shouldn't be owned by the item it protects:
static pthread_mutex_t mutex;

struct item {
    int ref;
};

void ref(struct item *item) {
    pthread_mutex_lock(&mutex);
    item->ref++;
    pthread_mutex_unlock(&mutex);
}

void unref(struct item *item) {
    pthread_mutex_lock(&mutex);
    item->ref--;
    if (item->ref <= 0)
        free(item);
    pthread_mutex_unlock(&mutex);
}

struct item *alloc_item(void) {
    struct item *item = calloc(1, sizeof(*item));
    return item;
}
Then I further realized that pointers are passed by value, so I now have:
static pthread_mutex_t mutex;

struct item {
    int ref;
};

void ref(struct item **item) {
    pthread_mutex_lock(&mutex);
    if (item != NULL) {
        if (*item != NULL) {
            (*item)->ref++;
        }
    }
    pthread_mutex_unlock(&mutex);
}

void unref(struct item **item) {
    pthread_mutex_lock(&mutex);
    if (item != NULL) {
        if (*item != NULL) {
            (*item)->ref--;
            if ((*item)->ref == 0) {
                free(*item);
                *item = NULL;
            }
        }
    }
    pthread_mutex_unlock(&mutex);
}

struct item *alloc_item(void) {
    struct item *item = calloc(1, sizeof(*item));
    if (item != NULL)
        item->ref = 1;
    return item;
}
Are there any logical missteps here? Thanks!
I don't know of a general purpose solution.
It would be nice to be able to reduce this down to an atomic add/subtract of the reference count. Indeed, most of the time that is all that is required... so stepping through a mutex or whatever hurts.
But the real problem is managing the reference count and the pointer to the item, at the same time.
When a thread comes to ref() an item, how does it find it? If the item doesn't already exist, presumably the thread must create it. If it does already exist, the thread must avoid some other thread freeing it before the reference count is incremented.
So... your void ref(struct item** item) works on the basis that the mutex protects the struct item** pointer... while you hold the mutex, no other thread can change the pointer -- so only one thread can create the item (and increment the count 0->1), and only one thread can destroy the item (after decrementing the count 1->0).
It is said that many problems in computer science can be solved by introducing a new level of indirection, and that is what is going on here. The problem is how all the threads obtain the address of the item -- given that it may (softly and suddenly) vanish away. Answer: invent a level of indirection.
BUT, now we are assuming that the pointer to the item cannot itself vanish. This is trivially achieved if the pointer to the item can be held in a process global (static storage duration). If the pointer to the item is (part of) an allocated-storage-duration object, then we must ensure that this higher-level object is somehow locked, so that the address of the pointer to the item is "stable" while it is in use. That is, the higher-level object won't move around in memory and won't be destroyed while we are using it!
So, the checks if (item == NULL) after locking the mutex are suspect. If the mutex also protects the pointer to the item, then that mutex needs to have been locked before establishing the address of the pointer to the item -- and in this case checking after the lock is too late. Or the address of the pointer to the item is protected in some other way (perhaps by another mutex) -- and in this case the check can be done before the lock (and moving it there makes clear what the mutex protects, and what it does not protect).
However, if the item is part of a larger data structure, and that structure is locked, you may (well) not need a lock to cover the pointer to the item at all. It depends... as I said, I'm not aware of a general solution.
I have some large, dynamic data structures (hash tables, queues, trees, etc.) which are shared by a number of threads. Mostly, threads look up and hold on to items for some time. When the system is busy, it is very busy, and the destruction of items can be deferred until things are quieter. So I use read/write locks on the large structures, atomic add/subtract for the reference counts, and a garbage collector to do the actual destruction of items. The point here is that the choice of mechanism for the (apparently simple and self contained) increment/decrement of the reference count, depends on how the creation and destruction of items is managed, and how threads come to be in possession of a pointer to an item (which is what the reference count counts, after all).
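For the narrow case where the caller already holds a valid reference (so the item cannot vanish underneath us), the atomic add/subtract mentioned above can be sketched with the GCC atomics the asker has available. This replaces the mutex inside ref/unref but, as discussed, does nothing to protect the pointer to the item itself:

```c
#include <stdlib.h>

struct item {
    int ref;
};

// Precondition: the caller already owns a reference. Otherwise the
// item could be freed between finding it and incrementing the count.
void ref(struct item *item) {
    __atomic_add_fetch(&item->ref, 1, __ATOMIC_RELAXED);
}

void unref(struct item *item) {
    // acq_rel ordering: the thread dropping the last reference must
    // observe all writes made by threads that released theirs earlier.
    if (__atomic_sub_fetch(&item->ref, 1, __ATOMIC_ACQ_REL) == 0)
        free(item);
}

struct item *alloc_item(void) {
    struct item *item = calloc(1, sizeof(*item));
    if (item != NULL)
        item->ref = 1;   // the creator holds the first reference
    return item;
}
```

Note the decrement-and-test is a single atomic operation, so two threads can never both see the count reach zero, which was the race in the first mutex version (where the test happened after unlocking).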
If you have a 128-bit atomic operation to hand, you can put a 64-bit address and a 64-bit reference count together and do something along the lines of:
ref:    bar = fetch_add(*foo, 1);
        ptr = bar >> 64;
        if (ptr == NULL) {
            if (bar & 0xF...F)
                ...create item etc.
            else
                ...wait for item
        }

unref:  bar = fetch_sub(*foo, 1);
        if ((bar & 0xF...F) == 0) {
            if (cmp_xchg(*foo, bar, (NULL << 64) | 0))
                ...free(bar >> 64);
        }
where foo points at the 128-bit combined ptr/ref-count (whose existence is protected by some external means) -- assuming a 64-bit ptr and a 64-bit count -- bar is a local variable of the same form, and ptr is a void*.
If finding the pointer NULL triggers the item creation, then the first thread to move the count from 0->1 knows who they are, and any threads that arrive before the item is created, and the pointer set, also know who they are and can wait. Setting the pointer requires a cmp_xchg(), and the creator then discovers how many threads are waiting for the same item.
This mechanism moves the reference count out of the item, and bundles it with the address of the item, which seems neat enough -- though you now need the address of the item when operating on that, and the address of the reference to the item when you are operating on its reference count.
This replaces the mutex in your ref and unref functions... but does NOT solve the problem of how the reference itself is protected.
I'm testing an idea for detailed error handling, and want each thread to be able to call a 'getlasterror' function when it needs to work with the error. I'm using a cheap and simple pointer-to-pointers scheme for the structs, but also make use of the pthread_t id to overwrite a previous entry (if the error info was not needed or has already been processed).
From the Stack Overflow posts How do you query a pthread to see if it is still running? and How do I determine if a pthread is alive?, it seems using pthread_kill to send a fake signal is potentially unsafe. Is there really no alternative mechanism to check whether a pthread with a given id exists? Or can I disable the ability for thread ids to be reused at runtime? (I'm aware the latter may be a security issue...)
I hadn't previously written any code, but I whipped up roughly what my plan would look like below in Leafpad (so ignore any syntax errors, if any!). The point of interest is naturally the dynamic cleanup; there's no problem if the application is closing. Any other alternative ideas would also be welcome :)
If applicable, this will be a client/server program, hence a new thread will exist with each accept().
struct error_info_structs
{
    struct error_info** errs;  // error_info struct with details
    pthread_t** tids;          // thread ids for each struct
    uint32_t num;              // number of error_info structs and thread ids
    pthread_mutex_t lock;      // runtime locker
};

struct error_info_structs g_errs;

// assume we've done necessary initialization...

struct error_info*
get_last_runtime_error()
{
    struct error_info* retval = NULL;
    pthread_t tid = pthread_self();

    pthread_mutex_lock(&g_errs.lock);

    for ( uint32_t i = 0; i < g_errs.num; i++ )
    {
        if ( pthread_equal(*g_errs.tids[i], tid) )
        {
            retval = g_errs.errs[i];
            goto release_lock;
        }
    }

release_lock:
    pthread_mutex_unlock(&g_errs.lock);
    return retval;
}

void
raise_runtime_error(struct error_info* ei)
{
    pthread_t tid = pthread_self();

    pthread_mutex_lock(&g_errs.lock);

    for ( uint32_t i = 0; i < g_errs.num; i++ )
    {
        if ( pthread_equal(*g_errs.tids[i], tid) )
        {
            // replace existing
            memcpy(g_errs.errs[i], ei, sizeof(struct error_info));
            goto release_lock;
        }

        /*
         * Dynamic cleanup to lower risk of resource exhaustion.
         * Do it here, where we actually allocate the memory, forcing
         * this to be processed at least whenever a new thread raises
         * an error.
         */
        if ( pthread_kill(*g_errs.tids[i], 0) != 0 )
        {
            // doesn't exist, free memory. safe to adjust counter.
            // (the arrays would also need compacting here; omitted)
            free(g_errs.errs[i]);
            free(g_errs.tids[i]);
            g_errs.num--;
        }
    }

    /*
     * first error reported by this thread id. allocate memory to hold
     * its details, eventually freed when the thread no longer exists.
     */
    struct error_info* newei = malloc(sizeof(struct error_info));
    if ( newei == NULL )
    {
        goto release_lock;
    }
    pthread_t* newt = malloc(sizeof(pthread_t));
    if ( newt == NULL )
    {
        free(newei);
        goto release_lock;
    }
    memcpy(newei, ei, sizeof(struct error_info));
    *newt = tid;

    // realloc-bits omitted
    g_errs.errs[g_errs.num] = newei;
    g_errs.tids[g_errs.num] = newt;
    g_errs.num++;

release_lock:
    pthread_mutex_unlock(&g_errs.lock);
}
... can I disable the ability for thread ids to be reused at runtime?
No, you can't. POSIX explicitly allows an implementation to reuse a thread id once the thread has terminated and been joined (or has terminated while detached).
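A common way to sidestep the liveness check entirely is thread-specific data: pthread_key_create takes a destructor that the library runs when each thread exits, so per-thread error slots free themselves and no pthread_kill probing or global table is needed. A sketch, where the struct error_info layout is invented for illustration:

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical error record; substitute your real struct error_info.
struct error_info { int code; char msg[128]; };

static pthread_key_t err_key;
static pthread_once_t err_once = PTHREAD_ONCE_INIT;

// Runs automatically in each thread that set a value, when it exits.
static void err_destructor(void *p) {
    free(p);
}

static void err_key_init(void) {
    pthread_key_create(&err_key, err_destructor);
}

void raise_runtime_error(const struct error_info *ei) {
    pthread_once(&err_once, err_key_init);
    struct error_info *slot = pthread_getspecific(err_key);
    if (slot == NULL) {
        slot = malloc(sizeof(*slot));
        if (slot == NULL) return;           // out of memory: drop the error
        pthread_setspecific(err_key, slot);
    }
    *slot = *ei;                            // overwrite this thread's last error
}

struct error_info *get_last_runtime_error(void) {
    pthread_once(&err_once, err_key_init);
    return pthread_getspecific(err_key);    // NULL if nothing raised yet
}
```

Each thread sees only its own slot, so no mutex is needed, and cleanup happens exactly when the thread dies rather than on a best-effort scan.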
I want to make every element in an array of structures thread-safe, by using a mutex lock for access to each element of the array.
This is my structure:
typedef struct {
    void *value;
    void *key;
    uint32_t value_length;
    uint32_t key_length;
    uint64_t access_count;
    void *next;
    pthread_mutex_t *mutex;
} lruc_item;
I have an array of this structure, and want to use mutex locks in order to make the structure elements thread-safe.
I tried locking one of the array elements in a function and then intentionally not unlocking it, just to ensure that my locks were working, but the strange thing was that there was no deadlock: the second function accessing the same array element was still able to access it.
Can someone please guide me on how to use mutexes to lock every element in a structure array (so as to make each element of the structure thread-safe)?
sample code to explain my point:
/** FUNCTION THAT CREATES ELEMENTS OF THE STRUCTURE **/
lruc_item *create_item(lruc *cache) {
    lruc_item *item = NULL;
    item = (lruc_item *) calloc(sizeof(lruc_item), 1);
    item->mutex = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t));
    if(pthread_mutex_init(item->mutex, NULL)) {
        perror("LRU Cache unable to initialise mutex for page");
        return NULL;
    }
    return item;
}
set()
{
    item = create_item(cache);
    pthread_mutex_lock(item->mutex);
    item->value = value;
    item->key = key;
    item->value_length = value_length;
    item->key_length = key_length;
    item->access_count = ++cache->access_count;
    pthread_mutex_unlock(item->mutex); /** (LINE P) tried commenting this out to check proper working of mutex (deadlock expected if the same "item" is accessed in another function) **/
}

get(lruc_item *item)
{
    pthread_mutex_lock(item->mutex); /** deadlock doesn't occur when "LINE P" is commented out **/
    *value = item->value;
    item->access_count = ++cache->access_count;
    pthread_mutex_unlock(item->mutex);
}
It's important to note that a mutex only locks out code from other threads. If you tried to execute WaitForMultipleObjects with the same mutex in the same thread, it wouldn't block. I'm assuming Windows, because you haven't said which platform you're on.
But if you provide more detail, maybe we can pinpoint where the issue really is.
Now, assuming again Windows, if you want to make accesses to the individual elements "thread-safe", you might want to consider the InterlockedExchange class of functions instead of a mutex. For example:
InterlockedExchange(&s.value_length, newValue);
or
InterlockedExchange64(&s.access_count, new64Value);
or
InterlockedExchangePointer(&s.value, newPointer);
If what you want is to make multiple accesses to the structure thread-safe as a single transaction, then a mutex can do that for you. A mutex also works across process boundaries; if you are only dealing within a single process, a critical section might be a better idea.
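On the pthreads side (which the question's code actually uses), the closest portable analogue of the Interlocked functions is GCC's atomic builtins (or C11 stdatomic). A sketch of single-field updates on the lruc_item fields, with hypothetical helper names of my own:

```c
#include <stdint.h>

// Atomically increment a counter field and return the new value,
// analogous to InterlockedIncrement64 on Windows.
uint64_t bump_access_count(uint64_t *counter) {
    return __atomic_add_fetch(counter, 1, __ATOMIC_SEQ_CST);
}

// Atomically store new_value into a field and return the old value,
// analogous to InterlockedExchange.
uint32_t swap_value_length(uint32_t *field, uint32_t new_value) {
    return __atomic_exchange_n(field, new_value, __ATOMIC_SEQ_CST);
}
```

These make each individual field access atomic; they do not make a multi-field update (value, key, lengths, count together) atomic as a unit, which is what the per-item mutex is for.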
I want to update the volume for each IP address, so that, for example, after every 5 s I add V(i) to the entry for each IP(i). The hash table works fine and stays updated every T seconds. The problem is that after a certain period I find the same IP address repeated twice, or even many times, in the hash table, so when I close the process the same IP shows up many times. It is as if something is wrong with the hash table itself.
Here is the code. The function update_hashTable() is the important one: it is called every X seconds. In fact I suspect a memory leak, because I always call malloc for the IP address.
But it keeps working... any idea???
int update_hashTable( ... ) {
    u_int32_t *a;
    ... // declarations
    struct pf_addr *as;
    as = ks->addr[0];
    a = (u_int32_t *) malloc(sizeof(u_int32_t));
    *a = ntohl(as->addr32[0]);
    sz = value; // no matter what it is... an int for example
    if ((ReturnValue = (u_int32_t) g_hash_table_lookup(hashtable, a))) {
        ReturnValue += sz;
        g_hash_table_insert(hashtable, (gpointer)a, (gpointer)ReturnValue);
    }
    else {
        g_hash_table_insert(hashtable, (gpointer)a, (gpointer)sz);
    }
}
Indeed, you appear to have a memory leak, but this isn't your problem. The problem is that the true-path of your if statement simply reinserts a second value associated with the same key, which is not what you want.
The typical pattern for this check-if-exists and increment algorithm is usually something like
gint *val = g_hash_table_lookup(hash_table, key);
if (val == NULL) {
    val = g_malloc0(...);
    g_hash_table_insert(hash_table, key, val);
}
*val = /* something */;
The important thing to take away from this is that once you have a pointer to the value associated with some key, you can simply modify it directly.
If this code will be executed by multiple threads in parallel, then the entire block should be protected by a mutex, perhaps with GMutex: http://developer.gnome.org/glib/2.28/glib-Threads.html
gcc provides atomic builtin intrinsics, say for atomically incrementing the value, see http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html