Is it safe to read a locked variable in shared memory? - c

For example, I have a variable
struct sth {
    int abcd;
    pthread_mutex_t mutex;
};
and two methods:
void setSth(int a) {
    lock();
    sth->abcd = a;
    unlock();
}
int getSth() {
    return sth->abcd;
}
sth will never be freed. Is it safe not to use lock/unlock in getSth()? I don't care about accuracy.
By safe I mean that there is no segfault or similar crash.

Related

How can I fake mutex context in C/C++?

I've been reading through and attempting to apply Tyler Hoffman's C/C++ unit testing strategies.
He offers the following as a way to fake a mutex:
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_MUTEXES 256

typedef struct Mutex {
    uint8_t lock_count;
} Mutex;

static Mutex s_mutexes[NUM_MUTEXES];
static uint32_t s_mutex_index;

// Fake Helpers
void fake_mutex_init(void) {
    memset(s_mutexes, 0, sizeof(s_mutexes));
}

bool fake_mutex_all_unlocked(void) {
    for (int i = 0; i < NUM_MUTEXES; i++) {
        if (s_mutexes[i].lock_count > 0) {
            return false;
        }
    }
    return true;
}

// Implementation
Mutex *mutex_create(void) {
    assert(s_mutex_index < NUM_MUTEXES);
    return &s_mutexes[s_mutex_index++];
}

void mutex_lock(Mutex *mutex) {
    mutex->lock_count++;
}

void mutex_unlock(Mutex *mutex) {
    mutex->lock_count--;
}
For a module that has functions like:
#include "mutex/mutex.h"

static Mutex *s_mutex;

void kv_store_init(lfs_t *lfs) {
    ...
    s_mutex = mutex_create();
}

bool kv_store_write(const char *key, const void *val, uint32_t len) {
    mutex_lock(s_mutex); // New
    ...
    mutex_unlock(s_mutex); // New
    analytics_inc(kSettingsFileWrite);
    return (rv == len);
}
After being set up in a test like:
TEST_GROUP(TestKvStore) {
    void setup() {
        fake_mutex_init();
        ...
    }
    ...
}
I'm confused about a couple of things:
How does fake_mutex_init() cause methods using mutex_lock and mutex_unlock to use the fake lock and unlock?
How does this actually fake mutex locking? Can I produce deadlocks with these fakes? Or, should I just be checking the lock count in my tests?
How does fake_mutex_init() cause methods using mutex_lock and mutex_unlock to use the fake lock and unlock?
It doesn't. It's unrelated.
In the tutorial, the tests are linked with one of the implementations. In the case of this specific test, it is linked with this fake mutex implementation.
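For context, the module under test only sees a mutex/mutex.h interface along these lines (this header is not shown in the question, so this is an assumed sketch):

// mutex/mutex.h -- assumed sketch of the shared interface; both the real
// and the fake mutex implementations provide these three functions.
#pragma once

typedef struct Mutex Mutex;   // opaque handle; defined by the implementation

Mutex *mutex_create(void);
void mutex_lock(Mutex *mutex);
void mutex_unlock(Mutex *mutex);

The production binary links an implementation built on the real RTOS or pthread primitives; the test binary links the fake shown above instead, so kv_store_write() picks up the fake without any change to its source.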
How does this actually fake mutex locking?
It just increments some integers inside an array. There is no "locking" involved in any way.
Can I produce deadlocks with these fakes?
No. Because there is no locking, there is no waiting, so there are no deadlocks, which occur when two threads wait for each other.
Or, should I just be checking the lock count in my tests?
Not in the tests themselves; for the tests, a mutex is an abstract thing.
Adding assert(mutex->lock_count > 0) to mutex_unlock(), to check that the code under test does not unlock a mutex twice, seems like an obvious improvement.
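A sketch of that improvement to the fake (the assert is the one suggested above; the teardown check uses the helper already shown):

// Fake unlock with the suggested sanity check: a double unlock now fails
// immediately instead of silently wrapping the counter.
void mutex_unlock(Mutex *mutex) {
    assert(mutex->lock_count > 0);
    mutex->lock_count--;
}

A test's teardown can additionally assert fake_mutex_all_unlocked() to verify that the code under test released every lock it took.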

Mutex for getter method causes deadlock

Hi, I wanted to ask what the best solution is for the following problem (explained below).
I have the following memory library code (simplified):
// struct is opaque to the caller
struct memory {
    void *ptr;
    size_t size;
    pthread_mutex_t mutex;
};

size_t memory_size(memory *self)
{
    if (self == NULL) {
        return 0;
    }
    {
        size_t size = 0;
        if (pthread_mutex_lock(&self->mutex) == 0) {
            size = self->size;
            (void)pthread_mutex_unlock(&self->mutex);
        }
        return size;
    }
}

void *memory_beginAccess(memory *self)
{
    if (self == NULL) {
        return NULL;
    }
    if (pthread_mutex_lock(&self->mutex) == 0) {
        return self->ptr;
    }
    return NULL;
}

void memory_endAccess(memory *self)
{
    if (self == NULL) {
        return;
    }
    (void)pthread_mutex_unlock(&self->mutex);
}
The problem:
// ....
memory *target = memory_alloc(100);
// ....
{
    void *ptr = memory_beginAccess(target);
    // ^- implicit lock of internal mutex

    operationThatNeedsSize(ptr, memory_size(target));
    // ^- implicit lock of internal mutex causes a deadlock (with fastmutexes)

    memory_endAccess(target);
    // ^- implicit unlock of internal mutex (never reached)
}
So, I thought of three possible solutions:
1) Use a recursive mutex (but I have heard this is bad practice and should be avoided whenever possible).
2) Use different function names, or a flag parameter:
   memory_sizeLocked() / memory_size()
   or memory_size(TRUE) / memory_size(FALSE)
3) Check whether pthread_mutex_lock() returns EDEADLK and increment a deadlock counter (and decrement it on unlock). (Same as a recursive mutex?)
So is there another solution to this problem? Or is one of the three solutions above "good enough"?
Thanks for any help in advance
Use two versions of the same function, one that locks and one that doesn't. This way you have to modify the least amount of code. It is also logically correct, since you must know whether you are in a critical section of the code or not.
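A minimal sketch of that approach, reusing the struct from the question (memory_sizeLocked is the name the question itself proposed; the split is only an illustration):

/* For callers that already hold the lock, i.e. between
 * memory_beginAccess() and memory_endAccess(). */
size_t memory_sizeLocked(const memory *self)
{
    if (self == NULL) {
        return 0;
    }
    return self->size;
}

/* For callers that do not hold the lock: lock, read, unlock. */
size_t memory_size(memory *self)
{
    size_t size = 0;
    if (self == NULL) {
        return 0;
    }
    if (pthread_mutex_lock(&self->mutex) == 0) {
        size = memory_sizeLocked(self);
        (void)pthread_mutex_unlock(&self->mutex);
    }
    return size;
}

The problematic call site then becomes operationThatNeedsSize(ptr, memory_sizeLocked(target)), which is safe because memory_beginAccess() already holds the mutex.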

thread local storage of a function

I am working on a program that requires a queue operation to be performed in a multi-threaded environment.
I am not sure how thread-local storage applies to a function, as opposed to just a global variable.
I tried:
__thread int head, tail;
__thread int q[MAX_NODES+2];

__thread void enqueue (int x) {
    q[tail] = x;
    tail++;
    color[x] = GRAY;
}

__thread int dequeue () {
    int x = q[head];
    head++;
    color[x] = BLACK;
    return x;
}
I got the following errors:
fordp.c:71: error: function definition declared '__thread'
fordp.c:77: error: function definition declared '__thread'
I read somewhere that a function is already thread-safe unless it uses shared variables, so I tried:
__thread int head, tail;
__thread int q[MAX_NODES+2];

void enqueue (int x) {
    q[tail] = x;
    tail++;
    color[x] = GRAY;
}

int dequeue () {
    int x = q[head];
    head++;
    color[x] = BLACK;
    return x;
}
It did compile with no errors, but my execution results were wrong, hinting that the queue did not work well on a multi-threaded platform.
Can someone please explain what is going on here?
Any help is appreciated.
__thread advises the compiler to create an instance of the variable for every thread.
I doubt that's what you want for the queue and its head and tail: the threads should operate on them concurrently, but with __thread, modifications done by one thread would not be visible to any other thread.
So do not use __thread here; instead, protect the concurrent access to the global variables, for example using one or more mutexes, as sketched below the link.
For your reference: http://en.wikipedia.org/wiki/Thread-local_storage
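A minimal sketch of that suggestion, keeping the queue global and shared but serializing access with a single pthread mutex (q_mutex is an illustrative name; the color bookkeeping is omitted here):

#include <pthread.h>

int head, tail;                      /* shared by all threads: no __thread */
int q[MAX_NODES + 2];
static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;

void enqueue(int x) {
    pthread_mutex_lock(&q_mutex);
    q[tail] = x;
    tail++;
    pthread_mutex_unlock(&q_mutex);
}

int dequeue(void) {
    pthread_mutex_lock(&q_mutex);
    int x = q[head];
    head++;
    pthread_mutex_unlock(&q_mutex);
    return x;
}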
I think you're tackling the problem in the wrong way.
Your problem is that you want to associate a Queue object with a function call (e.g. enqueue).
In C these objects are usually referred to as contexts.
What you did is a variation of a global variable. Per-thread local storage is good for scratch space or genuinely per-thread resources, and this is not that case.
The only option that gives both thread safety and correctness is to pass the context to the function call.
I removed the reference to color to simplify things.
struct queue {
    unsigned head, tail;
    int q[MAX_NODES+2];
};

void enqueue (struct queue* q, int x) {
    q->q[q->tail++] = x;
}

int dequeue (struct queue* q) {
    int x = q->q[q->head++];
    return x;
}
Note: you should perform checks on pointers and indexes.
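For instance, checked variants might look like this (the error-return convention here is an illustrative choice, not part of the answer):

/* Returns 0 on success, -1 if the queue is missing or full. */
int enqueue_checked (struct queue* q, int x) {
    if (q == NULL || q->tail >= MAX_NODES + 2) {
        return -1;
    }
    q->q[q->tail++] = x;
    return 0;
}

/* Returns 0 on success, -1 if the queue is missing or empty. */
int dequeue_checked (struct queue* q, int* out) {
    if (q == NULL || out == NULL || q->head >= q->tail) {
        return -1;
    }
    *out = q->q[q->head++];
    return 0;
}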

Is "assert I'm holding this mutex locked" Feasible?

In The Tools We Work With, the author of the Varnish software expressed his disappointment with the new ISO C standard draft. In particular, he thinks there should be something useful like an "assert I'm holding this mutex locked" function, and he claims he wrote one in Varnish.
I checked the code. It is essentially like this:
struct ilck {
    unsigned magic;
    pthread_mutex_t mtx;
    int held;
    pthread_t owner;
    VTAILQ_ENTRY(ilck) list;
    const char *w;
    struct VSC_C_lck *stat;
};

void Lck__Lock(struct ilck *ilck, const char *p, const char *f, int l)
{
    int r;

    if (!(params->diag_bitmap & 0x18)) {
        AZ(pthread_mutex_lock(&ilck->mtx));
        AZ(ilck->held);
        ilck->stat->locks++;
        ilck->owner = pthread_self();
        ilck->held = 1;
        return;
    }
    r = pthread_mutex_trylock(&ilck->mtx);
    assert(r == 0 || r == EBUSY);
    if (r) {
        ilck->stat->colls++;
        if (params->diag_bitmap & 0x8)
            VSL(SLT_Debug, 0, "MTX_CONTEST(%s,%s,%d,%s)",
                p, f, l, ilck->w);
        AZ(pthread_mutex_lock(&ilck->mtx));
    } else if (params->diag_bitmap & 0x8) {
        VSL(SLT_Debug, 0, "MTX_LOCK(%s,%s,%d,%s)", p, f, l, ilck->w);
    }
    ilck->stat->locks++;
    ilck->owner = pthread_self();
    ilck->held = 1;
}

void Lck__Assert(const struct ilck *ilck, int held)
{
    if (held)
        assert(ilck->held &&
            pthread_equal(ilck->owner, pthread_self()));
    else
        assert(!ilck->held ||
            !pthread_equal(ilck->owner, pthread_self()));
}
I omit the implementations of the try-lock and unlock operations since they are basically routine. The place where I have a question is Lck__Assert(), in which the access to ilck->held and ilck->owner is not protected by any mutex.
So consider the following sequence of events:
1) Thread A locks a mutex.
2) Thread A unlocks it.
3) Thread B locks the same mutex. In the course of locking (within Lck__Lock()), thread B is preempted after it updates ilck->held but before it updates ilck->owner. That should be possible because of the optimizer and CPU out-of-order execution.
4) Thread A runs and invokes Lck__Assert(); the assertion will hold even though thread A does not in fact hold the mutex.
In my opinion there should be some "global" mutex to protect the mutex's own data, or at least some write/read barrier. Is my analysis correct?
I have contacted the author. He says my analysis is valid, and he suggests using a memset(..., 0, ...) to clear the pthread_t owner on unlock, since pthread_t is not a transparent struct with a specified assignment operator. Hope that works on most platforms.
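The unlock path was omitted above, so this is only an assumed sketch of what that fix might look like, not the actual Varnish code:

void Lck__Unlock(struct ilck *ilck)
{
    assert(ilck->held);
    ilck->held = 0;
    /* pthread_t is opaque, so there is no portable "no owner" value to
     * assign; clearing the bytes is the workaround the author suggested. */
    memset(&ilck->owner, 0, sizeof ilck->owner);
    AZ(pthread_mutex_unlock(&ilck->mtx));
}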

Using Windows slim read/write lock

/*language C code*/
#include "windows.h"

typedef struct object_s
{
    SRWLOCK lock;
    int data;
} object_t, *object_p; /* own and pointer type */

void thread(object_p x)
{
    AcquireSRWLockExclusive(&x->lock);
    // ...do something that could probably change x->data value to 0
    if (x->data == 0)
        free(x);
    else
        ReleaseSRWLockExclusive(&x->lock);
}

void main()
{
    int i;
    object_p object = (object_p)malloc(sizeof(object_t));
    InitializeSRWLock(&object->lock);
    for (i = 0; i < 3; i++)
        CreateThread(0, 0, thread, object, 0);
}
As you can see in the code above, what I have to accomplish is to let one thread conditionally free the object on which the other two may block. The code above is obviously flawed, because if the object is freed along with the lock embedded in it, the threads still blocked on that lock end up using destroyed memory.
A possible solution is below:
/*language C code*/
#include "windows.h"

typedef struct object_s
{
    /* change: lock moved to the stack in main() */
    int data;
} object_t, *object_p; /* own and pointer type */

void thread(void * x)
{
    struct {
        PSRWLOCK l;
        object_p o;
    } * _x = x;

    AcquireSRWLockExclusive(_x->l);
    // ...do something that could probably change x->data value to 0
    if (_x->o->data == 0)
        free(_x->o);
    ReleaseSRWLockExclusive(_x->l);
}

void main()
{
    int i;
    SRWLOCK lock; /* lock over here */
    object_p object = (object_p)malloc(sizeof(object_t));
    InitializeSRWLock(&lock);

    /* pack for thread context */
    struct
    {
        PSRWLOCK l;
        object_p o;
    } context = { &lock, object };

    for (i = 0; i < 3; i++)
        CreateThread(0, 0, thread, &context, 0);
}
This works in this case, but it is not applicable in my final project, because there is actually a dynamic linked list of objects. Applying this solution would mean keeping a corresponding list of locks, one lock per object; moreover, when an object is freed, its lock must be freed at the same time. That is nothing new compared with the first code section.
Now I wonder if there is an alternative solution to this. Thank you very much!
The solution is to not allocate the lock together with the data. I would suggest that you move the data out of that struct and replace it with a pointer to the data. Your linked list can then free the data first, and then the node, without any problems. Here's some pseudo code:
typedef struct
{
    lock_t lock;
    int* data_ptr;
} something_t;

void init_something (something_t* thing, ...)
{
    thing->lock = init_lock();
    thing->data_ptr = malloc(...); // whatever the data is supposed to be
}

void free_something (something_t* thing)
{
    lock(thing->lock);
    free(thing->data_ptr);
    thing->data_ptr = NULL;
    unlock(thing->lock);
}

...

void linked_list_delete_node (...)
{
    free_something(node_to_delete->thing);
    free(node_to_delete);
}

...

void thread (void* x)
{
    something_t* thing = x;
    lock(thing->lock);
    // ...do something that could probably change *thing->data_ptr to 0
    if (thing->data_ptr != NULL && *thing->data_ptr == 0)
    {
        // Free only the data while holding the lock; the lock itself survives.
        free(thing->data_ptr);
        thing->data_ptr = NULL;
    }
    unlock(thing->lock);
}
As a sidenote, main in a hosted C program can never return void; it must return int, so void main() is not strictly conforming C.
Also, CreateThread() expects a pointer to a function returning a 32-bit value (DWORD) and taking a void pointer (LPVOID) as its parameter. You pass a different kind of function pointer, and calling a function through an incompatible function pointer type is undefined behavior; nor am I sure what sort of madness Windows will execute if it gets a different function pointer than it expects. This can cause your program to crash or behave in unexpected or random ways.
You need to change your thread function to DWORD WINAPI thread (LPVOID param);
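A sketch of the corrected entry point and call, based on the question's first example (handle cleanup and the conditional free are left out to focus on the signature):

#include <windows.h>

/* Thread entry point with the signature CreateThread() expects:
 * DWORD return, WINAPI calling convention, LPVOID parameter. */
DWORD WINAPI thread(LPVOID param)
{
    object_p x = (object_p)param;
    AcquireSRWLockExclusive(&x->lock);
    /* ...do something that could change x->data... */
    ReleaseSRWLockExclusive(&x->lock);
    return 0;
}

int main(void)
{
    int i;
    object_p object = (object_p)malloc(sizeof(object_t));
    InitializeSRWLock(&object->lock);
    for (i = 0; i < 3; i++)
        CreateThread(NULL, 0, thread, object, 0, NULL);
    /* In real code, keep the returned handles and wait on them
     * (e.g. WaitForMultipleObjects) before returning. */
    return 0;
}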
