I've been reading through and attempting to apply Tyler Hoffman's C/C++ unit testing strategies.
He offers the following as a way to fake a mutex:
#define NUM_MUTEXES 256

typedef struct Mutex {
  uint8_t lock_count;
} Mutex;

static Mutex s_mutexes[NUM_MUTEXES];
static uint32_t s_mutex_index;

// Fake Helpers
void fake_mutex_init(void) {
  memset(s_mutexes, 0, sizeof(s_mutexes));
}

bool fake_mutex_all_unlocked(void) {
  for (int i = 0; i < NUM_MUTEXES; i++) {
    if (s_mutexes[i].lock_count > 0) {
      return false;
    }
  }
  return true;
}

// Implementation
Mutex *mutex_create(void) {
  assert(s_mutex_index < NUM_MUTEXES);
  return &s_mutexes[s_mutex_index++];
}

void mutex_lock(Mutex *mutex) {
  mutex->lock_count++;
}

void mutex_unlock(Mutex *mutex) {
  mutex->lock_count--;
}
For a module that has functions like:
#include "mutex/mutex.h"
static Mutex *s_mutex;
void kv_store_init(lfs_t *lfs) {
  ...
  s_mutex = mutex_create();
}

bool kv_store_write(const char *key, const void *val, uint32_t len) {
  mutex_lock(s_mutex); // New
  ...
  mutex_unlock(s_mutex); // New
  analytics_inc(kSettingsFileWrite);
  return (rv == len);
}
After being set up in a test like:
TEST_GROUP(TestKvStore) {
  void setup() {
    fake_mutex_init();
    ...
  }
  ...
};
I'm confused about a couple of things:
How does fake_mutex_init() cause methods using mutex_lock and mutex_unlock to use the fake lock and unlock?
How does this actually fake mutex locking? Can I produce deadlocks with these fakes? Or, should I just be checking the lock count in my tests?
How does fake_mutex_init() cause methods using mutex_lock and mutex_unlock to use the fake lock and unlock?
It doesn't. It's unrelated.
In the tutorial, the tests are linked with one of the implementations. In the case of this specific test, it is linked with this fake mutex implementation.
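In other words, the code under test only includes a header along the lines of mutex/mutex.h. The sketch below is my guess at what that header declares (the tutorial excerpt above doesn't show it); the point is that whether mutex_create(), mutex_lock() and mutex_unlock() resolve to the real RTOS-backed implementation or to the fake above is decided purely by which .c file is linked into the test binary:

// mutex/mutex.h (sketch, reconstructed - not quoted from the tutorial)
#pragma once

typedef struct Mutex Mutex;

Mutex *mutex_create(void);
void mutex_lock(Mutex *mutex);
void mutex_unlock(Mutex *mutex);

fake_mutex_init() itself only resets the fake's internal bookkeeping between tests; the substitution happens at link time, not at run time.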
How does this actually fake mutex locking?
It just increments some integers inside an array. There is no "locking" involved in any way.
Can I produce deadlocks with these fakes?
No. Because there is no real locking, there is no waiting, so there can be no deadlocks, which occur when two threads wait for each other.
Or, should I just be checking the lock count in my tests?
Not necessarily in the tests themselves: from the tests' point of view, the mutex is an abstract thing.
Adding assert(mutex->lock_count > 0) to mutex_unlock(), to check that your tests do not unlock a mutex twice, seems like an obvious improvement.
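A sketch of that improvement to the fake's unlock (my addition, not code from the tutorial):

void mutex_unlock(Mutex *mutex) {
  // Fail the test at the exact call site that unlocks a mutex which
  // isn't locked (double unlock, or unlock without a prior lock).
  assert(mutex->lock_count > 0);
  mutex->lock_count--;
}

Combined with a check such as fake_mutex_all_unlocked() in the test's teardown, this catches both "unlocked too many times" and "never unlocked" without each test having to inspect individual lock counts.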
Related
I'm creating a timer function for a bit of embedded code that will allow me to bypass certain GPIO checks while a certain process is running, i.e., when the timer is running in a non-blocking manner.
This seems to run just fine the first 11 times the operations occur, but every time, on the 11th iteration the system will crash. The likely culprit is something in how the timer thread is being handled. My guess is there's some bit of memory cleanup that I'm not handling properly and that's leading to memory leaks of some kind. But I'm really not sure.
I can see through debug tracing that the thread is exiting after each iteration.
Here is the timer code:
#include <time.h>
#include <semaphore.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <msp432e4_timer.h>

extern void TaskSleep(uint32_t delay);

static bool timerActive;
static sem_t timerSem;
pthread_t timerThread;
pthread_attr_t attrs;
struct sched_param priParam;

static void *msp432e4_timer(void *argUnused) {
    sem_wait(&timerSem);
    timerActive = true;
    sem_post(&timerSem);

    TaskSleep(40);

    sem_wait(&timerSem);
    timerActive = false;
    sem_post(&timerSem);

    return (NULL);
}

void initTimer() {
    int retc;

    pthread_attr_init(&attrs);
    priParam.sched_priority = 1;
    retc = pthread_attr_setschedparam(&attrs, &priParam);
    retc |= pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);
    retc |= pthread_attr_setstacksize(&attrs, 1024);
    if (retc != 0) {
        // failed to set attributes
        while (1) {}
    }

    timerActive = false;

    if ((sem_init(&timerSem, 0, 0)) != 0) {
        while (1);
    }
    sem_post(&timerSem);
}

/*
 * return true on starting a new timer
 * false implies timer already active
 */
void timerStart() {
    int retc;

    retc = pthread_create(&timerThread, &attrs, msp432e4_timer, NULL);
    if (retc != 0) {
        // pthread_create() failed
        while (1) {}
    }
}

/* return true if timer active */
bool timerCheck() {
    bool retval;

    sem_wait(&timerSem);
    retval = timerActive;
    sem_post(&timerSem);

    return (retval);
}
The TaskSleep function is a wrapper around a FreeRTOS task delay call. It's used at many points throughout the system and has never been an issue.
Hopefully someone can point me in the right direction.
You didn't really post enough of your code to determine where the problem might be, but I thought this might be worth mentioning:
A general problem is that your sample code is open loop with respect to thread creation; that is, there is nothing to throttle it, and if your implementation has particularly slow thread-exit handling, you could have many zombie threads lying around that haven't died yet.
In typical embedded / real-time systems, you want to move resource allocation out of the main loop, since it is often non-deterministic. So, more often you would create a timer thread once and park it until it is needed:
static sem_t request;   /* signalled by TimerStart(), initialised elsewhere with sem_init() */

void *TimerThread(void *arg) {
    while (sem_wait(&request) == 0) {
        msp432e4_timer(arg);
    }
    return 0;
}

void TimerStart(void) {
    sem_post(&request);
}
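For completeness, a rough sketch of the one-time setup that snippet assumes (the sem_init() and pthread_create() calls and the initTimerOnce() name are my assumptions, not code from the question or answer):

static pthread_t timerWorker;

void initTimerOnce(void) {
    /* binary semaphore starts at 0, so TimerThread() parks in sem_wait()
       until TimerStart() posts it */
    if (sem_init(&request, 0, 0) != 0) {
        while (1) {}   /* halt on failure, matching the question's error handling */
    }
    /* create the worker exactly once; it loops forever inside TimerThread() */
    if (pthread_create(&timerWorker, NULL, TimerThread, NULL) != 0) {
        while (1) {}
    }
}

With this structure, the expensive and non-deterministic work (thread creation) happens once at startup, and TimerStart() reduces to a single sem_post().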
For example, I have a variable
struct sth {
    int abcd;
    pthread_mutex_t mutex;
};
I have two methods:
void setSth(int a) {
    lock();
    sth->abcd = a;
    unlock();
}

int getSth() {
    return sth->abcd;
}
sth will never be freed. Is it safe not to use lock/unlock in getSth()? I don't care about accuracy.
By safe, I mean that there will be no segfault or anything like that.
Hi, I wanted to ask what the best solution is for the following problem (explained below).
I have the following memory library code (simplified):
#include <pthread.h>
#include <stddef.h>

// struct is opaque to the caller
typedef struct memory {
    void *ptr;
    size_t size;
    pthread_mutex_t mutex;
} memory;

size_t memory_size(memory *self)
{
    if (self == NULL) {
        return 0;
    }
    {
        size_t size = 0;
        if (pthread_mutex_lock(&self->mutex) == 0) {
            size = self->size;
            (void)pthread_mutex_unlock(&self->mutex);
        }
        return size;
    }
}

void *memory_beginAccess(memory *self)
{
    if (self == NULL) {
        return NULL;
    }
    if (pthread_mutex_lock(&self->mutex) == 0) {
        return self->ptr;
    }
    return NULL;
}

void memory_endAccess(memory *self)
{
    if (self == NULL) {
        return;
    }
    (void)pthread_mutex_unlock(&self->mutex);
}
The problem:
// ....
memory *target = memory_alloc(100);
// ....
{
    void *ptr = memory_beginAccess(target);
    // ^- implicit lock of internal mutex

    operationThatNeedsSize(ptr, memory_size(target));
    // ^- implicit lock of internal mutex causes a deadlock (with fast mutexes)

    memory_endAccess(target);
    // ^- implicit unlock of internal mutex (never reached)
}
So, I thought of three possible solutions:
1.) Use a recursive mutex. (but I heard this is bad practice and should be avoided whenever possible).
2.) Use different function names or a flag parameter:
memory_sizeLocked() / memory_size()
memory_size(TRUE) / memory_size(FALSE)
3.) Check whether pthread_mutex_lock() returns EDEADLK and increment a deadlock counter (and decrement it on unlock). (Same as a recursive mutex?)
So is there another solution for this problem? Or is one of the three solutions above "good enough" ?
Thanks for any help in advance
Use two versions of the same function, one that locks and one that doesn't. This way you will have to modify the least amount of code. It is also logically correct, since you must know whether you are in a critical section of the code or not.
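A rough sketch of that approach applied to memory_size() (the _nolock name and the split are my illustration, not the original library's API):

/* Internal variant: the caller must already hold self->mutex,
   e.g. between memory_beginAccess() and memory_endAccess(). */
static size_t memory_size_nolock(const memory *self)
{
    return self->size;
}

/* Public variant: takes and releases the lock itself. */
size_t memory_size(memory *self)
{
    size_t size = 0;
    if (self == NULL) {
        return 0;
    }
    if (pthread_mutex_lock(&self->mutex) == 0) {
        size = memory_size_nolock(self);
        (void)pthread_mutex_unlock(&self->mutex);
    }
    return size;
}

Code that already holds the lock via memory_beginAccess() calls the _nolock variant; everything else keeps using the locking memory_size().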
I'm having a problem with my C code where I declare a static int variable (as a flag), initialize it to -1 in init() (which is only called once), and then, when I try to update the value to 0 or 1 later on, it keeps reverting back to -1.
Does anyone know what the problem can be?
I don't have any local variables with the same identifier so I'm really lost.
Thanks!
#include <stdlib.h>
#include <unistd.h>

static int previousState;

void init()
{
    previousState = -1;
}

void moveForward(int currentState)
{
    if (previousState == -1)
        previousState = currentState;

    if (previousState != currentState)
    {
        /* do stuff */
        /* PROBLEM: it never goes into here, because previousState is always -1! */
    }
    /* other code */
}

int main(void)
{
    init();

    if (fork() == 0)
    {
        /* do stuff */
        moveForward(1);
        exit(0);
    }
    /* more forks */
    moveForward(0);

    exit(0);
}
Each process calls moveForward just once. Processes do not share static data!
Use threads, or use shared memory. Also use a mutex or semaphore for concurrent access to shared data. Preferably, switch to a language better suited to parallel processing...
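If the processes really do need to share the flag, a minimal sketch of the shared-memory route is an anonymous MAP_SHARED mapping created before fork() (Linux-style; the names reuse the question's, and real code would still guard concurrent access with a process-shared mutex or semaphore, as suggested above):

#include <stdlib.h>
#include <sys/mman.h>

static int *previousState;   /* now lives in memory shared across fork() */

void init(void)
{
    /* MAP_SHARED | MAP_ANONYMOUS: the page is inherited by child processes
       and writes are visible to all of them, unlike ordinary static data */
    previousState = mmap(NULL, sizeof *previousState,
                         PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (previousState == MAP_FAILED)
        exit(1);
    *previousState = -1;
}

moveForward() would then read and write *previousState instead of the plain static int.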
In "The Tools We Work With", the author of the Varnish software expressed his disappointment with the new ISO C standard draft. In particular, he thinks there should be something useful like an "assert that I'm holding this mutex locked" function, and he claims he wrote one in Varnish.
I checked the code. It is essentially like this:
struct ilck {
    unsigned magic;
    pthread_mutex_t mtx;
    int held;
    pthread_t owner;
    VTAILQ_ENTRY(ilck) list;
    const char *w;
    struct VSC_C_lck *stat;
};

void Lck__Lock(struct ilck *ilck, const char *p, const char *f, int l)
{
    int r;

    if (!(params->diag_bitmap & 0x18)) {
        AZ(pthread_mutex_lock(&ilck->mtx));
        AZ(ilck->held);
        ilck->stat->locks++;
        ilck->owner = pthread_self();
        ilck->held = 1;
        return;
    }
    r = pthread_mutex_trylock(&ilck->mtx);
    assert(r == 0 || r == EBUSY);
    if (r) {
        ilck->stat->colls++;
        if (params->diag_bitmap & 0x8)
            VSL(SLT_Debug, 0, "MTX_CONTEST(%s,%s,%d,%s)",
                p, f, l, ilck->w);
        AZ(pthread_mutex_lock(&ilck->mtx));
    } else if (params->diag_bitmap & 0x8) {
        VSL(SLT_Debug, 0, "MTX_LOCK(%s,%s,%d,%s)", p, f, l, ilck->w);
    }
    ilck->stat->locks++;
    ilck->owner = pthread_self();
    ilck->held = 1;
}

void
Lck__Assert(const struct ilck *ilck, int held)
{
    if (held)
        assert(ilck->held &&
               pthread_equal(ilck->owner, pthread_self()));
    else
        assert(!ilck->held ||
               !pthread_equal(ilck->owner, pthread_self()));
}
I omit the implementation of the try-lock and unlock operations since they are basically routine. The place where I have a question is Lck__Assert(), in which the access to ilck->held and ilck->owner is not protected by any mutex.
So consider the following sequence of events:
1. Thread A locks a mutex.
2. Thread A unlocks it.
3. Thread B locks the same mutex. In the course of locking (within Lck__Lock()), thread B is preempted after it updates ilck->held but before it updates ilck->owner. That should be possible because of the optimizer and CPU out-of-order execution.
4. Thread A runs and invokes Lck__Assert(); the assertion holds even though thread A in fact doesn't hold the mutex.
In my opinion there should be some "global" mutex to protect the mutex's own data, or at least some write/read barriers. Is my analysis correct?
I have contacted the author. He says my analysis is valid, and that he is using a memset(..., 0, ...) to unset the pthread_t owner field, since pthread_t is not a transparent struct with a specified assignment operator. Hope that works on most platforms.
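The unlock path isn't quoted above, but based on that description a sketch of what the fix might look like (my reading of it, not the actual Varnish code):

void Lck__Unlock(struct ilck *ilck)
{
    assert(ilck->held);
    ilck->held = 0;
    /* pthread_t is opaque, so there is no portable "null" value to assign;
       wipe the owner with memset() before the mutex is released */
    memset(&ilck->owner, 0, sizeof ilck->owner);
    AZ(pthread_mutex_unlock(&ilck->mtx));
}

Clearing the owner on unlock means no stale thread id is left behind while the lock is free, which is what made the false positive in the sequence above possible.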