Is get_row_of_machine, which receives mon_param as a parameter, thread-safe? - C

mon_param is allocated memory by the main process that invokes the thread function.
This function will be invoked by multiple threads. So, can I safely assume that it is thread-safe, since I am using only variables on the stack?
struct table* get_row_of_machine(int row_num, struct mon_agent *mon_param)
{
    struct table *table_row = mon_param->s_table_rows;
    if (row_num < mon_param->total_states)
    {
        table_row = table_row + row_num;
    }
    return table_row;
}
// in the main function the code goes like this ....
int main()
{
    int msg_type, ret;
    while (!s_interrupted)
    {
        inter_thread_pair = zsock_new(ZMQ_PAIR);
        if (inter_thread_pair != NULL)
            zsock_bind(inter_thread_pair, "inproc://zmq_main_pair");
        int ret_val = zmq_poll(&socket_items[0], 1, 0); // Do not POLL indefinitely.
        if (socket_items[0].revents & ZMQ_POLLIN)
        {
            char *msg = zstr_recv(inter_thread_pair);
            if (msg != NULL)
            {
                struct mon_agent *mon_params;
                // This is where mon_params gets its memory.
                mon_params = (struct mon_agent*)malloc(sizeof(struct mon_agent));
                msg_type = get_msg_type(msg);
                if (msg_type == /* will check for some message type here */)
                {
                    struct thread_sock_params *thd_sock = create_connect_pair_socket(thread_count);
                    // Copy the contents of thread_sock_params and also mon_params to this struct.
                    struct thread_parameters parameters;
                    parameters.sock_params = thd_sock;
                    parameters.params = mon_params; // mon_params getting copied here.
                    // Every time I receive a particular message, I create a new thread and pass on the parameters.
                    // So, each thread gets its own mon_params memory allocated.
                    ret = pthread_create(&thread, NULL, monitoring_thread, (void*)&parameters);
                    // ... and then it goes on like this.
                }
            }
        }
        // ... and the code continues; there is a breakpoint somewhere further down.
    }
}
void* mon_thread(void *data)
{
    // The first time, data is passed as a function parameter; later it will be received as messages.
    struct thread_parameters *th_param = (struct thread_parameters *)data;
    struct mon_agent *mon_params = th_param->params;
    zsock_t* thread_pair_client = zsock_new(ZMQ_PAIR);
    //printf("Value of socket is %s: \n", th_param->socket_ep);
    rc = zsock_connect(thread_pair_client, th_param->sock_params->socket_ep);
    if (rc == -1)
    {
        printf("zmq_connect failed in monitoring thread.\n");
    }
    while (!s_interrupted)
    {
        int row;
        // Logic to maintain the current row.
        // Also receive other messages from the thread_pair_client czmq socket.
        run_machine(row, mon_params);
    }
}
void run_machine(int row_num, struct mon_agent *mon_params)
{
    struct table* table_row = get_row_of_machine(row_num, mon_params);
}

In short, no.
Parameters are made thread-safe by design. There is no foolproof way to do this and no general rule of thumb. If you know your code's design well enough, and you know no other thread will access the same struct, then it is possibly thread-safe.
If you do know that some other thread might try to access the struct, you can use all sorts of synchronization primitives: mutexes, critical sections, semaphores or, more generally, locks.
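As an illustration of making parameters safe by design, here is a minimal sketch of the usual pattern: give every thread its own heap-allocated parameter block and let the thread free it, so no two threads ever share the struct. The names job_params and worker are made up for this sketch, not taken from the code above.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct job_params {        /* hypothetical per-thread parameter block */
    int row_num;
};

static void *worker(void *arg)
{
    struct job_params *p = arg;   /* only this thread ever touches p */
    printf("thread got row %d\n", p->row_num);
    free(p);                      /* the thread owns and frees its own copy */
    return NULL;
}

int main(void)
{
    pthread_t tid[4];
    for (int i = 0; i < 4; i++) {
        struct job_params *p = malloc(sizeof *p);  /* fresh block per thread */
        p->row_num = i;
        pthread_create(&tid[i], NULL, worker, p);  /* pass a heap pointer, not &stack_var */
    }
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
Note that in the posted main() the address of the stack variable parameters is passed to pthread_create; that variable goes out of scope at the end of the loop body, so passing a per-thread heap pointer as above also avoids the new thread reading a struct that has already been reused or destroyed.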

Related

Change value of a struct

I was learning how to use multithreading and I have a question about an exercise I came across.
How can I change the bool value of the structure to true using the function? (I'm bad with pointers.) The lock should be in the main function.
The purpose is to lock a thread and prevent others from executing once that state is reached.
PS: I use pthreads.
typedef struct Data{
    bool used;
} data;

void lock(data *info){
    info->used = true;
}
Use the & operator to get the address of an object. The address is the pointer to the object.
#include <stdio.h>
#include <stdbool.h>

typedef struct Data{
    bool used;
} data;

void lock(data *info){
    info->used = true;
}

int main(int argc, char *argv[])
{
    data my_struct = {0};
    lock(&my_struct);
    if (my_struct.used == true)
        printf("It is true!\n");
    return 0;
}
My understanding of your situation is that you want to use pthread locks in your lock function to guard the write operation (info->used = true).
You should create the pthread_mutex_t (the data structure used for locking) before calling the lock(data *) function. The following is an example.
#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

typedef struct data
{
    bool used;
} data;

pthread_mutex_t spin_lock;

void* lock(void *xxinfo)
{
    if (xxinfo != NULL)
    {
        data *info = (data *)xxinfo;
        pthread_mutex_lock(&spin_lock);
        info->used = true;
        printf("Set the used status\n");
        pthread_mutex_unlock(&spin_lock);
    }
    return NULL;
}

pthread_t threads[2]; // Used it for demonstrating only
int main()
{
    int status = 0;
    data some_data;
    if (0 != pthread_mutex_init(&spin_lock, NULL))
    {
        printf("Error: Could not initialize the lock\n");
        return -1;
    }
    status = pthread_create(&threads[0], NULL, &lock, &some_data);
    if (status != 0)
    {
        printf("Error: Could not create 0th thread\n");
    }
    status = pthread_create(&threads[1], NULL, &lock, &some_data);
    if (status != 0)
    {
        printf("Error: Could not create 1st thread\n");
    }
    pthread_join(threads[0], NULL);
    pthread_join(threads[1], NULL);
    pthread_mutex_destroy(&spin_lock);
    return 0;
}
In this example I am using a global spin_lock (which is not a great idea); in your code, keep the lock in an appropriate scope. I have created two threads here purely for demonstration; to my understanding they don't race at all. I hope this gives you an idea of how to use pthread locks in your case. You should hold the lock only around the part of the code that modifies or reads the data.
Note that you must create the lock with pthread_mutex_init before creating the threads. You can also pass the lock as a parameter to the threads.
Destroy the lock after you are done using it.
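To illustrate that last point, here is one way to pass the mutex to each thread alongside the data instead of using a global. This is only a sketch; the wrapper struct thread_arg and the function lock_and_set are made-up names.
#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

typedef struct data { bool used; } data;

/* hypothetical wrapper bundling the shared data with the lock that guards it */
typedef struct thread_arg {
    data            *info;
    pthread_mutex_t *lock;
} thread_arg;

static void *lock_and_set(void *p)
{
    thread_arg *arg = p;
    pthread_mutex_lock(arg->lock);   /* guard the write */
    arg->info->used = true;
    pthread_mutex_unlock(arg->lock);
    return NULL;
}

int main(void)
{
    data some_data = { false };
    pthread_mutex_t lock;
    pthread_mutex_init(&lock, NULL);

    thread_arg arg = { &some_data, &lock };
    pthread_t t0, t1;
    pthread_create(&t0, NULL, lock_and_set, &arg);
    pthread_create(&t1, NULL, lock_and_set, &arg);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    pthread_mutex_destroy(&lock);
    printf("used = %d\n", some_data.used);
    return 0;
}
Bundling the data and its lock in one argument keeps the pair together, which makes it harder to forget which mutex guards which data.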

Understanding Glib polling system for file descriptors

I'm trying to understand GLib's polling system. As I understand it, polling is a technique for watching file descriptors for events. The function os_host_main_loop_wait runs in a loop. You can see that it calls glib_pollfds_fill, qemu_poll_ns and glib_pollfds_poll. I'm trying to understand what this loop does through each of these calls.
static GArray *gpollfds;

static void glib_pollfds_fill(int64_t *cur_timeout)
{
    GMainContext *context = g_main_context_default();
    int timeout = 0;
    int64_t timeout_ns;
    int n;

    g_main_context_prepare(context, &max_priority);

    glib_pollfds_idx = gpollfds->len;
    n = glib_n_poll_fds;
    do {
        GPollFD *pfds;
        glib_n_poll_fds = n;
        g_array_set_size(gpollfds, glib_pollfds_idx + glib_n_poll_fds);
        // Gets the current index's address on the gpollfds array
        pfds = &g_array_index(gpollfds, GPollFD, glib_pollfds_idx);
        // Fills each element of gpollfds (pfds) with the file descriptors to be polled
        n = g_main_context_query(context, max_priority, &timeout, pfds,
                                 glib_n_poll_fds);
        // g_main_context_query returns the number of records actually stored in fds, or,
        // if more than n_fds records need to be stored, the number of records that need to be stored.
    } while (n != glib_n_poll_fds);

    if (timeout < 0) {
        timeout_ns = -1;
    } else {
        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
    }

    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
}

static void glib_pollfds_poll(void)
{
    GMainContext *context = g_main_context_default();
    GPollFD *pfds = &g_array_index(gpollfds, GPollFD, glib_pollfds_idx);

    if (g_main_context_check(context, max_priority, pfds, glib_n_poll_fds)) {
        g_main_context_dispatch(context);
    }
}

static int os_host_main_loop_wait(int64_t timeout)
{
    GMainContext *context = g_main_context_default();
    int ret;

    g_main_context_acquire(context);

    glib_pollfds_fill(&timeout);

    qemu_mutex_unlock_iothread();
    replay_mutex_unlock();

    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout); // RESOLVES TO: g_poll(fds, nfds, qemu_timeout_ns_to_ms(timeout));

    replay_mutex_lock();
    qemu_mutex_lock_iothread();

    glib_pollfds_poll();

    g_main_context_release(context);

    return ret;
}
So, as I understand it, g_poll polls the array of file descriptors with a timeout. What does that mean? Does it just wait for the timeout? If something happens (for example, there is data to be read in one of the fds), I don't know what it does.
Then glib_pollfds_poll calls g_main_context_check and then g_main_context_dispatch.
According to GLib's documentation, what g_main_context_check does is:
Passes the results of polling back to the main loop.
What does that mean?
Then g_main_context_dispatch
dispatches all sources
, which I also don't know the meaning of.
The entire source can be found here: https://github.com/qemu/qemu/blob/14e5526b51910efd62cd31cd95b49baca975c83f/util/main-loop.c
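For reference, the three QEMU helpers above follow GLib's documented protocol for running a GMainContext by hand: prepare/query collect the file descriptors and the timeout, a poll call waits on them, and check/dispatch feed the poll results back to the event sources and run the callbacks of the sources that became ready. Below is a minimal sketch of one such iteration written directly against the GLib API (the function name iterate_once is made up); it only shows how the calls fit together and is not QEMU's code.
#include <glib.h>

/* One hand-rolled iteration of the default GMainContext, roughly what
 * g_main_context_iteration() does internally. Sketch only. */
static void iterate_once(void)
{
    GMainContext *ctx = g_main_context_default();
    gint max_priority, timeout, nfds, allocated = 8;
    GPollFD *fds = g_new(GPollFD, allocated);

    g_main_context_acquire(ctx);
    g_main_context_prepare(ctx, &max_priority);

    /* Ask the context which fds it wants polled; grow the array if needed. */
    while ((nfds = g_main_context_query(ctx, max_priority, &timeout,
                                        fds, allocated)) > allocated) {
        allocated = nfds;
        fds = g_renew(GPollFD, fds, allocated);
    }

    /* Wait until an fd becomes ready or the timeout expires. */
    g_poll(fds, nfds, timeout);

    /* Hand the poll results back to the sources; dispatch the ready ones. */
    if (g_main_context_check(ctx, max_priority, fds, nfds))
        g_main_context_dispatch(ctx);

    g_main_context_release(ctx);
    g_free(fds);
}
Read against that sketch: g_poll blocks until one of the watched fds reports an event or the timeout expires, recording the result in each GPollFD's revents field; g_main_context_check hands those revents back to the sources so each can decide whether it is ready; and g_main_context_dispatch then invokes the callbacks of the ready sources.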

Event.h library using event_new() function in C

First off, I am not a programmer; I do electrical engineering. I have done some programming, but would never say that I am a good programmer. This question will probably be downvoted, but that is OK, because I have been trying to do this for two months now.
I know nothing about event.h, but I have existing code that works and uses it. It goes like this (I changed some things to hide information, but the code works):
struct event_base *base;
struct event *read_event;
struct event *signal_event;

typedef struct sample_ctx {
    sens_handle_t *sens_handler;
    sens_data_t data;
} sample_ctx_t;

// signal handler to break the event loop
void
signal_handler(evutil_socket_t sock, short event, void *user_data)
{
    event_base_loopbreak(base);
}

// receive callback
void
sens_recv_cb(evutil_socket_t sock, short event, void *user_data)
{
    static int i = 0;
    int timeout = 0;
    static struct timeval timestamp;
    struct timeval timestamp2;
    struct timeval diff;
    sens_status_t status;
    sample_ctx_t *ctx;

    ctx = (sample_ctx_t *)user_data;
    if (i == 0) {
        gettimeofday(&timestamp, NULL);
        i = 1;
    }
    status = sens_read(&ctx->data, ctx->sens_handler);
    if ((status == SENS_SUCCESS) &&
        !isnan(ctx->data.info1) &&
        !isnan(ctx->data.info2) &&
        !isnan(ctx->data.info3) &&
        !isnan(ctx->data.info4)) {
        fprintf(stderr, "%lf %lf %lf %lf\n",
                ctx->data.info1,
                ctx->data.info2,
                ctx->data.info3,
                ctx->data.info4);
        gettimeofday(&timestamp, NULL);
    } else {
        gettimeofday(&timestamp2, NULL);
        timersub(&timestamp2, &timestamp, &diff);
        timeout = diff.tv_sec + (diff.tv_usec / 1000000);
    }
}

int main()
{
    int fd;
    status_t status;
    sample_ctx_t ctx;

    memset(&ctx, 0, sizeof(ctx));
    status = sensor_open(&fd, &ctx.sens_handler);
    if (status != V2X_SUCCESS) {
        fprintf(stderr, "Open failed ... sensor might not be running\n");
        goto deinit_4;
    }
    base = event_base_new();
    if (!base) {
        fprintf(stderr, "Failed to create event base\n");
        goto deinit_3;
    }
    // register for the read events
    read_event = event_new(base, fd, EV_PERSIST|EV_READ, sens_recv_cb, &ctx);
    if (!read_event) {
        fprintf(stderr, "Failed to create read event\n");
        goto deinit_2;
    }
    // register for the SIGINT signal on the ctrl + c key combo
    signal_event = evsignal_new(base, SIGINT, signal_handler, NULL);
    if (!signal_event) {
        fprintf(stderr, "Failed to create signal event\n");
        goto deinit_1;
    }
    event_add(read_event, NULL);
    evsignal_add(signal_event, NULL);
    event_base_dispatch(base);
    evsignal_del(signal_event);
deinit_1:
    event_free(read_event);
deinit_2:
    event_base_free(base);
deinit_3:
    sensor_close(ctx.sens_handler);
deinit_4:
    return 0;
}
This code retrieves data from a sensor and prints it to the screen. Its purpose is pretty simple, but the way it has to be done is what is complicated, for me at least.
OK, so in the sens_recv_cb function, ctx->data is printed to the screen, but I need to access that in the main function. The only place this function is referenced is in the event_new call in main. Is there a way to get that data in main? Let's say I just want to print ctx->data.info1 in main while still printing everything from before in the sens_recv_cb function.
Is what I want to do possible without changing the entire code?
Because main and sens_recv_cb are asynchronous, you'll need a way to signal between them and a way for the callback to store the data. You can combine both with a linked list:
struct node {
    sample_ctx_t data;
    struct node *next;
    struct node *previous;
};

struct node *head = NULL;
struct node *tail = NULL;
The event handler adds nodes at the head of the list and the main function removes them from the tail; it's a FIFO. You'll need to use atomic operations when reading and writing the list. The links provide what you need to know, and if you search you'll find lots of example code here and on other sites. You can probably find an open-source, thread-safe linked-list implementation on GitHub.
Basically, when the list is empty, there's nothing for main to consume.
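As a concrete sketch of that hand-off, here is one way it could look using a pthread mutex and condition variable instead of the atomic operations mentioned above, enqueueing at the tail and dequeueing at the head so a singly linked list is enough. The names queue_push and queue_pop are made up, and sens_data_t is a stand-in for the sensor library's type.
#include <pthread.h>
#include <stdlib.h>

/* Stand-in for the sensor library's data type from the question. */
typedef struct { double info1, info2, info3, info4; } sens_data_t;

struct node {
    sens_data_t data;
    struct node *next;
};

static struct node *head = NULL, *tail = NULL;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

/* Producer side (e.g. called from sens_recv_cb after a successful read):
 * append a copy of the reading at the tail of the list. */
void queue_push(const sens_data_t *d)
{
    struct node *n = malloc(sizeof *n);
    n->data = *d;
    n->next = NULL;
    pthread_mutex_lock(&q_lock);
    if (tail) tail->next = n; else head = n;
    tail = n;
    pthread_cond_signal(&q_cond);   /* wake a consumer waiting in queue_pop */
    pthread_mutex_unlock(&q_lock);
}

/* Consumer side: block until a reading is available, then remove it
 * from the head of the list. */
sens_data_t queue_pop(void)
{
    pthread_mutex_lock(&q_lock);
    while (head == NULL)
        pthread_cond_wait(&q_cond, &q_lock);
    struct node *n = head;
    head = n->next;
    if (head == NULL) tail = NULL;
    pthread_mutex_unlock(&q_lock);
    sens_data_t out = n->data;
    free(n);
    return out;
}
The callback would call queue_push(&ctx->data) right after a successful sens_read, and the consumer (main, or another thread, since main is busy inside event_base_dispatch) would call queue_pop() whenever it wants the next reading, for example to print info1.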

Writing a scheduler for a Userspace thread library

I am developing a userspace preemptive thread library (fibre) that uses context switching as the base approach. For this I wrote a scheduler. However, it's not performing as expected. Could I have any suggestions?
The structure of the thread_t used is:
typedef struct thread_t {
    int thr_id;
    int thr_usrpri;
    int thr_cpupri;
    int thr_totalcpu;
    ucontext_t thr_context;
    void * thr_stack;
    int thr_stacksize;
    struct thread_t *thr_next;
    struct thread_t *thr_prev;
} thread_t;
The scheduling function is as follows:
void schedule(void)
{
    thread_t *t1, *t2;
    thread_t *newthr = NULL;
    int newpri = 127;
    struct itimerval tm;
    ucontext_t dummy;
    sigset_t sigt;

    t1 = ready_q;
    // Select the thread with the highest priority
    while (t1 != NULL)
    {
        if (newpri > t1->thr_usrpri + t1->thr_cpupri)
        {
            newpri = t1->thr_usrpri + t1->thr_cpupri;
            newthr = t1;
        }
        t1 = t1->thr_next;
    }
    if (newthr == NULL)
    {
        if (current_thread == NULL)
        {
            // No more threads? (stop itimer)
            tm.it_interval.tv_usec = 0;
            tm.it_interval.tv_sec = 0;
            tm.it_value.tv_usec = 0; // ZERO Disable
            tm.it_value.tv_sec = 0;
            setitimer(ITIMER_PROF, &tm, NULL);
        }
        return;
    }
    else
    {
        // TO DO :: Reenabling of signals must be done.
        // Switch to the new thread
        if (current_thread != NULL)
        {
            t2 = current_thread;
            current_thread = newthr;
            timeq = 0;
            sigemptyset(&sigt);
            sigaddset(&sigt, SIGPROF);
            sigprocmask(SIG_UNBLOCK, &sigt, NULL);
            swapcontext(&(t2->thr_context), &(current_thread->thr_context));
        }
        else
        {
            // No current thread? It might have terminated.
            current_thread = newthr;
            timeq = 0;
            sigemptyset(&sigt);
            sigaddset(&sigt, SIGPROF);
            sigprocmask(SIG_UNBLOCK, &sigt, NULL);
            swapcontext(&(dummy), &(current_thread->thr_context));
        }
    }
}
It seems that ready_q (the head of the list of ready threads?) never changes, so the search for the highest-priority thread always finds the first suitable element. If two threads have the same priority, only the first one ever gets a chance to gain the CPU. There are many algorithms you can use: some are based on dynamically changing the priority, others use a kind of rotation inside the ready queue. In your example you could remove the selected thread from its place in the ready queue and put it at the last place (it's a doubly linked list, so the operation is trivial and quite inexpensive).
Also, I'd suggest you consider the performance cost of the linear search over ready_q, since it may become a problem when the number of threads is large. In that case a more sophisticated structure may help, with separate lists of threads for different priority levels.
Bye!
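A rough sketch of that rotation idea follows. It assumes ready_q is a non-circular doubly linked list using the thr_prev/thr_next fields from the thread_t above, and it introduces a ready_q_tail pointer that the original code may not have; move_to_tail is a made-up helper name. schedule() would call it on newthr right after selecting it.
/* Assumed extra tail pointer for the ready queue (not in the original code). */
extern thread_t *ready_q;
static thread_t *ready_q_tail;

/* Move the thread just picked by schedule() to the back of the ready queue,
 * so threads of equal priority take turns instead of the first one always winning. */
static void move_to_tail(thread_t *thr)
{
    if (thr == ready_q_tail)          /* already last: nothing to do */
        return;

    /* Unlink thr from its current position. */
    if (thr->thr_prev != NULL)
        thr->thr_prev->thr_next = thr->thr_next;
    else
        ready_q = thr->thr_next;      /* thr was the head */
    if (thr->thr_next != NULL)
        thr->thr_next->thr_prev = thr->thr_prev;

    /* Append thr at the tail. */
    thr->thr_prev = ready_q_tail;
    thr->thr_next = NULL;
    if (ready_q_tail != NULL)
        ready_q_tail->thr_next = thr;
    else
        ready_q = thr;                /* the queue was otherwise empty */
    ready_q_tail = thr;
}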

Deallocating memory in multi-threaded environment

I'm having a hard time figuring out how to manage deallocation of memory in multithreaded environments. Specifically, what I'm struggling with is using a lock to protect a structure: when it's time to free the structure, you have to unlock the lock in order to destroy the lock itself, and that causes problems if a separate thread is waiting on the same lock you need to destroy.
I'm trying to come up with a mechanism that has retain counts, where the object is freed once its retain count reaches 0. I've been trying a number of different things but just can't get it right. As I've been working on this, it seems like you can't put the locking mechanism inside the structure that you need to free and destroy, because that requires you to unlock the lock inside it, which could allow another thread to proceed if it was blocked in a lock request for that same structure. That would mean something undefined is guaranteed to happen: the lock has been destroyed and deallocated, so you either get memory access errors or you lock on undefined behavior.
Would someone mind looking at my code? I was able to put together a sandboxed example that demonstrates what I'm trying to do without a bunch of files.
http://pastebin.com/SJC86GDp
#include <pthread.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

struct xatom {
    short rc;
    pthread_rwlock_t * rwlck;
};
typedef struct xatom xatom;

struct container {
    xatom * atom;
};
typedef struct container container;

#define nr 1
#define nw 2

pthread_t readers[nr];
pthread_t writers[nw];

container * c;

void retain(container * cont);
void release(container ** cont);
short retain_count(container * cont);

void * rth(void * arg) {
    short rc;
    while(1) {
        if(c == NULL) break;
        rc = retain_count(c);
    }
    printf("rth exit!\n");
    return NULL;
}

void * wth(void * arg) {
    while(1) {
        if(c == NULL) break;
        release((container **)&c);
    }
    printf("wth exit!\n");
    return NULL;
}

short retain_count(container * cont) {
    short rc = 1;
    pthread_rwlock_rdlock(cont->atom->rwlck);
    printf("got rdlock in retain_count\n");
    rc = cont->atom->rc;
    pthread_rwlock_unlock(cont->atom->rwlck);
    return rc;
}

void retain(container * cont) {
    pthread_rwlock_wrlock(cont->atom->rwlck);
    printf("got retain write lock\n");
    cont->atom->rc++;
    pthread_rwlock_unlock(cont->atom->rwlck);
}

void release(container ** cont) {
    if(!cont || !(*cont)) return;
    container * tmp = *cont;
    pthread_rwlock_t ** lock = (pthread_rwlock_t **)&(*cont)->atom->rwlck;
    pthread_rwlock_wrlock(*lock);
    printf("got release write lock\n");
    if(!tmp) {
        printf("return 2\n");
        pthread_rwlock_unlock(*lock);
        if(*lock) {
            printf("destroying lock 1\n");
            pthread_rwlock_destroy(*lock);
            *lock = NULL;
        }
        return;
    }
    tmp->atom->rc--;
    if(tmp->atom->rc == 0) {
        printf("deallocating!\n");
        *cont = NULL;
        pthread_rwlock_unlock(*lock);
        if(pthread_rwlock_trywrlock(*lock) == 0) {
            printf("destroying lock 2\n");
            pthread_rwlock_destroy(*lock);
            *lock = NULL;
        }
        free(tmp->atom->rwlck);
        free(tmp->atom);
        free(tmp);
    } else {
        pthread_rwlock_unlock(*lock);
    }
}

container * new_container() {
    container * cont = malloc(sizeof(container));
    cont->atom = malloc(sizeof(xatom));
    cont->atom->rwlck = malloc(sizeof(pthread_rwlock_t));
    pthread_rwlock_init(cont->atom->rwlck, NULL);
    cont->atom->rc = 1;
    return cont;
}

int main(int argc, char ** argv) {
    c = new_container();
    int i = 0;
    int l = 4;
    for(i = 0; i < l; i++) retain(c);
    for(i = 0; i < nr; i++) pthread_create(&readers[i], NULL, &rth, NULL);
    for(i = 0; i < nw; i++) pthread_create(&writers[i], NULL, &wth, NULL);
    sleep(2);
    for(i = 0; i < nr; i++) pthread_join(readers[i], NULL);
    for(i = 0; i < nw; i++) pthread_join(writers[i], NULL);
    return 0;
}
Thanks for any help!
Yes, you can't put the key inside the safe. Your approach with a refcount (create the object when it is requested and doesn't exist, delete it on the last release) is correct. But the lock must exist at least a moment before the object is created and a moment after it is destroyed, that is, for as long as it is used. You can't delete it from inside itself.
On the other hand, you don't need countless locks, one for each object you create. A single lock that serializes obtaining and releasing of all objects will cost almost nothing in performance. So just create that lock at init and destroy it at program end. Obtaining or releasing an object should take so little time that a lock on variable A blocking access to an unrelated variable B should almost never happen. If it does happen, you can still introduce one lock for all rarely obtained variables and one per each frequently obtained one.
Also, there seems to be no point in an rwlock here; a plain mutex suffices, and the create/destroy operations MUST exclude each other, not just parallel instances of themselves, so use the pthread_mutex_*() family instead.
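A minimal sketch of that design, assuming every access to the object goes through one long-lived mutex (created before any object exists, destroyed only at program end; a static initializer stands in for an explicit init-at-startup). The names obj_t, obj_retain and obj_release are made up for this illustration.
#include <pthread.h>
#include <stdlib.h>

/* One program-lifetime mutex guards every retain/release, so no lock ever
 * has to be destroyed from "inside" the object it protects. */
static pthread_mutex_t obj_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct obj {
    int   rc;
    void *payload;
} obj_t;

obj_t *obj_new(void)
{
    obj_t *o = malloc(sizeof *o);
    o->rc = 1;
    o->payload = NULL;
    return o;
}

void obj_retain(obj_t *o)
{
    pthread_mutex_lock(&obj_lock);
    o->rc++;
    pthread_mutex_unlock(&obj_lock);
}

/* Returns 1 if the object was freed. */
int obj_release(obj_t **op)
{
    int freed = 0;
    pthread_mutex_lock(&obj_lock);
    obj_t *o = *op;
    if (o && --o->rc == 0) {
        *op = NULL;        /* clear the shared pointer while still holding the lock */
        free(o->payload);
        free(o);
        freed = 1;
    }
    pthread_mutex_unlock(&obj_lock);
    return freed;
}
Because the mutex belongs to the program rather than to the object, obj_release can free the object while holding the lock and then unlock; the unlock touches only the global mutex, never the freed memory.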

Resources