Non-simultaneous memory use from multiple threads & caching - c

I have a main thread that maintains an array of pointers to some data. At some point, it spawns a new thread and passes one of the pointers to it. After that moment it does not use that pointer. The thread does its job (possibly modifies the pointed data) and uses a pipe to tell the main thread that it can use that pointer again.
main thread:
struct connection *connections[4];
// initialize connections
while (1)
{
    // ...
    if (...)
    {
        pipe(p);
        connections[i]->control = p[1];
        pthread_create(&thread_id, 0, &handler, connections[i]);
        pthread_detach(thread_id);
        // ...
    }
    // ...
    if (pipe_data_available[i])
    {
        // do something with connections[i]
    }
    // ...
}
other thread:
void *handler(void *arg)
{
    struct connection *connection = arg;
    // do something with connection
    write(connection->control, data, data_size);
    return 0;
}
The two threads access the same memory but never at the same time (the main thread does not touch the pointer when the spawned thread uses it).
I have concerns that the main thread may not see the modifications of connections[i] done by handler (due to cache). Can this happen and, if yes, what is the best way to make sure the main thread sees the modifications?

No, there will not be any cache issue between threads. This is managed and cannot happen.
The mechanism is called cache coherence. Even if both threads are running on different cores with different caches, the cache is properly invalidated between cores.
The only possible problem arises when both threads access the memory at the same time. It seems from your question that you avoid this problem by using pipes; I'm not familiar with pipes, but this is more often done with an API object called a "mutex".
http://mortoray.com/2010/11/18/cpu-memory-why-do-i-need-a-mutex/

In practice this will work on most architectures you're likely to run this code on. If you want to be standards-compliant, read http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_11.
It says that memory is not guaranteed to be synchronized between threads unless you call one of the pthread locking functions (or a few other listed functions). You can't be guaranteed that the architecture you're running on is cache coherent, even though (almost?) everything modern is.
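For example, if you want the guarantee to come from POSIX rather than from the hardware, you could protect each connection with its own mutex. This is only a sketch, not the poster's code: the lock field is an addition, everything else follows the structure of the question.

#include <pthread.h>
#include <unistd.h>

struct connection {
    pthread_mutex_t lock;   /* added: lock/unlock synchronizes memory per POSIX */
    int control;            /* write end of the pipe, as in the question        */
    /* ... the data the worker modifies ... */
};

void *handler(void *arg)
{
    struct connection *connection = arg;

    pthread_mutex_lock(&connection->lock);
    /* ... modify the connection's data ... */
    pthread_mutex_unlock(&connection->lock);

    write(connection->control, "x", 1);   /* tell the main thread it may proceed */
    return 0;
}

/* The main thread then takes connection->lock (even briefly) before touching
   connections[i] again after reading the notification from the pipe. */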

Related

How to use sched_yield() properly?

For an assignment, I need to use sched_yield() to synchronize threads. I understand a mutex lock/conditional variables would be much more effective, but I am not allowed to use those.
The only functions we are allowed to use are sched_yield(), pthread_create(), and pthread_join(). We cannot use mutexes, locks, semaphores, or any type of shared variable.
I know sched_yield() is supposed to relinquish the CPU so another thread can run, moving the calling thread to the back of the run queue.
The code below is supposed to print 'abc' in order and then the newline after all three threads have executed. I put sched_yield() in loops in functions b() and c() because it wasn't working as I expected, but I'm pretty sure all that does is delay the printing because the loop runs so many times, not because sched_yield() is actually working.
The server it needs to run on has 16 CPUs. I saw somewhere that sched_yield() may immediately assign the thread to a new CPU.
Essentially I'm unsure of how, using only sched_yield(), to synchronize these threads given everything I could find and troubleshoot with online.
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include <sched.h>

void* a(void*);
void* b(void*);
void* c(void*);

int main( void ){
    pthread_t a_id, b_id, c_id;

    pthread_create(&a_id, NULL, a, NULL);
    pthread_create(&b_id, NULL, b, NULL);
    pthread_create(&c_id, NULL, c, NULL);

    pthread_join(a_id, NULL);
    pthread_join(b_id, NULL);
    pthread_join(c_id, NULL);

    printf("\n");
    return 0;
}

void* a(void* ret){
    printf("a");
    return ret;
}

void* b(void* ret){
    for(int i = 0; i < 10; i++){
        sched_yield();
    }
    printf("b");
    return ret;
}

void* c(void* ret){
    for(int i = 0; i < 100; i++){
        sched_yield();
    }
    printf("c");
    return ret;
}
There are 4 cases:
a) the scheduler doesn't use multiplexing (e.g. doesn't use "round robin" but uses "highest priority thread that can run does run", or "earliest deadline first", or ...) and sched_yield() does nothing.
b) the scheduler does use multiplexing in theory, but you have more CPUs than threads so the multiplexing doesn't actually happen, and sched_yield() does nothing. Note: with 16 CPUs and 2 threads, this is likely what you'd get for the "default scheduling policy" on an OS like Linux - the sched_yield() just does a "Hrm, no other thread I could use this CPU for, so I guess the calling thread can keep using the same CPU!".
c) the scheduler does use multiplexing and there are more threads than CPUs, but to improve performance (avoid task switches) the scheduler designer decided that sched_yield() does nothing.
d) sched_yield() does cause a task switch (yielding the CPU to some other task), but that is not enough to do any kind of synchronization on its own (e.g. you'd need an atomic variable or something for the actual synchronization - maybe like "while( atomic_variable_not_set_by_other_thread ) { sched_yield(); }"). Note that with an atomic variable (introduced in C11) it'd work without sched_yield() - the sched_yield() (if it does anything) merely makes busy waiting less awful/wasteful.
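To make (d) concrete, here is a minimal sketch of such a busy wait. The flag name is invented; it assumes a C11 compiler with <stdatomic.h>.

#include <sched.h>
#include <stdatomic.h>

atomic_int other_thread_done;        // set to 1 by the other thread when it has finished

void wait_for_other_thread(void)
{
    // The atomic does the actual synchronization; sched_yield() only makes
    // the busy wait less wasteful while we spin on the flag.
    while (!atomic_load(&other_thread_done))
        sched_yield();
}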
"Essentially I'm unsure of how, using only sched_yield(), to synchronize these threads given everything I could find and troubleshoot with online."
That would be because sched_yield() is not well suited to the task. As I wrote in comments, sched_yield() is about scheduling, not synchronization. There is a relationship between the two, in the sense that synchronization events affect which threads are eligible to run, but that goes in the wrong direction for your needs.
You are probably looking at the problem from the wrong end. You need each of your threads to wait to execute until it is their turn, and for them to do that, they need some mechanism to convey information among them about whose turn it is. There are several alternatives for that, but if "only sched_yield()" is taken to mean that no library functions other than sched_yield() may be used for that purpose then a shared variable seems the expected choice. The starting point should therefore be how you could use a shared variable to make the threads take turns in the appropriate order.
Flawed starting point
Here is a naive approach that might spring immediately to mind:
/* FLAWED */
void *b(void *data){
    char *whose_turn = data;
    while (*whose_turn != 'b') {
        // nothing?
    }
    printf("b");
    *whose_turn = 'c';
    return NULL;
}
That is, the thread executes a busy loop, monitoring the shared variable to await it taking a value signifying that the thread should proceed. When it has done its work, the thread modifies the variable to indicate that the next thread may proceed. But there are several problems with that, among them:
1. Supposing that at least one other thread writes to the object designated by *whose_turn, the program contains a data race, and therefore its behavior is undefined. As a practical matter, a thread that once entered the body of the loop in that function might loop infinitely, notwithstanding any action by other threads.
2. Without making additional assumptions about thread scheduling, such as a fairness policy, it is not safe to assume that the thread that will make the needed modification to the shared variable will be scheduled in bounded time.
3. While a thread is executing the loop in that function, it prevents any other thread from executing on the same core, yet it cannot make progress until some other thread takes action. To the extent that we can assume preemptive thread scheduling, this is an efficiency issue and contributory to (2). However, if we assume neither preemptive thread scheduling nor the threads being scheduled each on a separate core then this is an invitation to deadlock.
Possible improvements
The conventional and most appropriate way to do that in a pthreads program is with the use of a mutex and condition variable. Properly implemented, that resolves the data race (issue 1) and it ensures that other threads get a chance to run (issue 3). If that leaves no other threads eligible to run besides the one that will modify the shared variable then it also addresses issue 2, to the extent that the scheduler is assumed to grant any CPU to the process at all.
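A sketch of that conventional approach for thread b, for comparison only since these primitives are not allowed here; a() and c() would follow the same pattern, and the variable names are invented:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t turn_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  turn_cond = PTHREAD_COND_INITIALIZER;
static char whose_turn = 'a';        // shared turn variable

void *b(void *unused)
{
    (void) unused;
    pthread_mutex_lock(&turn_lock);
    while (whose_turn != 'b')                    // sleep until it is b's turn
        pthread_cond_wait(&turn_cond, &turn_lock);
    printf("b");
    whose_turn = 'c';                            // hand the turn to the next thread
    pthread_cond_broadcast(&turn_cond);          // wake the other waiters so they re-check
    pthread_mutex_unlock(&turn_lock);
    return NULL;
}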
But you are forbidden to do that, so what else is available? Well, you could make the shared variable _Atomic. That would resolve the data race, and in practice it would likely be sufficient for the wanted thread ordering. In principle, however, it does not resolve issue 3, and as a practical matter, it does not use sched_yield(). Also, all that busy-looping is wasteful.
But wait! You have a clue in that you are told to use sched_yield(). What could that do for you? Suppose you insert a call to sched_yield() in the body of the busy loop:
/* (A bit) better */
void* b(void *data){
    char *whose_turn = data;
    while (*whose_turn != 'b') {
        sched_yield();
    }
    printf("b");
    *whose_turn = 'c';
    return NULL;
}
That resolves issues 2 and 3, explicitly affording the possibility for other threads to run and putting the calling thread at the tail of the scheduler's thread list. Formally, it does not resolve issue 1 because sched_yield() has no documented effect on memory ordering, but in practice, I don't think it can be implemented without a (full) memory barrier. If you are allowed to use atomic objects then combining an atomic shared variable with sched_yield() would tick all three boxes. Even then, however, there would still be a bunch of wasteful busy-looping.
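If atomics were allowed, a sketch of that combination might look like the following, assuming main() declares the shared variable as _Atomic char, passes its address, and a() and c() follow the same pattern:

/* Sketch only: _Atomic turn variable plus sched_yield()
   (requires <stdatomic.h>, <sched.h>, <stdio.h>) */
void *b(void *data){
    _Atomic char *whose_turn = data;
    while (atomic_load(whose_turn) != 'b') {  // no data race on the shared variable
        sched_yield();                        // give the CPU away while waiting
    }
    printf("b");
    atomic_store(whose_turn, 'c');            // pass the turn to the next thread
    return NULL;
}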
Final remarks
Note well that pthread_join() is a synchronization function, thus, as I understand the task, you may not use it to ensure that the main thread's output is printed last.
Note also that I have not spoken to how the main() function would need to be modified to support the approach I have suggested. Changes would be needed for that, and they are left as an exercise.

Recursive Multithreading in C

I'm creating a function that searches through a directory, prints out files, and when it runs into a folder, a new thread is created to run through that folder and do the same thing.
It makes sense to me to use recursion then as follows:
pthread_t tid[500];
int i = 0;

void *search(void *dir)
{
    struct dirent *dp;
    DIR *df;
    df = opendir(dir);
    char curFile[100];
    while ((dp = readdir(df)) != NULL)
    {
        sprintf(curFile, "%s/%s", dir, dp->d_name);
        if (isADirectory(curFile))
        {
            pthread_create(&tid[i], NULL, &search, &curFile);
            i++;
        }
        else
        {
            printf("%s\n", curFile);
        }
    }
    pthread_join(tid[i], NULL);
    return 0;
}
When I do this, however, the function ends up trying to access directories that don't actually exist. Initially I had pthread_join() directly after pthread_create(), which worked, but I don't know if you can count that as multithreading since each thread waits for its worker thread to exit before doing anything.
Is the recursive aspect of this problem even possible, or is it necessary for a new thread to call a different function other than itself?
I haven't dealt with multithreading in a while, but if memory serves, threads share resources. That means (in your example) every new thread you make accesses the same variable "i". Now if those threads only read variable "i" there would be no problem whatsoever (every thread keeps reading ... i = 2, woohoo :D).
But issues arise when threads share resources that are being both read and written.
i = 2
i++
// there are many threads running this code
// and "i" is shared among them, are you sure i = 3?
The problem of reading and writing shared resources is solved with thread synchronization. I recommend reading up on (or googling) it, since it's too broad a topic to cover in one answer.
P.S. I pointed out variable "i" in your code but there may be more such resources since your code doesn't display any attempt at thread synchronization.
Consider your while loop. Inside it you have:
sprintf(curFile, "%s/%s",dir,dp->d_name);
and
pthread_create(&tid[i], NULL, &search, &curFile);
So, you mutate the contents of curFile inside the loop, and you also create a thread to which you are trying to pass the current contents of curFile. This is a spectacular race hazard - there is no guarantee that the new thread will see the intended contents of curFile, since they may have changed in the meantime. You need to duplicate the string and pass the new thread a copy which won't be mutated by the calling thread. The thread is therefore also going to have to be responsible for deallocating the copy, which means either that the search method does exactly that or that you have a second method.
You have another race condition in using i and tid in all threads. As I have suggested in the comment on your question, I think these variables should be method local.
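A sketch of both changes together, keeping the poster's isADirectory() helper and fixed-size limits, so treat it as an illustration rather than a finished program:

#include <dirent.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int isADirectory(const char *path);    // the poster's helper, assumed to exist

void *search(void *arg)
{
    char *dir = arg;                    // heap copy owned by this thread
    pthread_t tid[500];                 // thread handles are now local to this call
    int i = 0;
    DIR *df = opendir(dir);
    struct dirent *dp;

    while (df && (dp = readdir(df)) != NULL) {
        if (strcmp(dp->d_name, ".") == 0 || strcmp(dp->d_name, "..") == 0)
            continue;                   // avoid recursing into . and ..
        char curFile[100];
        snprintf(curFile, sizeof curFile, "%s/%s", dir, dp->d_name);
        if (isADirectory(curFile)) {
            // give the child thread its own copy; it frees it when done
            pthread_create(&tid[i++], NULL, search, strdup(curFile));
        } else {
            printf("%s\n", curFile);
        }
    }
    if (df)
        closedir(df);
    while (i > 0)
        pthread_join(tid[--i], NULL);   // wait for the children spawned here
    free(dir);                          // release the copy made by our caller
    return NULL;
}

// The first call must also pass a heap copy, e.g. search(strdup("."));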
In general I suggest that you read up on thread safety and learn about data race hazards before you attempt to use threads. It is usually best to avoid threads unless you really need the extra performance.

PThreads Address Space

Is there any way to force threads to have independent address spaces? I'd like to have many threads running loops using local variables - but it seems they all share the same variables.
for example
for (i = args->start; i < args->end; i++) {
    printf("%d\n", i);
    if (quickFind(getReverse(array[i]), 0, size - 1)) {
        printf("%s\n", array[i]);
        //strcpy(array[i], "");
    }
}
i seems to be shared across threads.
Threads share the memory space of their parent process; that is their defining characteristic. If you don't want that to happen you can create a new process, which will have its own address space, using fork().
If you do decide to use fork() remember that, on successfully creating a child process, it returns 0 to the child process and the PID of the child process to the parent process.
Short answer: Yes it's possible for each thread to have its own copy of the variable i.
Long answer:
All threads share the same address space and the OS does not provide any protection to prevent one thread from accessing memory used by another. However, the memory can be partitioned so that it will only be accessed by a single thread rather than being shared by all threads.
By default each thread receives its own stack. So if you allocate a variable on the stack then it will typically only be accessed by a single thread. Note that it is possible to pass a pointer to a stack variable from one thread to another, but this is not recommended and could be the source of the sort of problems that you are seeing.
Another way for a thread to receive its own copy of a variable is using thread local storage. This allows each thread to have its own copy of a global variable.
In summary, although threads share an address space they can work on private data. But you need to be careful with how you are sharing data between threads and avoid data races.
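A minimal sketch of that pattern; the struct and ranges are made up for illustration. Each thread gets its own args struct, and the loop counter i is a local on that thread's stack, so no thread sees another thread's i:

#include <pthread.h>
#include <stdio.h>

struct range { int start, end; };      // hypothetical per-thread arguments

void *worker(void *p)
{
    struct range *args = p;
    // i lives on this thread's stack: every thread has its own copy
    for (int i = args->start; i < args->end; i++)
        printf("%d\n", i);
    return NULL;
}

int main(void)
{
    struct range ranges[2] = { {0, 5}, {5, 10} };
    pthread_t t[2];
    for (int k = 0; k < 2; k++)
        pthread_create(&t[k], NULL, worker, &ranges[k]);
    for (int k = 0; k < 2; k++)
        pthread_join(t[k], NULL);
    return 0;
}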
Just have each thread call into the function separately. Each invocation of a function gets its own instances of all local variables. If this wasn't true, recursion wouldn't work.
If you want to be really lazy, and make no design changes whatsoever (not really recommended), you can modify the declaration of i to something like __thread int i, so that every thread will have its own instance of that variable.
If you were using OpenMP instead of Posix threads, you could also say #pragma omp threadprivate(i) before the first usage of i.

Multi threading and deadlock

I am making a multi-threaded C program which involves the sharing of a global dynamic integer array between two threads. One thread will keep adding elements to it & the other will independently scan the array & free the scanned elements.
Can anyone suggest a way to do that? What I am doing now creates a deadlock.
Could anyone also provide code, or a way to resolve this deadlock, with a full explanation?
For the threads I would use pthread. Compile it with -pthread.
#include <pthread.h>

int *array;

// return type and argument must be `void *` for pthread
void *addfunction(void *p) {
    // add to array
    return NULL;
}

// same for this thread
void *scanfunction(void *p) {
    // scan the array
    return NULL;
}

int main(void) {
    // pthread_t variables needed for pthread
    pthread_t addfunction_t, scanfunction_t; // the names are not important, but use the same ones for pthread_create() and pthread_join()
    // start the threads
    pthread_create(&addfunction_t, NULL, addfunction, NULL); // the third argument is the function you want to run, in this case addfunction()
    pthread_create(&scanfunction_t, NULL, scanfunction, NULL); // same for scanfunction()
    // wait until the threads finish; leave these out to continue while the threads are running
    pthread_join(addfunction_t, NULL);
    pthread_join(scanfunction_t, NULL);
    // code after pthread_join() runs once the threads are no longer running
}
In cases like this, you need to look at the frequency and loading generated by each operation on the array. For instance, if the array is being scanned continually, but only added to once an hour, its worth while finding a really slow, latency-ridden write mechanism that eliminates the need for read locks. Locking up every access with a mutex would be very unsatisfactory in such a case.
Without details of the 'scan' operation, especially duration and frequency, it's not possible to suggest a thread communication strategy for good performance.
Another thing we don't know is the consequences of failure - it may not matter if a new addition is queued up for a while before actually being inserted, or it may.
If you want a 'Computer Science 101' answer with, quite possibly, very poor performance, lock up every access to the array with a mutex.
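For completeness, a minimal sketch of that mutex-per-access approach. The names and growth policy are made up and error handling is omitted; the adding thread calls add_element() and the scanning thread calls scan_elements():

#include <pthread.h>
#include <stdlib.h>

static int *array;
static size_t count, capacity;
static pthread_mutex_t array_lock = PTHREAD_MUTEX_INITIALIZER;

void add_element(int value)
{
    pthread_mutex_lock(&array_lock);
    if (count == capacity) {
        capacity = capacity ? capacity * 2 : 16;
        array = realloc(array, capacity * sizeof *array);
    }
    array[count++] = value;
    pthread_mutex_unlock(&array_lock);
}

void scan_elements(void)
{
    pthread_mutex_lock(&array_lock);
    for (size_t i = 0; i < count; i++) {
        // ... process array[i] ...
    }
    count = 0;                          // everything scanned has been consumed
    pthread_mutex_unlock(&array_lock);
}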
http://www.liblfds.org
Release 6 contains a lock-free queue.
Compiles out of the box for Windows and Linux.

Reading registers or thread local variables of another thread

Is it possible to read the registers or thread local variables of another thread directly, that is, without requiring to go to the kernel? What is the best way to do so?
You can't read the registers, which wouldn't be useful anyway. But reading thread local variables from another thread is easily possible.
Depending on the architecture (e.g. strong memory ordering like on x86_64) you can safely do it even without synchronization, provided that the read value doesn't affect in any way the thread it belongs to. A scenario would be displaying a thread-local counter or similar.
Specifically on Linux on x86_64, as you tagged, you could do it like this:
// A thread-local variable. GCC extension, but since C++11 (and C11's _Thread_local)
// actually part of the language.
__thread int some_tl_var;

// The pointer to the thread local. In itself NOT thread local, as it will be
// read from the outside world.
struct thread_data {
    int *psome_tl_var;
    ...
};

// The function started by pthread_create. The pointer needs to be initialized
// here, and NOT when the storage for the objects used by the thread is allocated
// (otherwise it would point to the thread local of the controlling thread).
void *thread_run(void *arg) {
    struct thread_data *pdata = arg;
    pdata->psome_tl_var = &some_tl_var;
    // Now do some work...
    // ...
    return NULL;
}

void start_threads(void) {
    ...
    struct thread_data other_thread_data[NTHREADS];
    pthread_t thread_ids[NTHREADS];
    for (int i = 0; i < NTHREADS; ++i) {
        pthread_create(&thread_ids[i], NULL, thread_run, &other_thread_data[i]);
    }
    // Now you can access thread i's some_tl_var as
    int value = *(other_thread_data[i].psome_tl_var);
    ...
}
I used something similar for displaying statistics about worker threads. It is even easier in C++: if you create objects around your threads, just make the pointer to the thread local a field in your thread class and access it with a member function.
Disclaimer: This is non portable, but it works on x86_64, linux, gcc and may work on other platforms too.
There's no way to do it without involving the kernel, and in fact I don't think it could be meaningful to read them anyway without some sort of synchronization. If you don't want to use ptrace (which is ugly and non-portable) you could instead choose one of the realtime signals to use for a "send me your registers/TLS" message. The rough idea is:
1. Lock a global mutex for the request.
2. Store the information on what data you want (e.g. a pthread_key_t or a special value meaning registers) from the thread in global variables.
3. Signal the target thread with pthread_kill.
4. In the signal handler (which should have been installed with sigaction and SA_SIGINFO) use the third void * argument to the signal handler (which really points to a ucontext_t) to copy that ucontext_t to the global variable used to communicate back to the requesting thread. This will give it all the register values, and a lot more. Note that TLS is a bit more tricky since pthread_getspecific is not async-signal-safe and technically not legal to run in this context... but it probably works in practice.
5. The signal handler posts a semaphore (this is the ONLY async-signal-safe synchronization function offered by POSIX) indicating to the requesting thread that it's done, and returns.
6. The requesting thread finishes by waiting on the semaphore, then reads the data and unlocks the request mutex.
Note that this will involve at least 1 transition to kernelspace (pthread_kill) in the requesting thread (and maybe another in sem_wait), and 1-3 in the target thread (1 for returning from the signal handler, one for entering the signal handler if it was not already sleeping in kernelspace, and possibly one for sem_post). Still it's probably faster than mucking around with ptrace which is not designed for high-performance usage...
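To make the shape of that concrete, here is a rough, non-portable sketch of the request/reply protocol for the register case. The names are invented, and error handling, EINTR retries and the TLS variant are omitted:

#include <pthread.h>
#include <semaphore.h>
#include <signal.h>
#include <string.h>
#include <ucontext.h>

static pthread_mutex_t req_lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t reply_sem;               // posted by the signal handler
static ucontext_t captured_context;   // filled in by the signal handler

#define DUMP_SIG SIGRTMIN             // real-time signal reserved for this purpose

static void dump_handler(int sig, siginfo_t *info, void *uctx)
{
    (void)sig; (void)info;
    // Copy the interrupted thread's context (registers etc.) for the requester.
    memcpy(&captured_context, uctx, sizeof captured_context);
    sem_post(&reply_sem);             // the only async-signal-safe sync call
}

static void install_dump_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = dump_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(DUMP_SIG, &sa, NULL);
    sem_init(&reply_sem, 0, 0);
}

// Called by the requesting thread.
static void request_registers(pthread_t target, ucontext_t *out)
{
    pthread_mutex_lock(&req_lock);    // one request at a time
    pthread_kill(target, DUMP_SIG);   // interrupt the target thread
    sem_wait(&reply_sem);             // wait for the handler to post
    *out = captured_context;
    pthread_mutex_unlock(&req_lock);
}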
