Accessing static functions/variables from different threads - C

I use a third-party library written in C. It is designed to run as a singleton and contains plenty of static functions, variables, and a user interface. I need to be able to run multiple instances of it so that they do not interfere with each other. For example, if one thread sets a static variable
static int index = 0;
index = 10;
the second thread should still see index = 0.
I am not sure whether this is even possible to implement.

What you are asking is not possible.
Let's assume for pedagogical purposes that you are on a unix machine.
Any process (such as the executable ./a.out) has the following memory layout:
Text
Data
  Initialized
  Uninitialized
Heap
Stack
When you create a thread, it shares all of these memory segments except the stack (essentially, each thread gets its own stack pointer).
Moreover, static variables are stored in the data segment (in your case, the initialized data segment), which is shared, so when one thread changes such a variable, the change is visible to all other threads as well.
So threads only have the following things local to themselves:
Stack pointer
Program counter
Registers
Image source: llnl.gov
Hope it helped :-).
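To make the shared-data-segment point above concrete, here is a minimal sketch (the names shared_index and worker are illustrative, not from the library in question): a write that one thread makes to a static variable is immediately visible to every other thread in the process.

#include <pthread.h>
#include <stdio.h>

static int shared_index = 0;            /* lives in the initialized data segment */

static void *worker(void *arg)
{
    (void)arg;
    shared_index = 10;                  /* every thread in the process sees this write */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    printf("shared_index = %d\n", shared_index);   /* prints 10, not 0 */
    return 0;
}

Compile with -pthread; the printed value is 10 regardless of which thread performed the write.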

Related

Shared memory in C with LinkedList

I have to develop a mini-shell in C. In this project we have to deal with local variables and environment variables, so we can have two processes that share environment variables: a variable created in the child can be seen in the parent, and vice versa.
My teacher says:
The environment variables are stored in a shared memory area that is created by the first copy of your shell that runs and is initialized with all the variables defined in the envp array. The last copy of your shell running at any given time must destroy that area. The shared memory area is to be managed by subdivision (your own allocation scheme inside it). It is a concurrent-access area with multiple simultaneous reads possible but only one write at a given time; the implementation must give priority to writers.
So we need shared memory with a linked list whose nodes contain:
the name of the variable (char*)
the int returned by shmget()
the char* returned by shmat(), i.e. the value of the variable
But when we create an environment variable in the parent, it does not appear in the child.
So I think this method is not correct; how can we model this problem?
Maybe not use a linked list?
Thank you.
TF.
I understand now!
My teacher told me to create a shared memory area with a fixed size. It's simple now.
Thank you everyone.
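For reference, a minimal sketch of the fixed-size approach, assuming System V shared memory; SHELL_SHM_KEY, SHELL_SHM_SIZE and env_area are illustrative names, not part of the assignment.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SHELL_SHM_KEY  0x5348454C   /* arbitrary example key */
#define SHELL_SHM_SIZE 4096         /* the fixed size chosen up front */

int main(void)
{
    /* Create the segment if it does not exist yet, otherwise just get it. */
    int shmid = shmget(SHELL_SHM_KEY, SHELL_SHM_SIZE, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    char *env_area = shmat(shmid, NULL, 0);   /* attached mappings survive fork() */
    if (env_area == (void *)-1) { perror("shmat"); return 1; }

    /* Parent and child both see the same bytes here, so a variable written
     * by either one is visible to the other. */
    strcpy(env_area, "PATH=/usr/bin");
    printf("%s\n", env_area);

    shmdt(env_area);
    return 0;
}

The linked list of per-variable shmget() segments is then unnecessary: you subdivide this single fixed block yourself, which is what "managed by subdivision" asks for.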

How to read a variable's value from RAM?

I've written a program that uses dynamic memory allocation. I do not call free() to release the memory, so the variable's value is still present at that address.
Now I want to reuse this value, and I want to see, from another process, all the variables' values that are present in RAM.
Is it possible?
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *fptr = malloc(sizeof(int));    /* intentionally never freed */
    *fptr = 4;
    printf("%d\t%p\n", *fptr, (void *)fptr);
    while (1) {
        /* spin forever so the process (and its memory) stays alive */
    }
}
And I want another program to read the value that fptr points to, since it is still in memory because main() never returns. How can I do this?
This question shows misconceptions on at least two topics.
The first is virtual address spaces and memory protection, as already addressed by RobertL in his answer. On a modern operating system, you just can't access memory belonging to another process (even if you knew the physical address, which you don't because all user space processes work on addresses in their private virtual address space). The result of trying to access something not mapped into your address space will be a segmentation fault. Read more on Wikipedia.
The second is the scope of the C standard. It doesn't know about processes. C is defined in terms of an abstract machine executing your program (and only this program). Scopes and lifetimes of variables are defined, and the respective maxima are global (file) scope and static storage duration. So yes, your variable will continue to live as long as your program runs, but its scope is limited to this program.
When you understand that, you see: even on a platform using a single global address space and no memory protection at all, you could never access the variable of another program in terms of the C standard. You could probably pass a pointer value somehow to your other program and use that, maybe it would work, but it would be undefined behavior.
That's why operating systems provide means for inter-process communication, like shared memory (which comes close to what you seem to want) and pipes.
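A hedged sketch of the shared-memory route mentioned above, using POSIX shm_open/mmap; the name "/demo_int" is an arbitrary choice the two programs would have to agree on. This is the writer side:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_int", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(int)) == -1) { perror("ftruncate"); return 1; }

    int *value = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (value == MAP_FAILED) { perror("mmap"); return 1; }

    *value = 4;        /* published through the shared object, not by "RAM spying" */
    pause();           /* keep the mapping alive, like the while(1) in the question */
    return 0;
}

The reader is symmetrical: it calls shm_open("/demo_int", O_RDWR, 0), mmaps it, and reads *value. On older glibc you may need to link with -lrt.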
When you return from main(), the process releases all acquired resources, so you can't. Have a look at this.
First of all, when your process exits, the memory it allocated is freed entirely. You can circumvent this by having one of your processes write to a file (like a .dat file) and the other read from it. There are other ways, too.
But generally speaking, in normal cases, when your process terminates its memory is freed.
If you try accessing memory from another process from within your process, you will most likely get a segmentation fault, as most operating systems protect processes from messing with each other's memory.
It depends on the operating system. Most modern multi-tasking operating systems protect processes from each other. The hardware is set up to prevent processes from seeing other processes' memory. On Linux and Unix, for example, to communicate between programs through memory you will need to use operating system services for "inter-process" communication, such as shared memory, semaphores, or pipes.
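And a hedged sketch of the simplest of those services, a pipe: the value 4 is handed from a child process to its parent explicitly, instead of the parent trying to read the child's memory.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                          /* child: writes the value */
        int value = 4;
        close(fds[0]);
        write(fds[1], &value, sizeof value);
        close(fds[1]);
        _exit(0);
    }

    int received;                               /* parent: reads the value */
    close(fds[1]);
    read(fds[0], &received, sizeof received);
    close(fds[0]);
    wait(NULL);
    printf("received %d\n", received);
    return 0;
}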

Initialize an int in a struct for shared memory

I have an int that keeps track of words in a queue, but I am working with shared memory that should persist across multiple executions. Therefore, I can't simply state
int words = 0;
as a global variable because it will be overwritten each time I run the program. My struct currently looks like this
typedef struct {
    /* List of words stored in the FIFO. */
    Word list[MAX_WORDS];
    int words;
} FIFO;
I only need to initialize 'words' to 0 for the first run, and after that the value should persist through shared memory, but I'm not sure how to do this without it being reset to 0 each run.
Any help would be awesome, thanks!
When you create a new shared memory area, it's initialised to zeros automatically. That's the case for both shmget on Linux and CreateFileMapping on Windows; it's likely the same on other systems, but you'll have to check the docs. In practice that means that, as long as you have a proper locking scheme implemented, your app will only ever see two states of the shared memory: either all zeros (you're the first one to open it), or already initialised (another instance opened the shared memory before you).
I'm not sure you really want shared memory though. If by "should persist across multiple executions" you mean executions of multiple processes at the same time, then this answer applies. But if you want to run your app, then shut it down, then run it again and have the same FIFO available, then you need to just write it into some file, or an embedded/external database.
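To illustrate the zero-initialisation point, here is a sketch assuming System V shared memory; FIFO_KEY, MAX_WORDS and the placeholder Word type are illustrative stand-ins for whatever the real program uses.

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define FIFO_KEY  0x46494630            /* arbitrary example key */
#define MAX_WORDS 64                    /* placeholder for the real definition */
typedef struct { char text[32]; } Word; /* placeholder for the real Word type  */

typedef struct {
    Word list[MAX_WORDS];
    int words;
} FIFO;

int main(void)
{
    /* The first run creates the segment and the kernel hands it back
     * zero-filled, so fifo->words already starts at 0.  Later runs get
     * the existing segment and see whatever count was left there. */
    int shmid = shmget(FIFO_KEY, sizeof(FIFO), IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    FIFO *fifo = shmat(shmid, NULL, 0);
    if (fifo == (void *)-1) { perror("shmat"); return 1; }

    printf("words so far: %d\n", fifo->words);   /* 0 only on the first run */
    fifo->words++;

    shmdt(fifo);
    return 0;
}

The segment persists until it is removed with shmctl(shmid, IPC_RMID, NULL) or the system reboots, which is what gives the count its persistence between runs.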

CUDA shared memory not faster than global?

I have a kernel function in which I need to compare bytes. The area I want to search is divided into blocks, so an array of 4k bytes is divided into 4k/256 = 16 blocks. Each thread in a block reads the array at idx and compares it with another array that holds what I am searching for. I've done this in two ways:
1. Compare the data in global memory, although threads in a block often need to read the same address.
2. Copy the data from global memory to shared memory, and compare the bytes in shared memory in the same way. The same-address reads are still a problem.
Copy to shared memory looks like this:
myArray[idx] = global[someIndex-idx];
whatToSearch[idx] = global[someIndex+idx];
The rest of the code is the same; in the second version the operations are simply performed on the shared arrays.
But the first option is about 10% faster than the one with shared memory. Why? Thank you for any explanation.
If you are only using the data once and there is no data reuse between different threads in a block, then using shared memory will actually be slower. The reason is that when you copy data from global memory to shared, it still counts as a global transaction. Reads are faster when you read from shared memory, but it doesn't matter because you already had to read the memory once from global, and the second step of reading from shared memory is just an extra step that doesn't provide anything of value.
So, the key point is that using shared memory is only useful when you need to access the same data more than once (whether from the same thread, or from different threads in the same block).
You are using shared memory to save on accesses to global memory, but each thread is still making two accesses to global memory, so it won't be faster. The speed drop is probably because the threads that access the same location in global memory within a block try to read it into the same location in shared memory, and this needs to be serialized.
I'm not sure of exactly what you are doing from the code you posted, but you should ensure that the number of times global is read from and written to, aggregated across all the threads in a block, is significantly lower when you use shared memory. Otherwise you won't see a performance improvement.

C: how does pthread thread-specific data work?

I'm not sure how pthread thread-specific data works. Considering the following code (found on the web), does this mean I can create, for example, 5 threads in main, call func in only some of them (say 2), so that those threads would have the data for key set to something (ptr = malloc(OBJECT_SIZE)) while the other threads would have the same key existing but with a NULL value?
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void
make_key(void)
{
    (void) pthread_key_create(&key, NULL);
}

void
func(void)
{
    void *ptr;

    (void) pthread_once(&key_once, make_key);
    if ((ptr = pthread_getspecific(key)) == NULL) {
        ptr = malloc(OBJECT_SIZE);
        ...
        (void) pthread_setspecific(key, ptr);
    }
    ...
}
Some explanation of how thread-specific data works, and how it might be implemented in pthreads (in a simple way), would be appreciated!
Your reasoning is correct. These calls are for thread-specific data. They're a way of giving each thread a "global" area where it can store what it needs, but only if it needs it.
The key is shared among all threads, since it's created with pthread_once() the first time it's needed, but the value given to that key is different for each thread (unless it remains set to NULL). By having the value a void* to a memory block, a thread that needs thread-specific data can allocate it and save the address for later use. And threads that don't call a routine that needs thread-specific data never waste memory since it's never allocated for them.
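A minimal, hedged demonstration of that behaviour (the worker function and the two integer payloads are just illustrative): both threads use the same key, yet each reads back only the value it stored itself.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void make_key(void)
{
    pthread_key_create(&key, free);     /* free() runs for each thread's value at thread exit */
}

static void *worker(void *arg)
{
    pthread_once(&key_once, make_key);

    int *ptr = malloc(sizeof *ptr);
    *ptr = (int)(long)arg;              /* this thread's private value */
    pthread_setspecific(key, ptr);

    printf("thread %ld sees %d\n", (long)arg, *(int *)pthread_getspecific(key));
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)1L);
    pthread_create(&b, NULL, worker, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}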
The one area where I have used them is to make a standard C library thread-safe. The strtok() function (as opposed to a thread-safe strtok_r() which was considered an abomination when we were doing this) in an implementation I was involved in used almost this exact same code the first time it was called, to allocate some memory which would be used by strtok() for storing information for subsequent calls. These subsequent calls would retrieve the thread-specific data to continue tokenizing the string without interfering with other threads doing the exact same thing.
It meant users of the library didn't have to worry about cross-talk between threads - they still had to ensure a single thread didn't call the function until the last one had finished but that's the same as with single-threaded code.
It allowed us to give a 'proper' C environment to each thread running in our system without the usual "you have to call these special non-standard re-entrant routines" limitations that other vendors imposed on their users.
As for implementation, from what I remember of DCE user-mode threads (which I think were the precursor to the current pthreads), each thread had a single structure which stored things like instruction pointer, stack pointer, register contents and so on. It was a very simple matter to add one pointer to this structure to achieve very powerful functionality at minimal cost. The pointer pointed to an array (a linked list in some implementations) of key/pointer pairs, so each thread could have multiple keys (e.g., one for strtok(), one for rand()).
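A conceptual sketch of that layout, with made-up names (KEYS_MAX, tsd_slot, thread_control_block); real pthread implementations differ in the details:

#include <stddef.h>

#define KEYS_MAX 128                     /* stand-in for the real PTHREAD_KEYS_MAX */

struct tsd_slot {
    int   in_use;                        /* has pthread_key_create() handed this key out? */
    void *value;                         /* what pthread_setspecific() stored             */
};

struct thread_control_block {
    void *stack_pointer;                 /* ...plus program counter, registers, etc.      */
    struct tsd_slot tsd[KEYS_MAX];       /* the per-thread table of key/pointer pairs
                                            (the description above keeps a pointer to it) */
};

/* pthread_getspecific() then reduces to an index into the calling
 * thread's own table. */
static void *tsd_get(const struct thread_control_block *self, int key)
{
    return (key >= 0 && key < KEYS_MAX) ? self->tsd[key].value : NULL;
}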
The answer to your first question is yes. In simple terms, it allows each thread to allocate and save its own data. It is roughly equivalent to each thread simply allocating its own data structure and passing it around; the API saves you the trouble of passing the thread-local structure to all subfunctions, and allows you to look it up on demand instead.
The implementation really doesn't matter all that much (it may vary per-OS), as long as the results are the same.
You can think of it as a two-level hashmap. The key specifies which thread-local "variable" you want to access, and the second level might perform a thread-id lookup to request the per-thread value.
