Managing a mutex in shared memory - C

I'm attempting the simple task of creating a mutex in shared memory. I have the following code to declare a section of shared memory, and attach it to an int*.
int *mutex;
// allocate shared memory for mutex
if ((shmid2 = shmget(IPC_PRIVATE, 4, IPC_CREAT | 0666)) < 0) {
    printf("Could not allocate shared memory for mutex: %d.\n", errno);
    exit(errno);
}
if ((mutex = shmat(shmid2, NULL, 0)) == (int *) -1) {
    printf("Could not attach shared memory for mutex: %d\n", errno);
    exit(errno);
}
// set the mutex to one
mutex[0] = 1;
Now, I attempt to define a critical section, surrounded by locking and unlocking the mutex. (Inside of one of many child processes).
while (*mutex == 0)
    ;           // spin until the mutex reads 1 (unlocked)
mutex[0] = 0;   // claim it
// critical section
...
// end critical section
mutex[0] = 1;   // release it
However, I'm finding that this technique does not work, and two child processes can enter the critical section simultaneously, without much issue (it happens very often). So I'm wondering what I can do to fix this, without the use of pthreads.

Your options are:
Use real semaphores instead of trying to implement them yourself with shared-memory spinlocks: either POSIX semaphores (see sem_overview(7)) or System V semaphores (see semop(2) and related functions).
If you must use a shared-memory spinlock, you will need to use an atomic compare/exchange (or test-and-set). Otherwise, two processes can both see *mutex == 1 at the same time, leave the spin loop together, and each set it to 0, without "noticing" that the other process is doing the same thing.
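A minimal sketch of that second option, using the GCC/Clang __sync builtins on the int already kept in shared memory. The helper names are mine, and the convention is inverted relative to the question: here 0 means unlocked and 1 means locked, so initialize mutex[0] = 0 instead of 1.
void spin_lock(volatile int *lock)
{
    /* atomically set *lock to 1; keep looping while the previous value was already 1 */
    while (__sync_lock_test_and_set(lock, 1))
        ;   /* optionally call sched_yield() here to be kinder to the CPU */
}

void spin_unlock(volatile int *lock)
{
    __sync_lock_release(lock);   /* atomically stores 0, with release semantics */
}
The critical section then becomes spin_lock(mutex); ... spin_unlock(mutex);. A real solution should still prefer semaphores, as described above.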

Related

Named semaphores instead of mutex - readers writers problem without multithreading

My goal is to solve the Readers-Writers[1] problem, but using only isolated processes: one process for a reader and one for a writer. I should use named semaphores, so that it is possible to start additional readers and writers at any time. I also can't use shared memory - pure synchronization only.
More info:
Provide an implementation of two programs, a reader and a writer, so that it is possible to dynamically start new processes while complying with the restrictions.
Pay attention to the properties of concurrent processing: safety and liveness.
Consider also whether your program is deadlock-free.
EDIT: the problem is separated into 3 files.
File 1. Reader:
#include <fcntl.h>      /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>

int main(){
    sem_t *mutex;
    sem_t *write;
    int count = 0;
    mutex = sem_open("/mutex", O_CREAT, 0600, 1);
    write = sem_open("/write", O_CREAT, 0600, 1);
    do{
        sem_wait(mutex);
        count++;
        if (count == 1){
            sem_wait(write);
        }
        sem_post(mutex);
        printf("Critical section in readers\n");
        sem_wait(mutex);
        count--;
        if (count == 0)
            sem_post(write);
        sem_post(mutex);
    }while(1);
}
File 2. Writer
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(){
    sem_t *write;
    write = sem_open("/write", O_CREAT, 0600, 1);
    do{
        sem_wait(write);
        printf("Critical section in writer\n");
        sem_post(write);
    }while(1);
    return 0;
}
File 3. Deleting semaphores
#include <semaphore.h>
#include <stdio.h>

int main(){
    sem_unlink("/mutex");
    sem_unlink("/write");
    printf("Semaphores deleted\n");
    return 0;
}
Problem:
when I compile the reader or writer with gcc -pthread file_name.c and run the result, I don't get any output, as if the code weren't doing anything - the process is running, the cursor is blinking, but nothing happens.
[1]: READERS and WRITERS: The reading room has a capacity of n readers. Readers come to the reading room, take a single place, occupy it for some time, and then leave. After some time they come again and the procedure repeats. The reading room is also used by writers. However, a writer can only work when the reading room is empty, i.e. there must be no other reader nor writer. The writer occupies the room for some time, then leaves, and comes back after a while.
My goal is to solve the Readers-Writers problem, but using only isolated processes: one process for a reader and one for a writer. I should use named semaphores, so that it is possible to start additional readers and writers at any time. I also can't use shared memory - pure synchronization only.
Judging from this limited description, you can probably solve this problem by using named pipes.
I can't use shared memory
The code treats the reader's counter variable (count) as if it were shared between processes. It is not: each process gets its own copy with the same initial value, and changes made by one process are not seen by the others.
To use the functions sem_wait and sem_post, link with the linker option -pthread.
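For example, assuming the three files are saved as reader.c, writer.c and cleanup.c (names chosen here purely for illustration), each one is built into its own executable and then run; the gcc command only compiles the program, so nothing is printed until the resulting binary is actually executed:
gcc -pthread reader.c -o reader
gcc -pthread writer.c -o writer
gcc -pthread cleanup.c -o cleanup
./writer &
./reader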
You mentioned that you have to use "isolated processes", but as far as I know threads are not processes. To create a new process you have to use fork().
Differences, as mentioned here (the full link has a comparison table):
A process is an active program i.e. a program that is under execution.
It is more than the program code as it includes the program counter,
process stack, registers, program code etc. Compared to this, the
program code is only the text section.
A thread is a lightweight process that can be managed independently by
a scheduler. It improves the application performance using
parallelism. A thread shares information like data segment, code
segment, files etc. with its peer threads while it contains its own
registers, stack, counter etc.
In simple words: each process can contain multiple threads ("lightweight processes").
I think you have to use fork() to create new processes because of the word "process" that you mentioned. Also, you mentioned that you need 2 processes (one for the reader and one for the writer), so you have to fork() twice and manage these 2 processes. You can read about fork() here.
edit (semaphore implementation):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>

#define SEMPERM 0600   /* assumed permission bits; not defined in the original snippet */

int initsem(key_t semkey, int initval)
{
    int status = 0, semid;
    union semun {          /* must be declared by the application, per the standard */
        int val;
        struct semid_ds *stat;
        ushort *array;
    } ctl_arg;

    if ((semid = semget(semkey, 1, SEMPERM | IPC_CREAT | IPC_EXCL)) == -1) {
        if (errno == EEXIST)
            semid = semget(semkey, 1, 0);
    } else {               /* if we created it, set the semaphore to the initial value */
        ctl_arg.val = initval;
        status = semctl(semid, 0, SETVAL, ctl_arg);
    }

    if (semid == -1 || status == -1) {   /* failure */
        perror("initsem failed");
        return -1;
    }
    return semid;
}
/* Note: these wrappers reuse the names sem_wait/sem_post, which clash with the
 * POSIX functions declared in <semaphore.h>; rename them if you mix the two. */
int sem_wait(int semid)
{
    struct sembuf p_buf;

    p_buf.sem_num = 0;
    p_buf.sem_op = -1;
    p_buf.sem_flg = SEM_UNDO;
    if (semop(semid, &p_buf, 1) == -1) {
        perror("p(semid) failed");
        exit(1);
    }
    return 0;
}

int sem_post(int semid)
{
    struct sembuf v_buf;

    v_buf.sem_num = 0;
    v_buf.sem_op = 1;
    v_buf.sem_flg = SEM_UNDO;
    if (semop(semid, &v_buf, 1) == -1) {
        perror("v(semid) failed");
        exit(1);
    }
    return 0;
}

how synchronization is done in shared memory data linux c

I was asked in an interview how synchronization is done in shared memory. I said: take a struct containing a flag and the data; test the flag, then change the data.
I took the following program from the internet (see below). Can anyone tell me if there is a better way of synchronizing shared memory?
#define NOT_READY -1
#define FILLED 0
#define TAKEN 1

struct Memory {
    int status;
    int data[4];
};
Assume that the server and client are started in the same directory. The server uses ftok() to generate a key and uses it to request a shared memory segment. Before the shared memory is filled with data, status is set to NOT_READY. After the shared memory is filled, the server sets status to FILLED. Then, the server waits until status becomes TAKEN, meaning that the client has taken the data.
The following is the server program (server.c).
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include "shm-02.h"

int main(int argc, char *argv[])
{
    key_t ShmKEY;
    int ShmID;
    struct Memory *ShmPTR;

    if (argc != 5) {
        printf("Use: %s #1 #2 #3 #4\n", argv[0]);
        exit(1);
    }
    ShmKEY = ftok(".", 'x');
    ShmID = shmget(ShmKEY, sizeof(struct Memory), IPC_CREAT | 0666);
    if (ShmID < 0) {
        printf("*** shmget error (server) ***\n");
        exit(1);
    }
    printf("Server has received a shared memory of four integers...\n");

    ShmPTR = (struct Memory *) shmat(ShmID, NULL, 0);
    if (ShmPTR == (void *) -1) {
        printf("*** shmat error (server) ***\n");
        exit(1);
    }
    printf("Server has attached the shared memory...\n");

    ShmPTR->status = NOT_READY;
    ShmPTR->data[0] = atoi(argv[1]);
    ShmPTR->data[1] = atoi(argv[2]);
    ShmPTR->data[2] = atoi(argv[3]);
    ShmPTR->data[3] = atoi(argv[4]);
    printf("Server has filled %d %d %d %d to shared memory...\n",
           ShmPTR->data[0], ShmPTR->data[1],
           ShmPTR->data[2], ShmPTR->data[3]);
    ShmPTR->status = FILLED;

    printf("Please start the client in another window...\n");
    while (ShmPTR->status != TAKEN)
        sleep(1);
    printf("Server has detected the completion of its child...\n");

    shmdt((void *) ShmPTR);
    printf("Server has detached its shared memory...\n");
    shmctl(ShmID, IPC_RMID, NULL);
    printf("Server has removed its shared memory...\n");
    printf("Server exits...\n");
    exit(0);
}
The client part is similar to the server. It waits until status is FILLED. Then, the client retrieves the data and sets status to TAKEN, informing the server that the data have been taken. The following is the client program (client.c).
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include "shm-02.h"

int main(void)
{
    key_t ShmKEY;
    int ShmID;
    struct Memory *ShmPTR;

    ShmKEY = ftok(".", 'x');
    ShmID = shmget(ShmKEY, sizeof(struct Memory), 0666);
    if (ShmID < 0) {
        printf("*** shmget error (client) ***\n");
        exit(1);
    }
    printf(" Client has received a shared memory of four integers...\n");

    ShmPTR = (struct Memory *) shmat(ShmID, NULL, 0);
    if (ShmPTR == (void *) -1) {
        printf("*** shmat error (client) ***\n");
        exit(1);
    }
    printf(" Client has attached the shared memory...\n");

    while (ShmPTR->status != FILLED)
        ;
    printf(" Client found the data is ready...\n");
    printf(" Client found %d %d %d %d in shared memory...\n",
           ShmPTR->data[0], ShmPTR->data[1],
           ShmPTR->data[2], ShmPTR->data[3]);
    ShmPTR->status = TAKEN;
    printf(" Client has informed server data have been taken...\n");

    shmdt((void *) ShmPTR);
    printf(" Client has detached its shared memory...\n");
    printf(" Client exits...\n");
    exit(0);
}
Can anyone tell me if there is a better way of synchronizing shared memory?
Definitely, yes. I would say the way you waste CPU cycles in busy-wait (while (ShmPTR->status != FILLED) ;) is already a fatal mistake.
Note that POSIX shared memory has a much more sensible interface than the old SysV does. See man 7 shm_overview for details.
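For a rough idea of what that interface looks like, here is a hedged sketch of creating and mapping a POSIX shared memory object; the object name "/demo-shm" and the helper function are illustrative assumptions, not part of this answer (older glibc versions need -lrt at link time):
#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <stdio.h>
#include <sys/mman.h>   /* shm_open, mmap */
#include <unistd.h>     /* ftruncate, close */

static void *open_shared(size_t size)
{
    int fd = shm_open("/demo-shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return NULL; }
    if (ftruncate(fd, (off_t)size) == -1) { perror("ftruncate"); close(fd); return NULL; }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);   /* the mapping stays valid after the descriptor is closed */
    return (p == MAP_FAILED) ? NULL : p;
}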
There are two distinct purposes for synchronization primitives:
Data synchronization
To protect data against concurrent modification, and to ensure each reader gets a consistent view of the data, there are three basic approaches:
Atomic access
Atomic access requires hardware support, and is typically only supported for native machine word sized units (32 or 64 bits).
Mutexes and condition variables
Mutexes are mutually exclusive locks. The idea is to grab the mutex before examining or modifying the value.
Condition variables are basically unordered queues for threads or processes to wait for a "condition". POSIX pthreads library includes facilities for atomically releasing a mutex and waiting on a condition variable. This makes waiting for a dataset to change trivial to implement, if each modifier signals or broadcasts on the condition variable after each modification.
Read-write locks.
An rwlock is a primitive that allows any number of concurrent "read locks", but only one "write lock" to be held on it at any time. The idea is that each reader grabs a read lock before examining the data, and each writer a write lock before modifying it. This works best when the data is more often examined than modified, and a mechanism for waiting for a change to occur is not needed.
Process synchronization
There are situations where threads and processes should wait (block) until some event has occurred. There are two most common primitives used for this:
Semaphores
A POSIX semaphore is basically an opaque nonnegative counter you initialize to whatever (zero or positive value, within the limits set by the implementation).
sem_wait() checks the counter. If it is nonzero, it decrements the counter and continues execution. If the counter is zero, it blocks until another thread/process calls sem_post() on the counter.
sem_post() increments the counter. It is one of the rare synchronization primitives you can use in a signal handler. (A minimal process-shared example is sketched just after this list.)
Barriers
A barrier is a synchronization primitive that blocks until there is a specific number of threads or processes blocking in the barrier, then releases them all at once.
POSIX barriers (pthread_barrier_init(), pthread_barrier_wait(), pthread_barrier_destroy()) are an optional POSIX feature (glibc on Linux does provide them); where they are not available, you can easily achieve the same using a mutex, a counter (counting the number of additional processes needed to release all waiters), and a condition variable.
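As promised above, here is a minimal sketch of the semaphore primitive used across processes: an unnamed POSIX semaphore placed in an anonymous shared mapping and inherited across fork(). This is an illustration under my own assumptions (compile with -pthread; error handling abbreviated), not code from the answer.
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    sem_t *sem = mmap(NULL, sizeof (sem_t), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sem == MAP_FAILED) { perror("mmap"); return 1; }
    sem_init(sem, 1, 0);          /* second argument 1 = shared between processes */

    if (fork() == 0) {            /* child: do some work, then post */
        printf("child: work done\n");
        sem_post(sem);
        return 0;
    }
    sem_wait(sem);                /* parent: blocks until the child posts */
    printf("parent: child has finished\n");
    wait(NULL);
    return 0;
}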
There are many better ways of implementing the said server-client pair (where shared memory contains a flag and some data).
For data integrity and change management, a mutex and one or two condition variables should be used. (If the server may change the data at any time, one condition variable (changed) suffices; if the server must wait until a client has read the data before modifying it, two are needed (changed and observed).)
Here is an example structure you could use to describe the shared memory segment:
#ifndef SHARED_H
#define SHARED_H
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

struct shared_data {
    /* Shared memory data */
};

struct shared {
    pthread_mutex_t lock;
    pthread_cond_t change;   /* Condition variable for clients waiting on data changes */
    pthread_cond_t observe;  /* Condition variable for server waiting on data observations */
    unsigned long changed;   /* Number of times data has been changed */
    unsigned long observed;  /* Number of times current data has been observed */
    struct shared_data data;
};

/* Return the size of 'struct shared', rounded up to a multiple of the page size. */
static inline size_t shared_size_page_aligned(void)
{
    size_t page, size;
    page = (size_t)sysconf(_SC_PAGESIZE);
    size = sizeof (struct shared) + page - 1;
    return size - (size % page);
}

#endif /* SHARED_H */
The changed and observed fields are counters that help avoid time-of-check-to-time-of-use race windows. It is important that a thread does pthread_mutex_lock(&(shared_memory->lock)) before accessing the shared memory, to ensure a consistent view of the data.
If a thread/process examines the data, it should do
shared_memory->observed++;
pthread_cond_broadcast(&(shared_memory->observe));
pthread_mutex_unlock(&(shared_memory->lock));
and if a thread/process modifies the data, it should do
shared_memory->changed++;
shared_memory->observed = 0;
pthread_cond_broadcast(&(shared_memory->change));
pthread_mutex_unlock(&(shared_memory->lock));
to notify any waiters and update the counters, when unlocking the mutex.
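One detail the snippets above do not show is that a mutex and condition variables that live in shared memory and are used by several processes must be initialized with the PTHREAD_PROCESS_SHARED attribute. Here is a hedged sketch of how the creator of the segment could initialize struct shared, and how a client could wait for a change; the helper names shared_init and wait_for_change are my own, chosen to be consistent with the structure above.
/* Run once, by the process that creates the segment. */
static void shared_init(struct shared *s)
{
    pthread_mutexattr_t mattr;
    pthread_condattr_t cattr;

    pthread_mutexattr_init(&mattr);
    pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &mattr);
    pthread_mutexattr_destroy(&mattr);

    pthread_condattr_init(&cattr);
    pthread_condattr_setpshared(&cattr, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->change, &cattr);
    pthread_cond_init(&s->observe, &cattr);
    pthread_condattr_destroy(&cattr);

    s->changed = 0;
    s->observed = 0;
}

/* Client side: block until the data has changed since it was last seen, then examine it. */
static void wait_for_change(struct shared *s, unsigned long *seen)
{
    pthread_mutex_lock(&s->lock);
    while (s->changed == *seen)
        pthread_cond_wait(&s->change, &s->lock);
    *seen = s->changed;
    /* ... examine s->data here ... */
    s->observed++;
    pthread_cond_broadcast(&s->observe);
    pthread_mutex_unlock(&s->lock);
}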

Synchronized access to data in shared memory between two processes [duplicate]

This question already has answers here:
How do I synchronize access to shared memory in LynxOS/POSIX?
I have two processes with data in shared memory, and this data is going to be updated by both of them. I was looking for a locking mechanism between the two processes. With threads it was easy to have a shared mutex lock. In my case, I tried to store the mutex variable in the shared memory, which would then be used by both processes for locking. This didn't work though. How do I share a mutex between two processes? Some say mutexes cannot be shared and that I should use semaphores. Why can semaphores be shared but not mutexes?
It is possible; you have to use the attribute PTHREAD_PROCESS_SHARED:
pthread_mutexattr_t mattr;

pthread_mutexattr_init(&mattr);
pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_SHARED);

// Init the shared mem mutex
if ((rv = pthread_mutex_init(&nshared, &mattr)) != 0) {
    fprintf(stderr, "Failed to initialize the shared mutex.\n");
    return rv;
}
Where the variable nshared is mapped in shared memory.
Take a look at this documentation. Also, keep in mind that the default value for the mutex is to not share it among processes.
Also, take a look at these posts: post1, post2.
Bonus code to check the status of the mutex attribute:
void showPshared(pthread_mutexattr_t *mta) {
    int rc;
    int pshared;

    printf("Check pshared attribute\n");
    rc = pthread_mutexattr_getpshared(mta, &pshared);
    printf("The pshared attribute is: ");
    switch (pshared) {
    case PTHREAD_PROCESS_PRIVATE:
        printf("PTHREAD_PROCESS_PRIVATE\n");
        break;
    case PTHREAD_PROCESS_SHARED:
        printf("PTHREAD_PROCESS_SHARED\n");
        break;
    default:
        printf("! pshared Error !\n");
        exit(1);
    }
    return;
}
I don't remember where I took this piece of code from ... found it! Here is the source of that knowledge.

Busy waiting and shared memory

I am currently trying to implement a single C program that creates a shared memory area for a given process, then forks that process into one child, makes the child write into a given position of the shared memory, and has the parent wait until the child has written to that position. I used a simple busy-waiting approach, making the parent process wait in a while loop until the child finishes its writing. The problem is that it only works when I introduce some delay in that loop. Does anyone have any idea why this is so?
Code:
int shmid;
int *shmptr;
int i, j, ret;
key_t key = SHM_KEY;

// Create shared memory segment
if ((shmid = shmget(key, SHM_SIZE, IPC_CREAT | 0600)) < 0)
{
    printf("shmget error: %s\n", strerror(errno));
    return -1;
}
// Attach shared memory segment
if ((shmptr = shmat(shmid, 0, 0)) == (void *) -1)
{
    puts("shmat error");
    return -1;
}

shmptr[6] = '%';
ret = fork();
if (ret > 0)
{   /* parent */
    /* here is the loop that implements the busy waiting approach */
    while (shmptr[6] != '^') {
        sleep(1);
    }
    for (i = 0; i < 7; i++)
        printf("%c", shmptr[i]);
    puts("");

    int status = 0;
    wait(&status);
}
else
{   /* child */
    shmptr[0] = 's';
    shmptr[1] = 'h';
    shmptr[2] = 'a';
    shmptr[3] = 'r';
    shmptr[4] = 'e';
    shmptr[5] = 'd';
    /* tell parent process it has finished its writing */
    shmptr[6] = '^';
    exit(0);
}
Volatile (see the earlier comment) will probably only work in a single-core scenario. Assuming you are running on a CPU with more than one core, you will need to treat access to every location in the shared memory region atomically. If you were using a C++11-compliant compiler, each location of the region would need to be of type std::atomic<int>.
Since you are probably using C, not C++, and using GCC, consider using GCC's atomic builtins.
So, your
shmptr[0] = 's';
statement should be replaced with an atomic store, for example:
__atomic_store_n(&shmptr[0], 's', __ATOMIC_SEQ_CST);
(the older __sync family has no plain atomic store; __sync_lock_test_and_set(&shmptr[0], 's') is the closest equivalent). Do the same for all of the stores, and use an atomic load (__atomic_load_n) in the loop that checks for the flag value.
The semaphore in another answer might work, but there are no guarantees that the other locations will have made it through the CPU's write-posting circuitry, through the cache controller on the source, and so on through the receiving CPU's controller, especially if the addresses being accessed span cache lines.
I would also recommend doing a sleep(0) or yield() of some sort to allow other programs to get time slices on the core that the main program is running on, otherwise, you will waste CPU resources.
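For illustration, the parent/child handshake from the question could be written with the newer __atomic builtins roughly as follows; this is a sketch of the idea, not the poster's code, and the memory-order choices are mine. The release/acquire pairing guarantees that the writes to shmptr[0]..shmptr[5] are visible to the parent before it sees the flag.
/* child: publish the data, then set the flag with release semantics */
shmptr[0] = 's';
/* ... fill shmptr[1]..shmptr[5] ... */
__atomic_store_n(&shmptr[6], '^', __ATOMIC_RELEASE);

/* parent: spin (politely) until the flag becomes visible, with acquire semantics */
while (__atomic_load_n(&shmptr[6], __ATOMIC_ACQUIRE) != '^')
    sched_yield();   /* declared in <sched.h>; avoids hogging the core */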
You want to synchronise access to the shared memory (SHM).
This can, for example, be done by using a semaphore.
Before fork()ing off the child, call sem_open().
Make the parent wait on sem_wait() prior to reading the SHM.
Have the child call sem_post() when done writing the SHM.
I guess that what is happening is that the child is terminating too quickly.
You might use a non-hanging waitpid(2) and add it in your loop:
/* here is the loop that implements the busy waiting approach */
int status = 0;
while (shmptr[6] != '^') {
    if (waitpid(ret, &status, WNOHANG) == ret)
        break;
    sleep(1);
}
However, as I commented, busy waiting is always bad in Linux user-space programs (at the very least it is stressing your system). Read sem_overview(7), or alternatively, set up a pipe(7), an eventfd(2) or a signalfd(2) and poll(2) it. Or set up a SIGCHLD signal handler (read signal(7) carefully) which just sets a volatile sig_atomic_t flag to be tested in your loop.
You should also declare volatile int *shmptr; because otherwise the compiler might optimize away re-reads of it.
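A sketch of the SIGCHLD approach mentioned above; the helper names are illustrative and error handling is omitted.
#include <signal.h>
#include <string.h>

static volatile sig_atomic_t child_done = 0;

static void on_sigchld(int sig)
{
    (void)sig;
    child_done = 1;      /* setting a sig_atomic_t flag is async-signal-safe */
}

static void install_sigchld_handler(void)   /* call before fork() */
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigchld;
    sigaction(SIGCHLD, &sa, NULL);
}
The parent's loop can then test !child_done alongside the shared-memory flag; without SA_RESTART, a blocking sleep(1) is interrupted as soon as the signal arrives.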

fork and exec with respect to locking shared memory - C

So I'm just wondering if I had a simple task to do in concurrency, how would I do this with multiple processes using fork() and exec() from a parent process, while locking some aspects of the parent process' memory (so that they don't overwrite each other), but making it available to those processes later?
I know I can do this with POSIX threads with their mutex locks, but what's the process equivalent to that? Is there a way to "lock" shared memory amongst threads? And then would I have to "wait()" for the other threads to finish those locked areas of memory before the other threads could access it?
If you're using the pthreads implementation of mutexes, you would still use them to synchronize between processes... you would place them in shared memory. Initializing a pthread mutex in shared memory addresses this.
You can also use a simple pipe to synchronize access -- pre-fill the pipe with a token and require a successful read of the token to permit resource access. Then write the token back into the pipe in order to release the resource.
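A hedged sketch of that pipe-as-mutex idea (the helper names are mine; it works across fork() because the child inherits the pipe's file descriptors, and error handling is omitted):
#include <unistd.h>

static int token_fd[2];            /* token_fd[0] = read end, token_fd[1] = write end */

static void token_init(void)       /* call once, before fork() */
{
    pipe(token_fd);
    write(token_fd[1], "T", 1);    /* pre-fill the pipe with a single token */
}

static void token_acquire(void)    /* blocks until the token is available */
{
    char c;
    read(token_fd[0], &c, 1);
}

static void token_release(void)
{
    write(token_fd[1], "T", 1);    /* put the token back */
}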
First: if you call exec and it succeeds, your process image is overwritten. You will lose any shared memory mappings and will need to set them up again with your favourite shared memory mechanism (e.g. POSIX shared memory, shm_open).
If you fork, then any memory that was mapped shared will remain shared. That means you can place your favourite mutex (e.g. pthread_mutex_t, sem_t) into it and use it with the standard functions that go with it.
void * shared_memory = mmap(
    NULL                      // anywhere
    , sysconf(_SC_PAGESIZE)   // mmap only works in chunks of pages,
                              // typically 0x1000
    , PROT_READ | PROT_WRITE  // read-write
    , MAP_SHARED              // shared
    | MAP_ANONYMOUS           // anonymous, non-file backed
#ifdef MAP_HASSEMAPHORE
    | MAP_HASSEMAPHORE        // OS X requires this flag in case you
                              // intend to have semaphores in that segment
#endif
    , -1                      // no file backing
    , 0
);
if (shared_memory == MAP_FAILED) {
    perror("mmap");
    abort();
}

// we use that memory to place a mutex there; for use across processes
// it must be initialized with the PTHREAD_PROCESS_SHARED attribute
pthread_mutex_t * mutex = shared_memory;
pthread_mutexattr_t mattr;
pthread_mutexattr_init(&mattr);
pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_SHARED);
pthread_mutex_init(mutex, &mattr);
pthread_mutexattr_destroy(&mattr);

pid_t pid = fork();
if (pid < 0) {
    perror("fork");
    abort();
}
if (!pid) {
    // child goes here
    // use the mutex here
} else {
    // parent goes here
    // use the mutex here
}
