I'm working on a program that stores a struct in a shared memory segment so I can use the data in the main process and multiple threads simultaneously. I'm also using mutexes (pthread_mutex_lock() / pthread_mutex_unlock()) to guarantee mutual exclusion between the processes that access that shared memory segment.
Firstly, a thread locks the struct and fills it with information:
void *client_thread(void *argc) {
    thread_input *t_input = (thread_input *) argc; // struct holding data useful to the thread, such as the shm segment id
    shm_data *shmdata; // <-- shared struct
    shmdata = shmat(t_input->memid, 0, 0);
    //...
    pthread_mutex_lock(t_input->mtx);
    shmdata->station = malloc(strlen(data.station) + 1);
    strcpy(shmdata->station, data.station);
    shmdata->temperature = data.temperature;
    shmdata->humidity = data.humidity;
    shmdata->pressure = data.pressure;
    shmdata->precipitation = data.precipitation;
    pthread_mutex_unlock(t_input->mtx);
    //...
}
This works fine. Then, using a synchronization semaphore, the main process tries to access that shared struct, for example to print its first member, but it blocks on that line:
Main
//...
shm_data *shmdata; // <-- shared struct
shmdata = shmat(memid, 0, 0);
pthread_mutex_lock(&mtx);
write(1, "Station:\n", strlen("Station:\n")); // Prints fine
write(1, shmdata->station, strlen(shmdata->station)); // <-- Program blocks here
write(1, "Done!\n", strlen("Done!\n")); // Program does not reach this point or any further
pthread_mutex_unlock(&mtx);
//...
I have verified that the memory id is the same in both the thread and the main process, and I guarantee that the thread code executes before the main process code. Why does the program block there?
Related
I was asked in an interview how synchronization is done in shared memory. I answered: take a struct containing a flag and the data; test the flag, then change the data.
I took the following program from the internet. Can anyone tell me if there is a better way of synchronizing access to shared memory?
#define NOT_READY -1
#define FILLED 0
#define TAKEN 1
struct Memory {
    int status;
    int data[4];
};
Assume that the server and client are in the current directory. The server uses ftok() to generate a key and uses it for requesting a shared memory. Before the shared memory is filled with data, status is set to NOT_READY. After the shared memory is filled, the server sets status to FILLED. Then, the server waits until status becomes TAKEN, meaning that the client has taken the data.
The following is the server program (server.c):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>    /* sleep() */
#include "shm-02.h"
int main(int argc, char *argv[])
{
key_t ShmKEY;
int ShmID;
struct Memory *ShmPTR;
if (argc != 5) {
printf("Use: %s #1 #2 #3 #4\n", argv[0]);
exit(1);
}
ShmKEY = ftok(".", 'x');
ShmID = shmget(ShmKEY, sizeof(struct Memory), IPC_CREAT | 0666);
if (ShmID < 0) {
printf("*** shmget error (server) ***\n");
exit(1);
}
printf("Server has received a shared memory of four integers...\n");
ShmPTR = (struct Memory *) shmat(ShmID, NULL, 0);
if (ShmPTR == (void *) -1) {
printf("*** shmat error (server) ***\n");
exit(1);
}
printf("Server has attached the shared memory...\n");
ShmPTR->status = NOT_READY;
ShmPTR->data[0] = atoi(argv[1]);
ShmPTR->data[1] = atoi(argv[2]);
ShmPTR->data[2] = atoi(argv[3]);
ShmPTR->data[3] = atoi(argv[4]);
printf("Server has filled %d %d %d %d to shared memory...\n",
ShmPTR->data[0], ShmPTR->data[1],
ShmPTR->data[2], ShmPTR->data[3]);
ShmPTR->status = FILLED;
printf("Please start the client in another window...\n");
while (ShmPTR->status != TAKEN)
sleep(1);
printf("Server has detected the completion of its child...\n");
shmdt((void *) ShmPTR);
printf("Server has detached its shared memory...\n");
shmctl(ShmID, IPC_RMID, NULL);
printf("Server has removed its shared memory...\n");
printf("Server exits...\n");
exit(0);
}
The client part is similar to the server: it waits until status is FILLED, then retrieves the data and sets status to TAKEN, informing the server that the data has been taken. The following is the client program (client.c):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include "shm-02.h"
int main(void)
{
key_t ShmKEY;
int ShmID;
struct Memory *ShmPTR;
ShmKEY = ftok(".", 'x');
ShmID = shmget(ShmKEY, sizeof(struct Memory), 0666);
if (ShmID < 0) {
printf("*** shmget error (client) ***\n");
exit(1);
}
printf(" Client has received a shared memory of four integers...\n");
ShmPTR = (struct Memory *) shmat(ShmID, NULL, 0);
if (ShmPTR == (void *) -1) {
printf("*** shmat error (client) ***\n");
exit(1);
}
printf(" Client has attached the shared memory...\n");
while (ShmPTR->status != FILLED)
;
printf(" Client found the data is ready...\n");
printf(" Client found %d %d %d %d in shared memory...\n",
ShmPTR->data[0], ShmPTR->data[1],
ShmPTR->data[2], ShmPTR->data[3]);
ShmPTR->status = TAKEN;
printf(" Client has informed server data have been taken...\n");
shmdt((void *) ShmPTR);
printf(" Client has detached its shared memory...\n");
printf(" Client exits...\n");
exit(0);
}
Can anyone tell if there is better way of synchronization in shared memory?
Definitely, yes. The way you waste CPU cycles in a busy-wait loop (while (ShmPTR->status != FILLED) ;) is already a fatal mistake.
Note that POSIX shared memory has a much more sensible interface than the old SysV does. See man 7 shm_overview for details.
There are two distinct purposes for synchronization primitives:
Data synchronization
To protect data against concurrent modification, and to ensure each reader gets a consistent view of the data, there are three basic approaches:
Atomic access
Atomic access requires hardware support, and is typically only supported for native machine word sized units (32 or 64 bits).
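As an illustration, C11's `<stdatomic.h>` exposes this hardware support directly. A minimal sketch (the counter and function names are mine):

```c
#include <stdatomic.h>

/* A counter that can be read and updated concurrently without a lock.
 * Plain int accesses are not guaranteed atomic; _Atomic ones are. */
static _Atomic int counter = 0;

/* Atomically add one and return the new value; safe from any thread. */
int bump(void)
{
    return atomic_fetch_add(&counter, 1) + 1;
}

/* Atomically read the current value. */
int current(void)
{
    return atomic_load(&counter);
}
```

Note that atomic_fetch_add() returns the previous value, which is why the wrapper adds one.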
Mutexes and condition variables
Mutexes are mutually exclusive locks. The idea is to grab the mutex before examining or modifying the value.
Condition variables are basically unordered queues for threads or processes to wait for a "condition". POSIX pthreads library includes facilities for atomically releasing a mutex and waiting on a condition variable. This makes waiting for a dataset to change trivial to implement, if each modifier signals or broadcasts on the condition variable after each modification.
Read-write locks
An rwlock is a primitive that allows any number of concurrent "read locks", but only one "write lock" to be held on it at any time. The idea is that each reader grabs a read lock before examining the data, and each writer a write lock before modifying it. This works best when the data is more often examined than modified, and a mechanism for waiting for a change to occur is not needed.
Process synchronization
There are situations where threads and processes should wait (block) until some event has occurred. The two most common primitives used for this are:
Semaphores
A POSIX semaphore is basically an opaque nonnegative counter, initialized to any value you like (zero or positive, within limits set by the implementation).
sem_wait() checks the counter. If it is nonzero, it decrements the counter and continues execution. If the counter is zero, it blocks until another thread/process calls sem_post() on the counter.
sem_post() increments the counter. It is one of the rare synchronization primitives you can use in a signal handler.
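A minimal producer/consumer sketch of the sem_wait()/sem_post() pairing (the one-slot channel and its names are mine):

```c
#include <semaphore.h>

/* Counting semaphore: number of items the consumer may take.
 * One data slot for simplicity. */
static sem_t items;
static int item;

void channel_init(void)
{
    sem_init(&items, 0, 0);   /* 0 = not shared between processes; count 0 */
}

/* Producer: publish the data, then wake the consumer. */
void produce(int value)
{
    item = value;
    sem_post(&items);         /* increment the count */
}

/* Consumer: blocks in sem_wait() until the count is nonzero. */
int consume(void)
{
    sem_wait(&items);         /* decrement, blocking while zero */
    return item;
}
```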
Barriers
A barrier is a synchronization primitive that blocks until there is a specific number of threads or processes blocking in the barrier, then releases them all at once.
POSIX barriers (pthread_barrier_init(), pthread_barrier_wait(), pthread_barrier_destroy()) are implemented by modern glibc on Linux, but even without them you can easily achieve the same using a mutex, a counter (counting the number of additional processes needed to release all waiters), and a condition variable.
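A sketch of such a mutex-plus-condition-variable barrier (the struct and function names are mine):

```c
#include <pthread.h>

/* A minimal barrier: 'count' threads must arrive in barrier_wait()
 * before any of them is released. */
struct barrier {
    pthread_mutex_t lock;
    pthread_cond_t  all_here;
    int             waiting;   /* threads currently blocked */
    int             count;     /* arrivals needed to release */
    unsigned long   cycle;     /* generation counter; guards against
                                  spurious wakeups and reuse */
};

void barrier_init(struct barrier *b, int count)
{
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->all_here, NULL);
    b->waiting = 0;
    b->count = count;
    b->cycle = 0;
}

void barrier_wait(struct barrier *b)
{
    pthread_mutex_lock(&b->lock);
    unsigned long my_cycle = b->cycle;
    if (++b->waiting == b->count) {
        /* Last arrival: start a new cycle and wake everyone. */
        b->waiting = 0;
        b->cycle++;
        pthread_cond_broadcast(&b->all_here);
    } else {
        while (b->cycle == my_cycle)
            pthread_cond_wait(&b->all_here, &b->lock);
    }
    pthread_mutex_unlock(&b->lock);
}
```

The cycle counter is the predicate the waiters re-check, so a thread released from one barrier round cannot race back in and consume a wakeup from the next.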
There are many better ways of implementing the said server-client pair (where shared memory contains a flag and some data).
For data integrity and change management, a mutex and one or two condition variables should be used. (If the server may change the data at any time, one condition variable (changed) suffices; if the server must wait until a client has read the data before modifying it, two are needed (changed and observed).)
Here is an example structure you could use to describe the shared memory segment:
#ifndef SHARED_H
#define SHARED_H
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
struct shared_data {
/* Shared memory data */
};
struct shared {
pthread_mutex_t lock;
pthread_cond_t change; /* Condition variable for clients waiting on data changes */
pthread_cond_t observe; /* Condition variable for server waiting on data observations */
unsigned long changed; /* Number of times data has been changed */
unsigned long observed; /* Number of times current data has been observed */
struct shared_data data;
};
/* Return the size of 'struct shared', rounded up to a multiple of page size. */
static inline size_t shared_size_page_aligned(void)
{
size_t page, size;
page = (size_t)sysconf(_SC_PAGESIZE);
size = sizeof (struct shared) + page - 1;
return size - (size % page);
}
#endif /* SHARED_H */
The changed and observed fields are counters that help avoid any time-of-check-to-time-of-use race windows. It is important that, before the shared memory is accessed, the thread does pthread_mutex_lock(&(shared_memory->lock)) to ensure a consistent view of the data.
If a thread/process examines the data, it should do
shared_memory->observed++;
pthread_cond_broadcast(&(shared_memory->observe));
pthread_mutex_unlock(&(shared_memory->lock));
and if a thread/process modifies the data, it should do
shared_memory->changed++;
shared_memory->observed = 0;
pthread_cond_broadcast(&(shared_memory->change));
pthread_mutex_unlock(&(shared_memory->lock));
to notify any waiters and update the counters, when unlocking the mutex.
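Putting the pieces together, a reader waiting for fresh data might look like the following sketch (the function name is mine, and the struct is repeated here, with the data member trimmed, so the sketch compiles on its own):

```c
#include <pthread.h>

/* Same layout as 'struct shared' above, minus the data member. */
struct shared {
    pthread_mutex_t lock;
    pthread_cond_t  change;
    pthread_cond_t  observe;
    unsigned long   changed;
    unsigned long   observed;
};

/* Reader: block until the data has changed since we last looked,
 * then acknowledge the observation. */
void wait_and_observe(struct shared *shm, unsigned long *last_seen)
{
    pthread_mutex_lock(&shm->lock);
    while (shm->changed == *last_seen)          /* nothing new yet */
        pthread_cond_wait(&shm->change, &shm->lock);
    *last_seen = shm->changed;

    /* ... examine the shared data here, still holding the lock ... */

    shm->observed++;                            /* record one observation */
    pthread_cond_broadcast(&shm->observe);      /* wake a waiting writer */
    pthread_mutex_unlock(&shm->lock);
}
```

The while loop around pthread_cond_wait() is the predicate check that makes missed or spurious wakeups harmless.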
So I have a very high data acquisition rate of 16 MB/s. I am reading 4 MB of data into a buffer from a device file and then processing it. However, this write-then-read method was too slow for the project, so I would like to implement a double buffer in C.
To simplify my idea of the double buffer, I decided not to include reading from a device file. What I have created is a C program that spawns two separate threads, readThread and writeThread. I made readThread call my swap function, which swaps the pointers of the buffers.
This implementation is terrible because I am using shared memory outside of the mutex. I am actually slightly embarrassed to post it, but it will at least give you an idea of what I am trying to do. However, I cannot seem to come up with a practical way of reading and writing to separate buffers at the same time and then calling a swap once both threads have finished writing and reading.
Can someone please tell me if it's possible to implement double buffering, and give me an idea of how to use signals to control when the threads read and write?
Note that readToBuff (dumb name, I know) and writeToBuff aren't actually doing anything at present; they are blank functions.
Here is my code:
#include <stdlib.h>
#include <stdio.h>
#include <pthread.h>
pthread_t writeThread;
pthread_t readThread;
pthread_mutex_t buffer_mutex;
char buff1[4], buff2[4];
struct mutex_shared {
int stillReading, stillWriting, run_not_over;
char *writeBuff, *readBuff;
} SHARED;
void *writeToBuff(void *idk) {
while(!SHARED.run_not_over) {
SHARED.stillWriting = 1;
for(int i = 0; i < 4; i++) {
}
SHARED.stillWriting = 0;
while(SHARED.stillReading){};
}
printf("hello from write\n");
return NULL;
}
void *readToBuff(void *idk) {
while(!SHARED.run_not_over) {
SHARED.stillReading = 1;
for(int i = 0; i < 4; i++) {
}
while(SHARED.stillWriting){};
swap(writeThread,readThread);
}
printf("hello from read");
return NULL;
}
void swap(char **a, char **b){
pthread_mutex_lock(&buffer_mutex);
printf("in swap\n");
char *temp = *a;
*a = *b;
*b = temp;
SHARED.stillReading = 0;
//SHARED.stillWriting = 0;
pthread_mutex_unlock(&buffer_mutex);
}
int main() {
SHARED.writeBuff = buff1;
SHARED.readBuff = buff2;
printf("buff1 address %p\n", (void*) &buff1);
printf("buff2 address %p\n", (void*) &buff2);
printf("writeBuff address its pointing to %p\n", SHARED.writeBuff);
printf("readBuff address its pointing to %p\n", SHARED.readBuff);
swap(&SHARED.writeBuff,&SHARED.readBuff);
printf("writeBuff address its pointing to %p\n", SHARED.writeBuff);
printf("readBuff address its pointing to %p\n", SHARED.readBuff);
pthread_mutex_init(&buffer_mutex,NULL);
printf("Creating Write Thread\n");
if (pthread_create(&writeThread, NULL, writeToBuff, NULL)) {
printf("failed to create thread\n");
return 1;
}
printf("Thread created\n");
printf("Creating Read Thread\n");
if(pthread_create(&readThread, NULL, readToBuff, NULL)) {
printf("failed to create thread\n");
return 1;
}
printf("Thread created\n");
pthread_join(writeThread, NULL);
pthread_join(readThread, NULL);
exit(0);
}
Using a pair of semaphores seems like it would be easier. Each thread has its own semaphore to indicate that a buffer is ready to be read into or written from, and each thread has its own index into a circular array of structures, each containing a pointer to a buffer and a buffer size. For double buffering, the circular array contains only two structures.
The initial state sets the read thread's semaphore count to 2, the read index to the first buffer, the write thread's semaphore count to 0, and the write index to the first buffer. The write thread is then created and immediately waits on its semaphore.
The read thread waits for a non-zero count on its semaphore (sem_wait), reads into a buffer, sets the buffer size, increments the write thread's semaphore count (sem_post), and advances its index into the circular array of structures.
The write thread waits for a non-zero count on its semaphore (sem_wait), writes from a buffer (using the size set by the read thread), increments the read thread's semaphore count (sem_post), and advances its index into the circular array of structures.
When reading is complete, the read thread sets a structure's buffer size to zero to indicate the end of the read chain, then waits for the write thread to "return" all buffers.
The circular array of structures could include more than just two structures, allowing more data to be queued between the threads.
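The scheme above might be sketched like this (all names and sizes are mine, and the static per-thread indexes assume exactly one reader and one writer thread):

```c
#include <semaphore.h>
#include <string.h>

#define NBUF  2           /* double buffering: two slots */
#define BUFSZ 4096

struct slot {
    char   buf[BUFSZ];
    size_t size;          /* bytes read into buf; 0 marks end of stream */
};

static struct slot slots[NBUF];
static sem_t free_slots;  /* counted down by the reader before filling  */
static sem_t full_slots;  /* counted down by the writer before draining */

void buffers_init(void)
{
    sem_init(&free_slots, 0, NBUF);  /* all slots initially free */
    sem_init(&full_slots, 0, 0);     /* no slot filled yet */
}

/* Reader side: claim a free slot, fill it, hand it to the writer. */
void reader_put(const char *data, size_t n)
{
    static int idx = 0;              /* reader's own circular index */
    sem_wait(&free_slots);
    memcpy(slots[idx].buf, data, n);
    slots[idx].size = n;
    idx = (idx + 1) % NBUF;
    sem_post(&full_slots);
}

/* Writer side: claim a full slot, drain it, return it to the reader. */
size_t writer_get(char *out)
{
    static int idx = 0;              /* writer's own circular index */
    sem_wait(&full_slots);
    size_t n = slots[idx].size;
    memcpy(out, slots[idx].buf, n);
    idx = (idx + 1) % NBUF;
    sem_post(&free_slots);
    return n;
}
```

No explicit swap is needed: the two independent indexes chase each other around the ring, and the semaphores guarantee a slot is never filled and drained at the same time.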
I've had to use something similar for high speed data capture, but in this case, the input stream was faster than a single hard drive, so two hard drives were used, and the output alternated between two write threads. One write thread operated on the "even" buffers, the other on the "odd" buffers.
In the case of Windows, with its WaitForMultipleObjects() (something that just about every operating system other than POSIX has), each thread can use a mutex and a semaphore, along with its own linked-list-based message queue. The mutex controls queue ownership for queue updates; the semaphore indicates the number of items pending on a queue. To retrieve a message, a single atomic WaitForMultipleObjects() waits for the mutex and a non-zero semaphore count, and when both have occurred, decrements the semaphore count and unblocks the thread. A message sender just needs a WaitForSingleObject() on the mutex to update another thread's message queue, then posts (releases) the thread's semaphore and releases the mutex. This eliminates any priority issues between threads.
I have a situation where I need to access a variable in shared memory across threads. The variable is initially defined, and continually updated, in numerous places in the existing code. I'm adding code that will allow this existing code base to run as a background thread, but I need to read data from this shared variable.
My question is: do I need to add a mutex to the existing code base everywhere the variable is updated, or can I just add a mutex to the new code for the times when I'm reading the data? I created the small test case below, which seems to work out.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>    /* sleep() */
typedef struct my_data {
int shared;
}MY_DATA;
MY_DATA data;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
void *background(void *x_void_ptr)
{
int i = 0;
int sleep_time;
while(i < 10)
{
data.shared++;
printf("BACK thread, Data = %d\n", data.shared);
sleep_time = rand()%5;
sleep(sleep_time);
i++;
}
return NULL;
}
int main()
{
int sleep_time;
pthread_t bg_thread;
if(pthread_create(&bg_thread, NULL, background, NULL)) {
fprintf(stderr, "Error creating thread\n");
return 1;
}
MY_DATA *p_data = &data;
int i = 0;
while(i < 10)
{
pthread_mutex_lock(&lock);
printf("FOR thread, Data = %d\n", p_data->shared);
pthread_mutex_unlock(&lock);
sleep_time = rand()%5;
sleep(sleep_time);
i++;
}
// Finish up
if(pthread_join(bg_thread, NULL)) {
fprintf(stderr, "Error joining thread\n");
return 2;
}
return 0;
}
Output:
FOR thread, Data = 0
BACK thread, Data = 1
BACK thread, Data = 2
FOR thread, Data = 2
FOR thread, Data = 2
BACK thread, Data = 3
BACK thread, Data = 4
BACK thread, Data = 5
FOR thread, Data = 5
BACK thread, Data = 6
BACK thread, Data = 7
BACK thread, Data = 8
FOR thread, Data = 8
FOR thread, Data = 8
BACK thread, Data = 9
FOR thread, Data = 9
BACK thread, Data = 10
FOR thread, Data = 10
FOR thread, Data = 10
FOR thread, Data = 10
After running this a number of times, it looks like there is no data corruption (i.e. the foreground reads the correct data), but my instincts say that I need to have the mutex in both the foreground and background code.
Transferring material from my comment into an answer.
Note that all global memory in a process (except thread-local storage and the local variables in functions) is shared between threads. "Shared memory" is the term for memory shared between processes.
Whether the memory is being accessed by threads or processes, you need to ensure that access is properly managed (e.g. with mutexes) whenever there's more than one thread of execution that could be accessing the same memory at the same time. These days, it is seldom safe to assume you have a single core at work in a machine, so potential concurrent access is the norm.
No, your memory won't be corrupted, reading doesn't have any influence on that.
But you'll read in an inconsistent state, which is just as bad.
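To make the point concrete, here is a minimal sketch with both the update and the read under the same mutex (the names are mine); the reader's lock protects nothing unless the writer takes the same lock:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

/* Writer: the increment must happen under the same mutex the
 * reader uses, or the reader can see a half-updated state. */
void update(void)
{
    pthread_mutex_lock(&lock);
    shared++;
    pthread_mutex_unlock(&lock);
}

/* Reader: takes the same lock to get a consistent value. */
int read_shared(void)
{
    pthread_mutex_lock(&lock);
    int v = shared;
    pthread_mutex_unlock(&lock);
    return v;
}
```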
We are developing a simple application in Linux for Desktop computer. The scheme is very simple and as below:
A main process which deals with the external world interface (which gives some data periodically) and spawns and keeps track of child processes.
Various child processes which process this data and report back to main process from time to time.
The data coming in from the external-world interface arrives in chunks of around 240 KB, roughly one chunk per millisecond. All the child processes work on the same data, i.e. the complete chunk is sent to every child process when it arrives.
The number of child processes is not fixed, can vary from 4 to 20. The scheme adopted for inter process communication is as follows:
Shared memory capable of holding multiple data chunks is created by all the processes using common key and using shmget() and shmat() APIs. Main process writes to this shared memory and all the child processes read from this memory.
To inform the child processes that new data chunk have arrived I have used pthread_cond_broadcast() API. The conditional variable and the corresponding mutex used for this purpose reside in a small separate shared memory and are initialized to default attributes in the main process.
So whenever new data chunk arrives (roughly once per 1 ms) main process calls pthread_cond_broadcast() and the child processes which are waiting on pthread_cond_wait() read this data from the shared memory and process it. The problem I am facing is:
Depending on the processor load, sometimes the pthread signals get lost, i.e. they are delivered to only some, or none, of the waiting child processes. This severely affects the data processing, as data continuity is lost (and the child process is not even aware of it). The processing time of a child process is on average 300 microseconds, and this application runs on a multicore processor.
To pin down the problem I even created a dummy application with one main process and several dummy child processes that do nothing but wait on pthread_cond_wait(). From the main process I called pthread_cond_broadcast() every 1 ms, incrementing and printing a count; similarly, every time a pthread signal was received in a child process, another count was incremented and printed. When I ran this test program, I found that after some time the receivers' count began to lag behind the sender's count, and the gap kept increasing. Am I right in my understanding that this was due to some pthread signals not being delivered? Are there any other fast and reliable IPC mechanisms?
I even tried the same thing using Internet-domain sockets with broadcast UDP datagrams (only for the synchronization, while the data was still read from the shared memory). But here too I noticed that as the number of child processes increased, the synchronization signals were getting lost. Please share your thoughts and ideas.
Consider the test program as below:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <pthread.h>
#define SHM_KEY 3579
#define NumOfChildProc 20
int Packets_Tx = 0, Packets_Rx = 0;
void ChildProc(void)
{
/* Create the shared memory with same key as SHM_KEY
* Declare the condition and mutex and assign them the shared memory
address */
while(1)
{
pthread_mutex_lock(PTMutex);
pthread_cond_wait(PTCond, PTMutex);
pthread_mutex_unlock(PTMutex);
printf("From CP [%d]: Packets Received = %d\n",getpid(), Packets_Rx++);
}
}
int main(void)
{
int shmid, i;
pid_t l_pid;
char* SigBlock;
pthread_condattr_t condattr;
pthread_mutexattr_t mutexattr;
pthread_cond_t* PTCond;
pthread_mutex_t* PTMutex;
shmid = shmget(SHM_KEY, (sizeof(pthread_cond_t) + sizeof(pthread_mutex_t)), IPC_CREAT | 0666);
if(shmid < 0)
{
perror("shmget");
}
SigBlock = (char *)shmat(shmid, NULL, 0);
if(SigBlock == (char *) -1)
{
perror("shmat");
}
PTCond = (pthread_cond_t*) SigBlock;
PTMutex = (pthread_mutex_t*)(SigBlock + sizeof(pthread_cond_t));
pthread_condattr_init(&condattr);
pthread_condattr_setpshared(&condattr, PTHREAD_PROCESS_SHARED);
pthread_cond_init(PTCond, &condattr);
pthread_condattr_destroy(&condattr);
pthread_mutexattr_init(&mutexattr);
pthread_mutexattr_setpshared(&mutexattr, PTHREAD_PROCESS_SHARED);
pthread_mutex_init(PTMutex, &mutexattr);
pthread_mutexattr_destroy(&mutexattr);
for(i=0; i<NumOfChildProc; i++)
{
l_pid = fork();
if(l_pid == 0)
ChildProc();
}
sleep(1);
while(1)
{
/* Send pthread broadcast and increment the packets count */
printf("From Main Process : Packets Sent = %d\n", Packets_Tx++);
pthread_cond_broadcast(PTCond);
usleep(1000);
}
}
pthread_cond_broadcast() signals do not get "lost". Every thread that is waiting in a pthread_cond_wait() call at the point where the broadcast is sent will be woken. Your problem is almost certainly that not every thread is waiting in pthread_cond_wait() at the point where pthread_cond_broadcast() is called: some threads may still be processing the last batch of data when the broadcast is sent, in which case they will "miss" the broadcast.
A pthread condition variable should always be paired with a suitable condition (or predicate) over shared state, and a thread should call pthread_cond_wait() only after checking the state of that predicate.
For example, in your case you might have a shared variable which is the block number of the latest chunk to have arrived. In the main thread, it would increment this (while holding the mutex) before broadcasting the condition variable:
pthread_mutex_lock(&lock);
latest_block++;
pthread_cond_broadcast(&cond);
pthread_mutex_unlock(&lock);
In the worker threads, each thread would keep track of the last block it has processed in a local variable, and check to see if another block has arrived before calling pthread_cond_wait():
pthread_mutex_lock(&lock);
while (latest_block <= my_last_block)
pthread_cond_wait(&cond, &lock);
pthread_mutex_unlock(&lock);
This will cause the worker to wait until the main thread has incremented latest_block to be greater than my_last_block (the last block that was processed by this worker).
Your example test code has the same problem: sooner or later the main process will call pthread_cond_broadcast() while a child process is locking or unlocking the mutex, or is inside the printf() call.
A version of your example code, updated to use the fix I mentioned, does not show this problem:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <pthread.h>
#define SHM_KEY 9753
#define NumOfChildProc 20
int Packets_Tx = 0, Packets_Rx = 0;
struct {
pthread_cond_t PTCond;
pthread_mutex_t PTMutex;
int last_packet;
} *shared_data;
void ChildProc(void)
{
int my_last_packet = 0;
/* Create the shared memory with same key as SHM_KEY
* Declare the condition and mutex and assign them the shared memory
address */
while(1)
{
pthread_mutex_lock(&shared_data->PTMutex);
while (shared_data->last_packet <= my_last_packet)
pthread_cond_wait(&shared_data->PTCond, &shared_data->PTMutex);
pthread_mutex_unlock(&shared_data->PTMutex);
printf("From CP [%d]: Packets Received = %d\n",getpid(), Packets_Rx++);
my_last_packet++;
}
}
int main(void)
{
int shmid, i;
pid_t l_pid;
pthread_condattr_t condattr;
pthread_mutexattr_t mutexattr;
shmid = shmget(SHM_KEY, sizeof *shared_data, IPC_CREAT | 0666);
if(shmid < 0)
{
perror("shmget");
}
shared_data = shmat(shmid, NULL, 0);
if(shared_data == (void *) -1)
{
perror("shmat");
}
pthread_condattr_init(&condattr);
pthread_condattr_setpshared(&condattr, PTHREAD_PROCESS_SHARED);
pthread_cond_init(&shared_data->PTCond, &condattr);
pthread_condattr_destroy(&condattr);
pthread_mutexattr_init(&mutexattr);
pthread_mutexattr_setpshared(&mutexattr, PTHREAD_PROCESS_SHARED);
pthread_mutex_init(&shared_data->PTMutex, &mutexattr);
pthread_mutexattr_destroy(&mutexattr);
shared_data->last_packet = 0;
for(i=0; i<NumOfChildProc; i++)
{
l_pid = fork();
if(l_pid == 0)
ChildProc();
}
sleep(1);
while(1)
{
/* Send pthread broadcast and increment the packets count */
printf("From Main Process : Packets Sent = %d\n", Packets_Tx++);
pthread_mutex_lock(&shared_data->PTMutex);
shared_data->last_packet++;
pthread_cond_broadcast(&shared_data->PTCond);
pthread_mutex_unlock(&shared_data->PTMutex);
usleep(30);
}
}
How is memory shared in the following scenarios?
Between parent and child processes
Between two unrelated processes
In which part of physical memory does the shared memory (or any other IPC mechanism used for communicating between processes) exist?
Here is a program with an explanation of how memory is shared between a parent and a child process.
/*
SHARING MEMORY BETWEEN PROCESSES
In this example, we show how two processes can share a common
portion of the memory. Recall that when a process forks, the
new child process has an identical copy of the variables of
the parent process. After fork the parent and child can update
their own copies of the variables in their own way, since they
dont actually share the variable. Here we show how they can
share memory, so that when one updates it, the other can see
the change.
*/
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>    /* This file is necessary for using shared
                           memory constructs */
#include <sys/types.h>
#include <sys/wait.h>   /* wait() */
#include <unistd.h>     /* fork(), sleep() */
int main(void)
{
int shmid, status;
int *a, *b;
int i;
/*
The operating system keeps track of the set of shared memory
segments. In order to acquire shared memory, we must first
request the shared memory from the OS using the shmget()
system call. The second parameter specifies the number of
bytes of memory requested. shmget() returns a shared memory
identifier (SHMID) which is an integer. Refer to the online
man pages for details on the other two parameters of shmget()
*/
shmid = shmget(IPC_PRIVATE, 2*sizeof(int), 0777|IPC_CREAT);
/* We request an array of two integers */
/*
After forking, the parent and child must "attach" the shared
memory to its local data segment. This is done by the shmat()
system call. shmat() takes the SHMID of the shared memory
segment as input parameter and returns the address at which
the segment has been attached. Thus shmat() returns a char
pointer.
*/
if (fork() == 0) {
/* Child Process */
/* shmat() returns a char pointer which is typecast here
to int and the address is stored in the int pointer b. */
b = (int *) shmat(shmid, 0, 0);
for( i=0; i< 10; i++) {
sleep(1);
printf("\t\t\t Child reads: %d,%d\n",b[0],b[1]);
}
/* each process should "detach" itself from the
shared memory after it is used */
shmdt(b);
}
else {
/* Parent Process */
/* shmat() returns a char pointer which is typecast here
to int and the address is stored in the int pointer a.
Thus the memory locations a[0] and a[1] of the parent
are the same as the memory locations b[0] and b[1] of
the parent, since the memory is shared.
*/
a = (int *) shmat(shmid, 0, 0);
a[0] = 0; a[1] = 1;
for( i=0; i< 10; i++) {
sleep(1);
a[0] = a[0] + a[1];
a[1] = a[0] + a[1];
printf("Parent writes: %d,%d\n",a[0],a[1]);
}
wait(&status);
/* each process should "detach" itself from the
shared memory after it is used */
shmdt(a);
/* Child has exited, so parent process should delete
the created shared memory. Unlike attach and detach,
which is to be done for each process separately,
deleting the shared memory has to be done by only
one process, after making sure that no one else
will be using it
*/
shmctl(shmid, IPC_RMID, 0);
}
}
/*
POINTS TO NOTE:
In this case we find that the child reads all the values written
by the parent. Also the child does not print the same values
again.
1. Modify the sleep in the child process to sleep(2). What
happens now?
2. Restore the sleep in the child process to sleep(1) and modify
the sleep in the parent process to sleep(2). What happens now?
Thus we see that when the writer is faster than the reader, then
the reader may miss some of the values written into the shared
memory. Similarly, when the reader is faster than the writer, then
the reader may read the same values more than once. Perfect
inter-process communication requires synchronization between the
reader and the writer. You can use semaphores to do this.
Further note that "sleep" is not a synchronization construct.
We use "sleep" to model some amount of computation which may
exist in the process in a real world application.
Also, we have called the different shared memory related
functions such as shmget, shmat, shmdt, and shmctl, assuming
that they always succeed and never fail. This is done to
keep this program simple. In practice, you should always check
the return values from these functions and exit if there is
an error.
*/