POSIX shared memory - How many copies of memory are there - c

Situation:
If processes A and B each use mmap() to create a shared memory mapping, with the same shared memory object /shm-a as the backing file.
My guess:
I originally thought there was only one copy of the memory, which both processes read and write.
But later I started to think there are actually 3 copies, right? Each process has one copy created by mmap(), and the 3rd copy is the shared memory object itself, used to sync between processes, but I am not sure.
The questions are:
Then how many copies of memory are there? 1 or n+1 (where n is the process count)?
If it's n+1, wouldn't this be a waste of memory? And is it proper for a process to read/write the shared memory object via its fd directly?

Then how many copies of memory are there? 1 or n+1 (where n is the process count)?
There is only one copy of the shared memory.
The same physical memory is mapped into different processes. But it may be mapped to different addresses.
And is it proper for a process to read/write the shared memory object via its fd directly?
Yes it is. That is, in fact, the purpose of shared memory. What one process writes into shared memory can be read by the other process. This is a very fast form of IPC. But you do have to be careful in how you use it. In particular, you need to worry about concurrent access, and about sharing pointers in shared memory.

Related

How to store nodes of a list into shared memory

I am trying to make many clients communicate with each other via many terminals. I have forks inside my program and I create pipes so the clients can read from/write to other clients. Because I create many processes, I need shared memory to store some info, and in particular I want to store the nodes that are created by each child. How can I do this?
This is my struct:
typedef struct client {
    char *numofclient;
    struct client *nextclient;
} client;
Before forking anything, create a shared memory area using mmap(). Read the man page and use the shared flags. On Windows it's different, so look up VirtualAlloc, and of course you can't fork().
You'll need a memory allocator for your shared memory. That can be super easy: just increment a char pointer for an allocation and never free anything. Or it can be as complex as you want. You may be able to find a library online.
If you need a mutex create one in the shared memory area and be sure to use the flags for a shared mutex.
Since you are forking, you can use pointers, because the shared memory will remain mapped at the same address in each process copy. Otherwise you'd need to use offsets from the start of the mapping.
You can also use System V shared memory via shmget(); read the man pages.
You can decide an upper limit on how many processes will be created and size the segment you pass to shmget() accordingly.
Then, whenever a child process wants to store list nodes, it can attach to the shared memory and append its data there.

Is it possible to have persistent memory allocated to a process?

Suppose process A allocates some memory in which it stores some data. Let's say it is a set of key -> value pairs. It is expensive to create these key -> value pairs, so I want to allocate the memory such that even if process A dies for some reason, when it is restarted it can still access this data in RAM. I understand I can store the data in a file and read it back when A restarts. I want to explore whether other methods are available, assuming the amount of memory available is not an issue.
Is there a mechanism (API) to allocate memory such that it is pinned in memory until freed? If not, is it possible to achieve this with shared memory techniques? For example, two processes allocate and share the same memory, so even if one process dies the memory is not freed because the other is still alive. When the dead process is restarted, can it regain access to that shared memory? If yes, how?
Finally if this is not possible I am curious why the kernel does not provide such a mechanism?
Yes. What you're looking for is called Shared Memory segments. Run man 7 shm_overview to get the overview but basically it's:
shm_open - allocate or re-open a shared memory segment (POSIX)
shmget - allocate a shared memory segment (System V)
shmat - attaches to a shared memory segment (System V)
shmdt - detaches from a shared memory segment (System V)
shm_unlink - remove the shared memory segment (POSIX)
If you have a copy of "Advanced UNIX Programming", 2nd edition, the chapter "Advanced Interprocess Communication" covers this in more detail in the sections "System V Shared Memory" and "POSIX Shared Memory".
Also, this feature predates Linux, it's been around since 1983 assuming the dates on https://en.wikipedia.org/wiki/UNIX_System_V are correct.

Memory Management for Mapped Data in Shared Memory Segments

I'm working on a project in C that uses shared memory for IPC on a Linux system. However, I'm a little bit confused about memory management in these segments. I'm using the POSIX API for this project.
I understand how to create the shared segments, and that these persist until a reboot if you fail to properly remove them with shm_unlink(). Additionally, I understand how to do the actual mapping and unmapping with mmap and munmap respectively. However, the usage of these operations and how they affect the data stored in these shared segments is confusing me.
Here is what I'm trying to properly understand:
Let's say I create a segment using shm_open() with the O_CREAT flag. This gives me a file descriptor that I've named msfd in the example below. Now I have a struct that I map into that address space with the following:
mystruct* ms = (mystruct*)mmap(NULL, sizeof(mystruct), PROT_READ | PROT_WRITE, MAP_SHARED, msfd, 0);
//set the elements of the struct here using ms->element = X as usual
Part 1)
Here's where my confusion begins. Let's say that this process is now done accessing that location, since it was just setting data for another process to read. Do I still call munmap()?
I want the other process to still have access to all of the data that the current process has set. Normally, you wouldn't call free() on a malloc'ed pointer until it is permanently no longer needed. However, I understand that when this process exits, the unmapping happens automatically anyway. Is the data persisted inside the segment, or does the segment just stay reserved with its allotted size and name?
Part 2)
We're now in the other application, the process that needs to access and read from that shared segment. I understand that we now open that segment with shm_open() and then perform the same mapping operation with mmap(). Now we have access to the structure in that segment. When we call munmap() from this process (NOT the one that created the data), it "unlinks" us from that pointer, yet the data is still accessible. Does this assume that process 1 (the creator) has NOT called munmap()?
Is the data persisted inside the segment,
Yes.
does that segment just get reserved with its allotted size and name?
Also yes.
Does this assume that process 1 (the creator) has NOT called munmap()?
No.
The shared memory gets created via shm_open() with O_CREAT (the memory being taken from available OS memory), and from that moment on it carries whatever content has been written into it, until it is given back to the OS via shm_unlink().
shm_open() is system-oriented, in the sense that the (shared) memory is a system-wide (not per-process) resource.
mmap() and munmap() are process-oriented; that is, they map and unmap the system resource shared memory into and out of the process's address space.

what is a named memory block

I know that, in general, a named memory block is shared memory which you can assign and access by a name.
What I want to know is: what are the advantages of using a named block of memory, and when should it be used in terms of memory management?
What you are describing has different names depending upon the operating system.
It is a range of pages that can be mapped to the address space of multiple processes. It really has two components:
1) The storage in the page file
2) The physical memory--with paging, there might not be physical memory associated with it all the time.
The name serves as the way of identifying the shared memory so that it can be mapped to the process address space.
It is used for sharing data between processes. Named shared memory was very commonly used with database systems. It is the fastest method of interprocess communication, but it requires some kind of locking mechanism that the application has to implement. Often it is used with one writer and multiple readers.
If processes A&B map to the shared memory block, and process A writes to the block, B immediately sees the change.

Is it possible to turn a segment of shared memory into private memory?

Say I have a C program (in a Linux environment) that uses shared memory to send data to and from several processes. Let's say later in the program the parallel processes finish and I have only one process left. Now I want to fork() off another process, but this time I don't want that memory segment to be shared: I want both the parent and child process to be able to modify the values without affecting one another, as if it were private memory. Is there any way to do this, i.e. convert shared memory to private memory that still occupies the same place in virtual memory, or make the shared memory copy-on-write?
Well, the only way I can think of from a portable POSIX API to do this is to have the child map some new segment of the same size somewhere else (random), copy the data over, and then detach the original segment and re-attach the new segment to the correct address. Sounds ugly.
You can unlink the new segment after you are done to prevent other people from attaching to it.
Now that I look at the man page: if you have the fd to the shm object, you could try re-mmapping the shm object as MAP_PRIVATE in the child at the right address. However, "It is unspecified whether changes made to the file after the mmap() call are visible in the mapped region," so you either need to test that and live dangerously, or use the other technique.
