When would you use mmap? (C)

So, I understand that if you need some dynamically allocated memory, you can use malloc(). For example, your program reads a variable-length file into a char[]. You don't know in advance how big to make your array, so you allocate the memory at runtime.
I'm trying to understand when you would use mmap(). I have read the man page and to be honest, I don't understand what the use case is.
Can somebody explain a use case to me in simple terms? Thanks in advance.

mmap can be used for a few things. First, a file-backed mapping. Instead of allocating memory with malloc and reading the file, you map the whole file into memory without explicitly reading it. Now when you read from (or write to) that memory area, the operations act on the file, transparently. Why would you want to do this? It lets you easily process files that are larger than the available memory using the OS-provided paging mechanism. Even for smaller files, mmapping reduces the number of memory copies.
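For illustration, a minimal sketch of a read-only file-backed mapping (the filename is invented):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDONLY);   /* hypothetical file */
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    struct stat sb;
    if (fstat(fd, &sb) == -1) { perror("fstat"); return EXIT_FAILURE; }

    /* Map the whole file without reading it; the kernel faults
     * pages in on first access and can evict them under pressure. */
    char *data = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
    close(fd);   /* the mapping stays valid after close */

    /* Count newlines by touching the mapped pages directly. */
    size_t lines = 0;
    for (off_t i = 0; i < sb.st_size; i++)
        if (data[i] == '\n')
            lines++;
    printf("%zu lines\n", lines);

    munmap(data, sb.st_size);
    return EXIT_SUCCESS;
}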
mmap can also be used for an anonymous mapping. This mapping is not backed by a file, and is basically a request for a chunk of memory. If that sounds similar to malloc, you are right. In fact, most implementations of malloc will internally use an anonymous mmap to provide a large memory area.
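Roughly what that looks like; a sketch, not any particular malloc implementation's real code:

#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Ask the kernel for a private, zero-filled region not backed by
 * any file, much as malloc does for large allocations. */
void *big_alloc(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

void big_free(void *p, size_t len)
{
    munmap(p, len);   /* the anonymous analogue of free() */
}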
Another common use case is to have multiple processes map the same file as a shared mapping to obtain a shared memory region. The file doesn't have to be actually written to disk. shm_open is a convenient way to make this happen.

Whenever you need to read/write blocks of data of a fixed size, it's much simpler (and often faster) to map the data file on disk into memory with mmap and access it directly, rather than allocate memory, read the file, access the data, potentially write the data back to disk, and free the memory.
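For example, a file holding an array of fixed-size records can be edited in place; the record layout here is invented for the sketch:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical fixed-size record. */
struct record { int id; double value; };

int bump_value(const char *path, size_t index)
{
    int fd = open(path, O_RDWR);
    if (fd == -1)
        return -1;

    struct stat sb;
    if (fstat(fd, &sb) == -1) { close(fd); return -1; }

    /* MAP_SHARED: stores go back to the file itself. */
    struct record *recs = mmap(NULL, sb.st_size,
                               PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (recs == MAP_FAILED)
        return -1;

    recs[index].value += 1.0;   /* edit the file in place */

    munmap(recs, sb.st_size);   /* msync(2) can force write-back sooner */
    return 0;
}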

Consider the famous producer-consumer problem. The producer creates a shared memory object using shm_open(), and, since our goal is to make the producer and consumer share data, we use the mmap syscall to map that shared memory region into the process's address space. The consumer can then open the same shared memory object (shared memory objects are referred to by a "name") and mmap it into its own address space, exactly as the producer did, and read what the producer wrote.
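A minimal sketch of the producer side, with an object name and size I made up; error handling is trimmed, and older glibc needs -lrt at link time:

#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"   /* hypothetical object name */
#define SHM_SIZE 4096

int main(void)
{
    /* Create the named shared memory object and give it a size. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, SHM_SIZE) == -1)
        return 1;

    /* MAP_SHARED makes the writes visible to other processes. */
    char *buf = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    close(fd);
    if (buf == MAP_FAILED)
        return 1;

    strcpy(buf, "hello from the producer");

    /* The consumer does shm_open(SHM_NAME, O_RDWR, 0), the same
     * mmap, and reads the same bytes. */
    munmap(buf, SHM_SIZE);
    return 0;
}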

Related

Can memory map be used for an already allocated memory buffer?

I'd like to be able to share pre-allocated memory between processes.
Searching for an example of how to do it, I can only find ways that create a new file with shm_open, then use mmap and memcpy. This is a problem, since the buffers are quite large and I don't control their allocation (they come from a camera API).
Is there a way to take an existing, pre-allocated buffer and share its contents with another process, without memcpy, using mmap? Or using some method other than mmap?

How to memory map data already in memory to a file

I am working on a program which needs to load up to a few hundred images into memory at once. Each file takes up 100 MB, so I don't really want to keep all of them in physical memory. I want to memory map the files so the operating system will swap them out when necessary. Here is what I am wondering: if I already have the data I want in the file in malloc'd memory, should I open a file descriptor, write the data to the file using write(), and then map the file? Or can I memory map a new file and then copy the data in using memcpy()? If I create a new file and, when I call mmap, give it a length larger than the file size, will it just increase the size of the file on disk?
From the POSIX standard: “The mmap() function can be used to map a region of memory that is larger than the current size of the object. Memory access within the mapping but beyond the current end of the underlying objects may result in SIGBUS signals being sent to the process.” (http://pubs.opengroup.org/onlinepubs/9699919799/)
That said, you could try mmap() with MAP_FIXED over the same memory region you just wrote from, if you got it from a page-aligned aligned_alloc() rather than malloc(), or free and then mmap(). But note that the OS will page memory you haven’t used for a while to swap anyway, and you can help it out with posix_madvise().
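Given that warning, a hedged sketch of the asker's second option: size the file with ftruncate() before mapping, so access never goes past the end of the object (the function name is mine):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Copy an in-memory buffer into a new file via a shared mapping. */
int dump_to_file(const char *path, const void *src, size_t len)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd == -1)
        return -1;

    /* Grow the file first: touching a mapping beyond the object's
     * current end risks SIGBUS, per the POSIX text quoted above. */
    if (ftruncate(fd, (off_t)len) == -1) { close(fd); return -1; }

    void *dst = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (dst == MAP_FAILED)
        return -1;

    memcpy(dst, src, len);   /* the pages are written back to the file */
    munmap(dst, len);
    return 0;
}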

How do I implement dynamic shared memory resizing?

Currently I use shm_open to get a file descriptor and then use ftruncate and mmap whenever I want to add a new buffer to the shared memory. Each buffer is used individually for its own purposes.
Now what I need to do is arbitrarily resize buffers.
And also munmap buffers and reuse the free space again later.
The only solution I can come up with for the first problem is: ftruncate(file_size + old_buffer_size + extra_size), mmap, copy the data across into the new buffer, and then munmap the original data. This looks very expensive to me and there is probably a better way. It also entails removing the original buffer every time.
For the second problem I don't even have a bad solution; I clearly can't move memory around every time a buffer is removed. And if I keep track of free memory and use it whenever possible, it will slow down the allocation process, as well as leave me with unused bits and pieces in between.
I hope this is not too confusing.
Thanks
As best as I understand it, you need to grow (or shrink) an existing memory mapping.
Under Linux, shared memory is implemented as a file located in the /dev/shm memory filesystem. All operations on this file (and its file descriptors) are the same as on regular files.
If you want to grow an existing mapping, first expand the file with ftruncate (as you wrote), then use mremap to expand the mapping to the requested size.
If you store pointers that point into this region, you may have to update them, but first try calling mremap with flags set to 0. In that case the system tries to grow the existing mapping in place (if there is no collision with another reserved memory region) and the pointers remain valid.
If that option is not available, use the MREMAP_MAYMOVE flag. In that case the system remaps to another location, but it mostly does so efficiently (no copy is performed by the system; the page tables are updated); you then update your pointers.
Shrinking works the same way, in the reverse order.
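A Linux-only sketch of the grow path just described (mremap is not POSIX; the helper name is mine):

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Grow a shared mapping backed by fd from old_size to new_size. */
void *grow_mapping(int fd, void *addr, size_t old_size, size_t new_size)
{
    /* Grow the backing object first, so access past the old end
     * of the file cannot raise SIGBUS. */
    if (ftruncate(fd, (off_t)new_size) == -1)
        return MAP_FAILED;

    /* Try to grow in place (flags = 0): existing pointers stay valid. */
    void *p = mremap(addr, old_size, new_size, 0);
    if (p == MAP_FAILED)
        /* Fall back to letting the kernel move the mapping; only the
         * page tables move, not the data, but stored pointers into
         * the region must then be updated by the caller. */
        p = mremap(addr, old_size, new_size, MREMAP_MAYMOVE);
    return p;
}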
I wrote an open source library for just this purpose:
rszshm - resizable pointer-safe shared memory
To quote from the description page:
To accommodate resizing, rszshm first maps a large, private, noreserve map. This serves to claim a span of addresses. The shared file mapping then overlays the beginning of the span. Later calls to extend the mapping overlay more of the span. Attempts to extend beyond the end of the span return an error.
I extend a mapping by calling mmap with MAP_FIXED at the original address, and with the new size.
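A sketch of that reserve-then-overlay trick as I understand the description (the object name and sizes are invented; this mirrors the idea, not rszshm's actual code):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define SPAN (1UL << 32)   /* claim 4 GiB of address space up front */

int main(void)
{
    /* Reserve a span of addresses: private, inaccessible, and with
     * no swap reserved. This costs address space, not memory. */
    void *base = mmap(NULL, SPAN, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    int fd = shm_open("/demo_rsz", O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, 1 << 20) == -1)
        return 1;

    /* Overlay the file mapping on the start of the span. */
    void *p = mmap(base, 1 << 20, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* To extend: ftruncate(fd, bigger) and mmap MAP_FIXED again at
     * base with the bigger size. base never changes, so pointers
     * into the region stay valid. */
    return 0;
}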

How to copy a ram_base file to disk efficiently

I want to copy a large ram-based file (located in the /dev/shm directory) to local disk. Is there some way to do an efficient copy, instead of reading characters one by one or creating another piece of memory? I can use only the C language here. Is there any way I can put the memory file directly to disk? Thanks!
I would mmap() the files and do memcpy() between them.
Thank you guys for the help! I made it by mmap'ing the ram-based file and writing the entire block directly to the destination. memcpy was not used because I am actually writing to a parallel file system (PVFS), which does not support the mmap operation.
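What that looks like, roughly (paths invented, error checking trimmed, and a robust version would loop on short writes):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int src = open("/dev/shm/tmp", O_RDONLY);
    struct stat sb;
    if (src == -1 || fstat(src, &sb) == -1)
        return 1;

    /* Map the ram-based file: it already lives in memory, so this
     * just yields a pointer to it, with no extra copy made. */
    char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, src, 0);
    close(src);
    if (p == MAP_FAILED)
        return 1;

    /* Write the whole block straight to the destination file. */
    int dst = open("/mnt/pvfs/tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (dst == -1 || write(dst, p, sb.st_size) != sb.st_size)
        return 1;
    close(dst);

    munmap(p, sb.st_size);
    return 0;
}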
/dev/shm is shared memory, so one way to copy it would be to open it as shared memory, but frankly I don't think you will gain anything. When writing your memory file to disk, the bottleneck will be the disk. Just be sure to write the data in big chunks and you should be fine.
You can just copy it like any other file:
cp /dev/shm/tmp ~/tmp
So, a quick, simple way is to issue a cp command via system().
You could try to see if the splice system call works for this. I'm not sure if it will, since it has some restrictions about the types of files it can work with, but if it does you would call it repeatedly with page-sized (or some multiple of the page size) requests until it finished, and the kernel would handle it very efficiently.
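A hedged sketch of that approach (Linux-only; splice requires a pipe at one end of each call, so the data is shuttled through an intermediate pipe; the helper name is mine):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Copy src_fd to dst_fd with splice(2), moving data inside the
 * kernel rather than through a userspace buffer. */
static int splice_copy(int src_fd, int dst_fd)
{
    int p[2];
    if (pipe(p) == -1)
        return -1;

    for (;;) {
        /* Move up to 64 KiB from the source into the pipe. */
        ssize_t in = splice(src_fd, NULL, p[1], NULL,
                            1 << 16, SPLICE_F_MOVE);
        if (in <= 0)
            break;                  /* 0 = EOF, -1 = error */

        /* Drain the pipe into the destination. */
        while (in > 0) {
            ssize_t out = splice(p[0], NULL, dst_fd, NULL,
                                 (size_t)in, SPLICE_F_MOVE);
            if (out <= 0) {
                close(p[0]); close(p[1]);
                return -1;
            }
            in -= out;
        }
    }

    close(p[0]);
    close(p[1]);
    return 0;
}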
If this doesn't work, you'll need to do either mmap or plain old read/write.
Reading and writing in memory-page-sized chunks makes things much more efficient. It can be even more efficient if your buffers are page aligned, since that opens up the opportunity for the kernel to just move the data to/from your process's memory via memory-management trickery rather than actually copying it around.
The only thing you can do is read() in page-size-aligned chunks. I'm assuming you need to guarantee the data as written, which is going to mean bypassing buffers via posix_fadvise() or using O_DIRECT (I typically use posix_fadvise(), but O_DIRECT is appropriate here).
In this case, the speed of the media being written to alone dictates how quickly this will happen.
If you don't need to bypass buffers, the operation will complete faster, but there's no guarantee that the data will actually be written in the event of a reboot / power outage / etc. Since the source of the data is in shared memory, I'm (again) guessing you want the write to be guaranteed.
The only thing you can optimize is how long it takes read() to get data from shared memory into your own address space, which page size aligned chunks will improve.

mmaping large files(for persistent large arrays)

I'm implementing persistent large constant arrays via mmap. Are there any tips and tricks or gotchas one should be aware of when using mmap?
All pointers that are stored inside the mmap'd region should be done as offsets from the base of the mmap'd region, not as real pointers! You won't necessarily be getting the same base address when you mmap the region on the next run of the program. (I have had to clean up code that made incorrect assumptions about mmap region base address constancy).
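For example, instead of storing a raw next pointer inside the file, store an offset and rebuild the pointer from this run's base address (the node layout here is invented):

#include <stddef.h>
#include <stdint.h>

/* A node stored inside the mmap'd file: links are offsets from the
 * base of the mapping, never raw pointers. */
struct node {
    int32_t  value;
    uint64_t next_off;   /* 0 means "no next node" */
};

/* Convert a stored offset back to a pointer using the base address
 * that this particular run of the program got from mmap. */
static inline struct node *node_at(void *base, uint64_t off)
{
    return off ? (struct node *)((char *)base + off) : NULL;
}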
This is the most straightforward use case for mmap(), so there shouldn't be much to trip you up.
You are effectively just loading a large constant array. Being constants, you shouldn't need to worry about synchronization. It would be advisable to make sure the prot parameter is set to PROT_READ only, since you won't be writing.
If one or more programs using the constants are going to be run continually, it might be worthwhile to have a separate program that loads the data and keeps it resident. Runs of the other programs then essentially just do a shared-memory attach rather than continually reading the file into memory.
Make sure you check for restrictions on open file size or memory usage. On Linux there is a built-in shell command, ulimit. Run ulimit -a to see the current settings.
Flush writes to the in-memory array to the file with the msync(2) syscall or else they may stay in memory until munmap(2) and there may be a power outage or something before then!
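A minimal sketch of forcing that write-back (the helper name is mine):

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/* Flush dirty pages of a shared mapping back to the file now,
 * instead of waiting for munmap or the kernel's periodic writeback. */
static int flush_region(void *addr, size_t length)
{
    if (msync(addr, length, MS_SYNC) == -1) {
        perror("msync");
        return -1;
    }
    return 0;
}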
If multiple processes are mmap'ing the same memory region shared with read and write privileges, make sure that only one is writing to it at a time to avoid corrupting your data. Or use file locking or some other means of synchronization.
