How can you limit RAM consumption in a process? - c

How can you limit the physical memory consumption of a C program from within the source code on a Linux 2.6.32 machine?
I need to determine the type of page replacement algorithm the system is using.
The problem is that without limiting the number of pages a process can have in memory, it becomes difficult to analyze the pattern of page faults to determine the page replacement algorithm.
Also, I don't have root access on the machine.

Use setrlimit(RLIMIT_MEMLOCK, ...).
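A minimal sketch of such a call from inside the program (the 16 MB figure is an arbitrary example; note that RLIMIT_MEMLOCK bounds memory locked with mlock()/mlockall(), and an unprivileged process can lower its limits but not raise them):

    /* Sketch: lower this process's RLIMIT_MEMLOCK soft limit.
     * The 16 MB value is illustrative only. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        rl.rlim_cur = 16 * 1024 * 1024;     /* desired soft limit */
        if (rl.rlim_cur > rl.rlim_max)
            rl.rlim_cur = rl.rlim_max;      /* cannot exceed the hard limit */

        if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        printf("RLIMIT_MEMLOCK soft limit is now %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }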

Related

register large buffer for RDMA in Linux kernel module

I'm a newbie experimenting with a project that uses RDMA (ib_verbs) in a kernel module. I took the example code from krping and am tinkering with it. The system runs 64-bit CentOS Linux with a custom 3.10 kernel that requires transparent huge pages to be disabled.
I want a large (4 GB and up) RDMA-readable/writable space that doesn't have to be contiguous, since I'll most likely read/write at most 1 MB at a time from the remote party (random access).
Question:
Should I just do a thousand 4 MB kmalloc calls and register the DMA regions? How bad is it, design-wise, to allocate a large chunk of memory with kmalloc instead of vmalloc? I've heard it shouldn't be done and that large allocations should only be obtained via vmalloc, but addresses from vmalloc are not usable for DMA.
If not, what would be a good alternative way to get a 4 GB buffer that the remote party can access randomly?
How does user-space RDMA manage this kind of buffer? I remember that I only had to malloc 4 GB of memory and call ibv_reg_mr, and it was ready to use.
As long as you're not registering a memory region that covers the entire physical memory (which isn't recommended for write-enabled MRs), you should use the IB_WR_REG_MR work request to register your memory region. For that, you would use the ib_map_mr_sg function, which accepts a scatterlist and a page size. So basically, you can register an MR that is built from chunks of a fixed size that you choose.
There's a tradeoff here: a small chunk size makes it easier for the kernel to find free memory on fragmented systems, but it can decrease performance, since it increases the load on the NIC's IOTLB.
User space handles large MR registration by calling get_user_pages and using the system's page size (normally 4 KB), though some drivers have optimizations that try to detect larger page sizes internally if the user-space memory happens to be aligned that way.
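For comparison, a minimal user-space sketch of the path the question remembers, using libibverbs (the protection domain pd is assumed to already exist, and the buffer size and access flags are illustrative):

    /* User-space sketch: allocate a big buffer and register it as an MR.
     * The verbs library pins the pages (get_user_pages) under the hood. */
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    struct ibv_mr *register_big_buffer(struct ibv_pd *pd, size_t len)
    {
        void *buf = malloc(len);                  /* e.g. len = 4ULL << 30 */
        if (!buf)
            return NULL;

        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr)
            free(buf);
        return mr;            /* mr->lkey / mr->rkey are what go on the wire */
    }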

Are processes "sandboxed" by hardware?

Can a process access all of the RAM or does the CPU give the process a specific part which the kernel decides, and the process (running in user space) can't change? In other words - is a process sandboxed by hardware, or can it do anything, but is monitored by the OS?
EDIT
I'm told in the comments that this is too broad, so let's assume x86/x64. I'll also add that the question arose while reading something I understood to say that processes can access all RAM, which seems to conflict with what I've read about security in OSes.
If you count MS-DOS as an "operating system", then processes can do anything (and aren't monitored). Even Windows 95 doesn't have real memory protection, and a buggy process can crash the machine by scribbling over the wrong memory.
If you only count modern OSes with privilege separation (Unix/Linux, Windows NT and derivatives), then processes are sandboxed.
AFAIK, there aren't really systems where there's monitoring of any kind other than "fault if you try to do something you're not allowed to". The kernel sets the boundaries, and the user-space process gets a fault if it tries to go outside them.
If you're imagining that maybe the kernel looks at what an unprivileged process does, and adapts accordingly, then no, that's not what happens.
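A tiny illustration of that "fault" behaviour (the address used is arbitrary and assumed to be unmapped): touching memory the kernel never gave you delivers SIGSEGV; nothing inspects or adapts to your behaviour, the access simply fails.

    /* Demonstration: dereferencing an unmapped address raises SIGSEGV. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        /* printf isn't strictly async-signal-safe; fine for a demo */
        printf("caught signal %d: the access faulted\n", sig);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);

        volatile char *p = (volatile char *)0xdeadbeef;  /* not mapped for us */
        char c = *p;                                     /* hardware faults here */

        printf("unexpectedly read %d\n", c);             /* never reached */
        return 0;
    }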
See
https://en.wikipedia.org/wiki/Memory_protection: Usually achieved by giving each process its own virtual address space (virtual memory). This is hardware-supported: every address your code uses is translated to a physical address by a fast translation cache (TLB), which caches the translation tables set up by the OS (aka page tables).
A process can't directly modify its own page tables: it has to ask the kernel to map more physical memory into its address space (e.g. as part of malloc()). So the kernel has a chance to verify that the request is ok before doing it.
Also, a process can ask the kernel to copy data to/from files (or other things) into its memory space. (write/read system calls).
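A small sketch of both requests (the file name is just an arbitrary readable file): the process asks the kernel for an anonymous page with mmap, then asks the kernel to copy file data into that page with read. At no point does the process touch page tables itself.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4096;

        /* Request one page of anonymous memory; the kernel picks physical
         * frames and updates this process's page tables. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Ask the kernel to copy file contents into our mapped page. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd >= 0) {
            ssize_t n = read(fd, buf, len - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("read %zd bytes: %s", n, buf);
            }
            close(fd);
        }

        munmap(buf, len);
        return 0;
    }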
https://en.wikipedia.org/wiki/User_space: normal processes run in user-mode, which is a mode provided by the hardware where privileged instructions will trap to the kernel.

How does kernel restrict processes to their own memory pool?

This is a purely academic question, not related to any specific OS.
We have an x86 CPU and main memory. This memory is essentially a pool of addressable memory units that can be read or written using their addresses with the CPU's MOV instruction (we can move data to and from this memory pool).
If our program is the kernel, we have full access to this whole memory pool. However, if our program is not running directly on the hardware, the kernel creates a "virtual" memory pool which lies somewhere inside the physical memory pool; our process treats it as if it were the physical memory pool and can write to it, read from it, or change its size, usually by calling something like sbrk or brk (on Linux).
My question is: how is this virtual pool implemented? I know I could read the whole Linux source code and maybe find it in a year, but I can also ask here :)
I suppose that one of these 3 potential solutions is being used:
Interpret the program's instructions (very inefficient and unlikely): the kernel would read the program's machine code and interpret each instruction individually, e.g. if it saw a request to access memory the process isn't allowed to touch, it wouldn't allow it.
Create some OS-level API that has to be used to read/write memory, and disallow access to raw memory, which is probably just as inefficient.
A hardware feature (probably the best, but I have no idea how it works): the kernel would say "dear CPU, I will now send you instructions from some unprivileged process; please restrict its memory accesses to the range 0x00ABC023 - 0xDEADBEEF", and the CPU wouldn't let the user process touch any memory outside the range approved by the kernel.
The reason I'm asking is to understand whether there is any overhead in running a program unprivileged under a kernel (let's not consider overhead caused by the multitasking the kernel itself implements) compared to running it natively on the CPU (with no OS), as well as the memory-access overhead caused by virtualization, which probably uses a similar technique.
You're on the right track when you mention a hardware feature. This feature is known as protected mode and was introduced to x86 by Intel with the 80286. It evolved and changed over time, and currently x86 has 4 modes.
Processors start running in real mode, and later privileged software (ring 0, your kernel for example) can switch between these modes.
The virtual addressing is implemented and enforced using the paging mechanism (How does x86 paging work?) supported by the processor.
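One way to see that paging layer from user space is /proc/self/pagemap, which exposes the per-page translations the kernel has set up. A sketch (note that since Linux 4.0 the physical frame number reads as 0 unless the reader has CAP_SYS_ADMIN):

    /* Sketch: look up one of our own virtual addresses in /proc/self/pagemap.
     * Each 8-byte entry describes one virtual page: bit 63 = present,
     * bits 0-54 = physical frame number (zeroed for unprivileged readers
     * on newer kernels). */
    #include <fcntl.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        char *buf = malloc(page_size);
        if (!buf)
            return 1;
        *(volatile char *)buf = 1;                 /* fault the page in */

        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        uint64_t entry = 0;
        off_t offset = ((uintptr_t)buf / page_size) * sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
            perror("pread");
            return 1;
        }
        close(fd);

        printf("virtual %p: present=%d pfn=0x%" PRIx64 "\n",
               (void *)buf, (int)((entry >> 63) & 1),
               entry & ((1ULL << 55) - 1));
        return 0;
    }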
On a normal system, memory protection is enforced at the MMU, or memory management unit, which is a hardware block that configurably maps virtual to physical addresses. Only the kernel is allowed to directly configure it, and operations which are illegal or go to unmapped pages raise exceptions to the kernel, which can then discipline the offending process or fetch the missing page from disk as appropriate.
A virtual machine typically uses CPU hardware features to trap and emulate privileged operations, or those which would interact too directly with hardware state, while allowing ordinary operations to run directly and thus with only a moderate overall speed penalty. If those features are unavailable, the whole thing must be emulated, which is indeed slow.

Can I check for disk access at runtime?

I'm running some very specialized experiments for a research project. These experiments call for controlling memory accesses: my application should not, under any circumstances, swap information with the disk. That is, all information the application needs must stay in RAM for the duration of the execution, but it should use as much RAM as possible.
My question is: is there any way I can control disk access by my application, or at least count disk accesses for later analysis?
This is using C and Linux.
Please let me know if I can clarify the question... been working on this for so long I think everybody knows exactly what I'm talking about.
One thing you can do is create a ramfs, or RAM file system. Are you working on a Unix platform? If so, you can check out mount and umount to see how to create one.
http://linux.die.net/man/8/mount
http://linux.die.net/man/8/umount
Basically, you create a file system stored in your RAM, so you no longer have to deal with disk read/write time. If I read your question correctly, you want to avoid disk access if you can. It's really quite simple to do, since you can have multiple file systems at once, some on a hard drive and some in memory.
http://www.cyberciti.biz/faq/howto-create-linux-ram-disk-filesystem/
http://www.alper.net/linuxunix/linux-ram-based-filesystem/
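If you'd rather do it from C, here is a hedged sketch using the mount(2) system call (the mount point, the 512 MB size, and the root/CAP_SYS_ADMIN requirement are assumptions; most people just use the mount command or fstab instead):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Mount a 512 MB tmpfs (RAM-backed) at an existing directory. */
        if (mount("tmpfs", "/mnt/ramdisk", "tmpfs", 0, "size=512m") != 0) {
            perror("mount");
            return 1;
        }
        printf("tmpfs mounted at /mnt/ramdisk\n");

        /* ... create and use files under /mnt/ramdisk ... */

        if (umount("/mnt/ramdisk") != 0)
            perror("umount");
        return 0;
    }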
Hope this all helped.
The mlock system call allows you to lock part or all of your process's virtual memory into RAM, preventing it from being written to swap space. Note that another process with root privileges can still access that memory area.
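A minimal sketch of locking everything with mlockall for this kind of experiment (the sizes are illustrative; the call needs either root or a sufficiently large RLIMIT_MEMLOCK):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all current and future pages of this process into RAM. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");        /* typically EPERM or ENOMEM */
            return 1;
        }

        /* From here on, mapped pages are pinned; allocations fail (or the
         * OOM killer acts) rather than being swapped out. */
        size_t len = 64 * 1024 * 1024;
        char *buf = malloc(len);
        if (!buf) {
            perror("malloc");
            return 1;
        }
        memset(buf, 0, len);           /* touch every page */

        /* ... run the experiment ... */

        munlockall();
        free(buf);
        return 0;
    }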

How to use more than 3 GB in a process on 32-bit PAE-enabled Linux app?

PAE (Physical Address Extension) was introduced in CPUs back in 1995, with the Pentium Pro. It allows a 32-bit processor to access 64 GB of memory instead of 4 GB. Linux kernels have offered support for it since 2.3.23. Assume I am booting one of these kernels and want to write an application in C that will access more than 3 GB of memory (why 3 GB? See this).
How would I go about accessing more than 3 GB of memory? Certainly, I could fork off multiple processes; each one would get access to 3 GB, and could communicate with each other. But that's not a realistic solution for most use cases. What other options are available?
Obviously, the best solution in most cases would be to simply boot in 64-bit mode, but my question is strictly about how to make use of physical memory above 4 GB in an application running on a PAE-enabled 32-bit kernel.
You don't, directly -- as long as you're running on 32-bit, each process will be subject to the VM split that the kernel was built with (2GB, 3GB, or if you have a patched kernel with the 4GB/4GB split, 4GB).
One of the simplest ways to have a process work with more data and still keep it in RAM is to create a shmfs and then put your data in files on that fs, accessing them with the ordinary seek/read/write primitives, or mapping them into memory one at a time with mmap (which is basically equivalent to doing your own paging). But whatever you do it's going to take more work than using the first 3GB.
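A hedged sketch of that approach (the shared-memory object name, the 1 GB window and the 6 GB total are made up; on 32-bit, build with -D_FILE_OFFSET_BITS=64 so off_t is 64 bits, and link with -lrt on older glibc):

    /* Sketch of "doing your own paging": keep a big object on a RAM-backed
     * fs (POSIX shm here) and map one window of it at a time. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define WINDOW (1UL << 30)            /* map a 1 GB view at a time */
    #define TOTAL  (6ULL * (1ULL << 30))  /* 6 GB of data overall */

    int main(void)
    {
        int fd = shm_open("/bigdata", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, TOTAL) != 0) {
            perror("shm_open/ftruncate");
            return 1;
        }

        for (off_t off = 0; off < (off_t)TOTAL; off += WINDOW) {
            char *win = mmap(NULL, WINDOW, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, off);
            if (win == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            win[0] = 1;              /* ... work on this 1 GB slice ... */
            munmap(win, WINDOW);     /* release address space, data stays */
        }

        close(fd);
        shm_unlink("/bigdata");
        return 0;
    }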
Or you could fire up as many instances of memcached as needed until all physical memory is mapped. Each memcached instance can make 3 GiB available on a 32-bit machine.
Then access the memory in chunks via the APIs and language bindings for memcached. Depending on the application, it might be almost as fast as working on a 64-bit platform directly. For some applications you get the added benefit of creating a scalable program. Not many motherboards handle more than 64 GiB of RAM, but with memcached you have easy access to as much RAM as you can pay for.
Edited to note that this approach of course works on Windows too, or on any platform that can run memcached.
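A hedged sketch of that access pattern using libmemcached (the server address, key name and chunk size are assumptions; link with -lmemcached, and remember memcached's default item size cap is 1 MB, raised with the daemon's -I option):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libmemcached/memcached.h>

    int main(void)
    {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "127.0.0.1", 11211);

        /* Store one 512 KB chunk under its own key; large data sets are
         * split across many such keys (and many memcached instances). */
        size_t chunk_len = 512 * 1024;
        char *chunk = calloc(1, chunk_len);
        memcached_return_t rc = memcached_set(memc, "chunk:0", strlen("chunk:0"),
                                              chunk, chunk_len, 0, 0);
        if (rc != MEMCACHED_SUCCESS)
            fprintf(stderr, "set failed: %s\n", memcached_strerror(memc, rc));

        /* Fetch the chunk back when needed. */
        size_t got_len;
        uint32_t flags;
        char *got = memcached_get(memc, "chunk:0", strlen("chunk:0"),
                                  &got_len, &flags, &rc);
        if (got) {
            printf("fetched %zu bytes\n", got_len);
            free(got);
        }

        free(chunk);
        memcached_free(memc);
        return 0;
    }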
PAE is an extension of the hardware's address bus, and some page table modifications to handle that. It doesn't change the fact that a pointer is still 32 bits, limiting you to 4G of address space in a single process. Honestly, in the modern world the proper way to write an application that needs more than 2G (windows) or 3G (linux) of address space is to simply target a 64 bit platform.
On Unix, one way to access that more-than-32-bit-addressable memory from user space is to use mmap/munmap when you want to access a subset of the memory that you aren't currently using. It's kind of like paging manually. Another (easier) way is to implicitly use the memory by working on different subsets of it in multiple processes (if you have a multi-process architecture for your code).
The mmap method is essentially the same trick Commodore 128 programmers used for bank switching. In these post-Commodore-64 days, with 64-bit support so readily available, there aren't many good reasons to even think about it ;)
I had fun deleting all the hideous PAE code from our product a number of years ago.
You can't have pointers pointing to > 4G of address space, so you'd have to do a lot of tricks.
It should be possible to switch a block of address space between different physical pages by using mmap to map bits of a large file; you can change the mapping at any time by another call to mmap to change the offset into the file (in multiples of the OS page size).
However this is a really nasty technique and should be avoided. What are you planning on using the memory for? Surely there is an easier way?
Obviously, the best solution in most cases would be to simply boot in 64-bit mode, but my question is strictly about how to make use of physical memory above 4 GB in an application running on a PAE-enabled 32-bit kernel.
There's nothing special you need to do. Only the kernel needs to address physical memory, and with PAE, it knows how to address physical memory above 4 GB. The application will use memory above 4 GB automatically and with no issues.
