Register large buffer for RDMA in Linux kernel module

I'm a newbie experimenting with a project that uses RDMA (ib_verbs) in a kernel module. I took the example code from krping and am tinkering with it. The system runs 64-bit CentOS Linux with a custom 3.10 kernel that requires transparent huge pages to be disabled.
I want a large (4 GB and up) RDMA-read/writeable space that doesn't have to be contiguous, since I'll most likely read/write at most 1 MB at a time from the remote party (random access).
Question:
Should I just do a thousand 4 MB kmallocs and register the DMA region? How bad is it, design-wise, to allocate a large chunk of memory with kmalloc instead of vmalloc? I've heard it shouldn't be done and that large memory should only be obtained via vmalloc, but addresses from vmalloc are not usable for DMA.
If not, what would be a good alternative way to get a 4 GB buffer that the remote party can access randomly?
How does user-space RDMA manage this kind of buffer? I remember that I only had to malloc 4 GB of memory and call ibv_reg_mr, and it was ready to use.

As long as you're not registering a memory region that covers the entire physical memory (which isn't recommended for write-enabled MRs), you should use the IB_WR_REG_MR work request to register your memory region. For that, you use the ib_map_mr_sg function, which accepts a scatterlist and a page size. So basically, you can register an MR built from chunks of a fixed size of your choosing.
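For concreteness, here is a minimal sketch of that flow, not tested code: it assumes a kernel new enough to have ib_map_mr_sg (v4.4+; the sg_offset argument shown was added in v4.7), and that pd, qp and ibdev come from your krping-style connection setup. CHUNK_SIZE, NCHUNKS and the error handling are illustrative:

#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

#define CHUNK_SIZE (4UL << 20)   /* 4 MB per chunk       */
#define NCHUNKS    1024          /* 1024 x 4 MB = 4 GB   */

static int reg_chunked_mr(struct ib_device *ibdev, struct ib_pd *pd,
			  struct ib_qp *qp)
{
	struct ib_reg_wr reg_wr = {};
	struct ib_send_wr *bad_wr;     /* const-qualified on v4.19+ */
	struct scatterlist *sgl;
	struct ib_mr *mr;
	int i, nents, npages;

	sgl = kcalloc(NCHUNKS, sizeof(*sgl), GFP_KERNEL);
	if (!sgl)
		return -ENOMEM;
	sg_init_table(sgl, NCHUNKS);

	for (i = 0; i < NCHUNKS; i++) {
		struct page *pg = alloc_pages(GFP_KERNEL,
					      get_order(CHUNK_SIZE));
		if (!pg)
			goto err;      /* real code must free earlier chunks */
		sg_set_page(&sgl[i], pg, CHUNK_SIZE, 0);
	}

	/* Make the chunks visible to the device. */
	nents = ib_dma_map_sg(ibdev, sgl, NCHUNKS, DMA_BIDIRECTIONAL);

	/* The device caps entries per MR (max_fast_reg_page_list_len),
	 * so a real 4 GB buffer may have to be split over several MRs. */
	mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, NCHUNKS);
	if (IS_ERR(mr))
		goto err;
	npages = ib_map_mr_sg(mr, sgl, nents, NULL, CHUNK_SIZE);
	if (npages < nents)
		goto err;

	/* Post the registration work request on the QP. */
	reg_wr.wr.opcode = IB_WR_REG_MR;
	reg_wr.mr        = mr;
	reg_wr.key       = mr->rkey;
	reg_wr.access    = IB_ACCESS_LOCAL_WRITE |
			   IB_ACCESS_REMOTE_READ |
			   IB_ACCESS_REMOTE_WRITE;
	return ib_post_send(qp, &reg_wr.wr, &bad_wr);
err:
	kfree(sgl);
	return -ENOMEM;
}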
There's a tradeoff here: a small chunk size makes it easier for the kernel to find free memory on a fragmented system, but it can hurt performance, since it increases the load on the NIC's IOTLB.
User-space handles large MR registration by calling get_user_pages and using the system's page size (normally 4 KB), though some drivers have optimizations that detect larger page sizes internally if the user-space memory happens to be aligned that way.
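For comparison, the user-space flow you remember is a single verbs call over plain malloc'd memory; a sketch, where pd is assumed to come from ibv_alloc_pd:

#include <infiniband/verbs.h>
#include <stdlib.h>

size_t len = 4UL << 30;                     /* 4 GB */
void *buf = malloc(len);                    /* ordinary virtual memory */
struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                               IBV_ACCESS_LOCAL_WRITE |
                               IBV_ACCESS_REMOTE_READ |
                               IBV_ACCESS_REMOTE_WRITE);
/* the kernel pins the pages (get_user_pages) during this call */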

Related

How does the kernel restrict processes to their own memory pool?

This is a purely academic question, not related to any specific OS.
We have an x86 CPU and main memory. This memory is essentially a pool of addressable units that the CPU can read or write by address using its MOV instruction (we can move data from/to this memory pool).
Given that our program is the kernel, we have full access to this whole memory pool. However, if our program is not running directly on the hardware, the kernel creates a "virtual" memory pool that lies somewhere inside the physical one; our process treats it as if it were the physical memory pool and can write to it, read from it, or change its size, usually by calling something like brk or sbrk (on Linux).
My question is: how is this virtual pool implemented? I know I could read the whole Linux source code and maybe find it in a year, but I can also ask here :)
I suppose that one of these 3 potential solutions is being used:
Interpret the program's instructions (very inefficient and unlikely): the kernel would read the program's byte code and interpret each instruction individually, e.g. if it saw a request to access memory the process isn't allowed to touch, it wouldn't allow it.
Provide some OS-level API that must be used to read/write memory and disallow access to raw memory, which is probably just as inefficient.
A hardware feature (probably best, but I have no idea how it works): the kernel would say "dear CPU, I'm now going to run instructions from some unprivileged process; please restrict its memory accesses to the range 0x00ABC023 - 0xDEADBEEF", and the CPU wouldn't let the user process do anything outside the range approved by the kernel.
The reason I'm asking is to understand whether there is any overhead in running a program unprivileged under a kernel (leaving aside overhead from the kernel's own multitasking) versus running it natively on the CPU (with no OS), as well as the memory-access overhead caused by machine virtualization, which probably uses a similar technique.
You're on the right track when you mention a hardware feature. This feature is known as protected mode and was introduced to x86 by Intel with the 80286. It has evolved and changed over time, and currently x86 has four operating modes.
Processors start running in real mode, and privileged software (ring 0, your kernel for example) can later switch between these modes.
The virtual addressing is implemented and enforced using the paging mechanism (How does x86 paging work?) supported by the processor.
On a normal system, memory protection is enforced at the MMU, or memory management unit, which is a hardware block that configurably maps virtual to physical addresses. Only the kernel is allowed to directly configure it, and operations which are illegal or go to unmapped pages raise exceptions to the kernel, which can then discipline the offending process or fetch the missing page from disk as appropriate.
A virtual machine typically uses CPU hardware features to trap and emulate privileged operations, or ones that would interact too directly with hardware state, while allowing ordinary instructions to run natively, and thus with only a modest overall speed penalty. Where those features are unavailable, the whole thing must be emulated, which is indeed slow.

Large PnP driver buffer

I'm developing a kernel PnP driver to map my FPGA. I need four 32 MB buffers of contiguous memory, as I use a non-scatter/gather DMA. Right now I have a problem allocating them with WdfCommonBufferCreate: sometimes it works, sometimes it doesn't. I don't understand why the allocation fails sporadically, since the device memory and devices do not change.
Is there a way to ensure my buffer will be created? What can cause the sporadic fail?
I also thought of removing 128 MB from Windows with bcdedit and using the space left for my buffers. I have no problem doing that, since the driver is for a specific platform in a controlled environment, but I did not find a way to determine the memory size with the Windows Driver API.
Is there a way to know the size of the actual memory? Can I actually use the memory left over, and if so, how?
Thanks for your help
That's a lot of contiguous memory. The Windows Driver Framework can break large DMA transactions into sizes your driver can handle if you tell it the maximum number of scatter/gather descriptors you support through WdfDmaEnablerSetMaximumScatterGatherElements. Just use a small fixed number of scatter/gather elements.
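A sketch of what that setup might look like; the profile and length values are illustrative, and device is assumed to be the WDFDEVICE from EvtDriverDeviceAdd:

WDF_DMA_ENABLER_CONFIG dmaConfig;
WDFDMAENABLER dmaEnabler;
NTSTATUS status;

WDF_DMA_ENABLER_CONFIG_INIT(&dmaConfig,
                            WdfDmaProfileScatterGather64,
                            32 * 1024 * 1024);   /* max transfer length */
status = WdfDmaEnablerCreate(device, &dmaConfig,
                             WDF_NO_OBJECT_ATTRIBUTES, &dmaEnabler);
if (NT_SUCCESS(status)) {
    /* a non-SG engine can only take one contiguous fragment at a time,
     * so cap the framework at a single element per transfer */
    WdfDmaEnablerSetMaximumScatterGatherElements(dmaEnabler, 1);
}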

Is cudaHostRegister equivalent to mlock() system call?

Pinned or page-locked memory is transferred to GPUs faster than unlocked memory.
CUDA provides the cudaHostAlloc and cudaHostRegister calls to allocate or register page-locked memory. On a memory transfer, the NVIDIA driver checks whether the host memory is locked and chooses the corresponding copy code path.
Is it possible to page-lock memory with the mlock() system call and achieve exactly the same effect (with regard to transfer speeds) as cudaHostRegister? Or does the corresponding CUDA call update an internal database that the driver queries?
I think the NVIDIA driver maintains its own page-locked memory, accessible via cudaHostAlloc etc. The mlock system call uses the kernel's locking, which is technically equivalent to what the driver does, but kernel page-locking is constrained by a resource limit (RLIMIT_MEMLOCK) that is typically very small. That is why the NVIDIA driver uses its own page-locking mechanism, and why the docs warn about excessive usage, since it takes a lot of memory away from the rest of the kernel.
So cudaHostRegister is equivalent to mlock() in the sense that it page-locks memory, but not in the sense of being bound by the same resource limits, and not in the sense that cudaMemcpy is accelerated.
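To see the limit in question, a small user-space check (the 1 MB size is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    size_t len = 1 << 20;          /* try to lock 1 MB */
    void *buf = malloc(len);

    getrlimit(RLIMIT_MEMLOCK, &rl);
    printf("RLIMIT_MEMLOCK soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);   /* often just 64 KB */

    if (mlock(buf, len) != 0)      /* fails with ENOMEM over the limit */
        perror("mlock");
    return 0;
}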
They are not equivalent. cuMemHostRegister() page-locks the memory, but also maps it into the page tables of the GPU (or, if portable, GPUs) so the GPU can access it directly. If you page-lock memory without mapping it for the GPU, it looks to the GPU just like any other memory.
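A sketch of the register-then-copy flow; the 64 MB size is illustrative and error checking is elided:

#include <cuda_runtime.h>
#include <stdlib.h>

int main(void)
{
    size_t len = 64 << 20;               /* 64 MB, illustrative */
    void *host = malloc(len);
    void *dev;

    cudaMalloc(&dev, len);
    /* page-locks AND maps the range for the GPU, unlike plain mlock() */
    cudaHostRegister(host, len, cudaHostRegisterDefault);

    /* this copy now takes the fast pinned-memory path */
    cudaMemcpy(dev, host, len, cudaMemcpyHostToDevice);

    cudaHostUnregister(host);            /* undo before freeing */
    cudaFree(dev);
    free(host);
    return 0;
}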

Virtual Memory allocation without Physical Memory allocation

I'm working on a Linux kernel project and I need to find a way to allocate virtual memory without allocating physical memory. For example, if I use this:
char* buffer = my_virtual_mem_malloc(sizeof(char) * 512);
my_virtual_mem_malloc is a new syscall implemented by my kernel module. All data written to this buffer is stored in a file or on another server via a socket (not in physical memory). To accomplish this, I need to reserve virtual memory and get access to the vm_area_struct in order to redefine its vm_ops.
Do you have any ideas about this ?
thx
This is not architecturally possible. You can create vm areas that have a writeback routine that copies data somewhere, but at some level, you must allocate physical pages to be written to.
If you're okay with that, you can simply write a FUSE driver, mount it somewhere, and mmap a file from it. If you're not, then you'll have to just write(), because redirecting writes without allocating a physical page at all is not supported by the x86, at the very least.
There are a few approaches to this problem, but most of them require you to first write to an intermediate memory.
Network File System (NFS)
The easiest approach is simply to have the server expose some sort of shared file system such as NFS and to use mmap() to map a remote file to a memory address. Writing to that address then actually writes to the OS's page cache, which is eventually written back to the remote file when the page cache is full or after a predefined system timeout.
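A sketch of that mapping, with a hypothetical mount point /mnt/remote and illustrative sizes:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1UL << 30;         /* 1 GB window, illustrative */
    int fd = open("/mnt/remote/backing.bin", O_RDWR);

    ftruncate(fd, len);             /* file must cover the mapping */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    p[0] = 42;                      /* dirties the local page cache      */
    msync(p, len, MS_SYNC);         /* force writeback to the NFS server */
    munmap(p, len);
    close(fd);
    return 0;
}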
Distributed Shared Memory (DSM)
An alternative approach is using DSM with a very small cache size.
In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as one logically shared address space.
[...] Software DSM systems can be implemented in an operating system, or as a programming library and can be thought of as extensions of the underlying virtual memory architecture. When implemented in the operating system, such systems are transparent to the developer; which means that the underlying distributed memory is completely hidden from the users.
It means that each virtual address is logically mapped to a virtual address on a remote machine and writing to it will do the following: (a) receive the page from the remote machine and gain exclusive access. (b) update the page data. (c) release the page and send it back to the remote machine when it reads it again.
In a typical DSM implementation, (c) only happens when the remote machine reads the data again, but you might start from an existing DSM implementation and change the behavior so that the data is sent once the local machine's page cache is full.
I/O MMU
[...] the IOMMU maps device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses.
This basically means writing directly to the network device's buffer, which amounts to implementing an alternative driver for that device. It is the only approach that avoids intermediate memory entirely, but it is also the most complicated, and it is definitely not recommended unless the system has hard real-time requirements.

How to use more than 3 GB in a process on 32-bit PAE-enabled Linux app?

PAE (Physical Address Extension) was introduced in Intel CPUs with the Pentium Pro in 1995. It allows a 32-bit processor to access 64 GB of memory instead of 4 GB. Linux kernels offer support for this starting with 2.3.23. Assume I am booting one of these kernels and want to write an application in C that will access more than 3 GB of memory (why 3 GB? See this).
How would I go about accessing more than 3 GB of memory? Certainly, I could fork off multiple processes; each one would get access to 3 GB, and they could communicate with each other. But that's not a realistic solution for most use cases. What other options are available?
Obviously, the best solution in most cases would be to simply boot in 64-bit mode, but my question is strictly about how to make use of physical memory above 4 GB in an application running on a PAE-enabled 32-bit kernel.
You don't, directly -- as long as you're running on 32-bit, each process will be subject to the VM split that the kernel was built with (2GB, 3GB, or if you have a patched kernel with the 4GB/4GB split, 4GB).
One of the simplest ways to have a process work with more data and still keep it in RAM is to create a shmfs and then put your data in files on that fs, accessing them with the ordinary seek/read/write primitives, or mapping them into memory one at a time with mmap (which is basically equivalent to doing your own paging). But whatever you do it's going to take more work than using the first 3GB.
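A sketch of the mmap variant; the /dev/shm path and sizes are illustrative, and a 32-bit build needs 64-bit file offsets to seek past 4 GB:

/* Compile with -D_FILE_OFFSET_BITS=64 so off_t can exceed 4 GB. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define TOTAL  (6ULL << 30)     /* 6 GB backing file, illustrative */
#define WINDOW (1UL  << 30)     /* map 1 GB of it at a time        */

int main(void)
{
    int fd = open("/dev/shm/bigbuf", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, TOTAL);

    off_t offset = 5ULL << 30;              /* any page-aligned slice   */
    char *win = mmap(NULL, WINDOW, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, offset);
    win[0] = 1;                             /* use the window...        */
    munmap(win, WINDOW);                    /* ...then remap elsewhere  */
    close(fd);
    return 0;
}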
Or you could fire up as many instances of memcached as needed until all physical memory is mapped. Each memcached instance could make 3 GiB available on a 32-bit machine.
Then access memory in chunks via the APIs and language bindings for memcached. Depending on the application, it might be almost as fast as working on a 64-bit platform directly. For some applications you get the added benefit of creating a scalable program. Not many motherboards handle more than 64GiB RAM but with memcached you have easy access to as much RAM as you can pay for.
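A sketch of such chunked access with libmemcached; the key scheme and chunk size are invented here (memcached's default item limit is 1 MB, so chunks should stay a bit under that):

#include <libmemcached/memcached.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char chunk[512 * 1024];   /* one chunk, under the 1 MB item cap */

int main(void)
{
    memcached_st *memc = memcached_create(NULL);
    memcached_server_add(memc, "localhost", 11211);

    char key[32];
    snprintf(key, sizeof(key), "buf:%d", 42);   /* invented key scheme */
    memcached_set(memc, key, strlen(key), chunk, sizeof(chunk),
                  (time_t)0, (uint32_t)0);

    size_t vlen; uint32_t flags; memcached_return_t rc;
    char *back = memcached_get(memc, key, strlen(key), &vlen, &flags, &rc);
    if (back)
        free(back);              /* caller frees the fetched value */
    memcached_free(memc);
    return 0;
}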
Edited to note that this approach of course works on Windows too, or on any platform that can run memcached.
PAE is an extension of the hardware's address bus, and some page table modifications to handle that. It doesn't change the fact that a pointer is still 32 bits, limiting you to 4G of address space in a single process. Honestly, in the modern world the proper way to write an application that needs more than 2G (windows) or 3G (linux) of address space is to simply target a 64 bit platform.
On Unix, one way to access that more-than-32-bit-addressable memory from user space is to use mmap/munmap to map in the subset of memory you currently need, kind of like doing your own paging. Another (easier) way is to use the memory implicitly by giving different subsets of it to multiple processes (if your code has a multi-process architecture).
The mmap method is essentially the same trick Commodore 128 programmers used for bank switching. In these post-Commodore 64 days, with 64-bit support so readily available, there aren't many good reasons to even think about it ;)
I had fun deleting all the hideous PAE code from our product a number of years ago.
You can't have pointers pointing to > 4G of address space, so you'd have to do a lot of tricks.
It should be possible to switch a block of address space between different physical pages by using mmap to map bits of a large file; you can change the mapping at any time by another call to mmap to change the offset into the file (in multiples of the OS page size).
However this is a really nasty technique and should be avoided. What are you planning on using the memory for? Surely there is an easier way?
Obviously, the best solution in most cases would be to simply boot in 64-bit mode, but my question is strictly about how to make use of physical memory above 4 GB in an application running on a PAE-enabled 32-bit kernel.
There's nothing special you need to do. Only the kernel needs to address physical memory, and with PAE, it knows how to address physical memory above 4 GB. The application will use memory above 4 GB automatically and with no issues.
