create_module - why is copy_from_user used?

I'm reading LDD3. In chapter 8, I could not understand this paragraph:
An example of a function in the kernel that uses vmalloc is the create_module system
call, which uses vmalloc to get space for the module being created. Code and data of
the module are later copied to the allocated space using copy_from_user. In this way,
the module appears to be loaded into contiguous memory.
Why is copy_from_user used? Aren't we in kernel space only?

Recall that kernel modules are loaded by the insmod (or modprobe) command, which runs in user space. These commands read the kernel module from disk into their own user-space memory, then pass it to the kernel, which must use copy_from_user() to copy it into kernel memory.
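For illustration, here is a minimal sketch of the pattern, not the actual create_module code (load_module_image is a made-up name): a user-space buffer holding the module image, pulled into vmalloc'ed kernel memory with copy_from_user.

    /* Sketch only: receive a user-space buffer (the module image read
     * by insmod) and copy it into virtually contiguous kernel memory. */
    #include <linux/vmalloc.h>
    #include <linux/uaccess.h>
    #include <linux/err.h>

    static void *load_module_image(const void __user *umod, unsigned long len)
    {
        void *image = vmalloc(len);        /* virtually contiguous space */

        if (!image)
            return ERR_PTR(-ENOMEM);

        /* umod is a user pointer; it must not be dereferenced directly */
        if (copy_from_user(image, umod, len)) {
            vfree(image);
            return ERR_PTR(-EFAULT);
        }
        return image;
    }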

Related

Linux: Mapping debugee memory into debugger memory space

Basically, I want to avoid system calls for reading from and writing to the debuggee's memory space. I only want to map a single mapping from /proc/pid/maps. I tried just mmap()ing /proc/pid/mem, but it turns out procfs doesn't support mmap.

About memory allocation, does C malloc/calloc depend on Linux mmap/malloc or the opposite?

As far as I know, C has functions such as malloc, calloc, and realloc to allocate memory. The Linux kernel also has functions to allocate memory, e.g. mmap, kmalloc, and vmalloc.
I want to know which is the lowest-level function.
If you say, "The Linux kernel is the lowest level; your C program must allocate memory through the Linux kernel", then how does the Linux kernel allocate its own memory?
And if the Linux kernel is the lowest level, then when I write a C program and run it on a Linux system, should it allocate memory through a system call?
Hope to have an answer.
I want to know which is the lowest-level function.
The user-level malloc function obtains memory from the kernel with the brk or mmap system calls (depending on the library used and on the Linux version).
... how does the Linux kernel allocate its own memory?
On a system without an MMU this is easy:
Let's say we have a system with 8 MB of RAM, and we know that the RAM occupies addresses 0x20000000 to 0x20800000.
The kernel keeps an array recording which pages are in use. Let's say the size of a "page" is 0x1000 (this is the page size on x86 systems with an MMU).
In old Linux versions the array was named mem_map. Each element in the array corresponds to one memory "page". It is zero if the page is free.
When the system is started, the kernel itself initializes this array (writes the initial values in the array).
If the kernel needs one page of memory, it searches for an element in the array whose value is 0. Let's say mem_map[0x123] is 0. The kernel now sets mem_map[0x123]=1;. mem_map[0x123] corresponds to the address 0x20123000, so the kernel "allocated" some memory at address 0x20123000.
If the kernel wants to "free" some memory at address 0x20234000, it simply sets mem_map[0x234]=0;.
On a system with MMU, it is a bit more complicated but the principle is the same.
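A toy version of that scheme might look like this (a sketch, not actual kernel code; the constants match the 8 MB example above):

    /* Toy page allocator in the spirit of the old mem_map array.
     * 8 MB of RAM at 0x20000000, 0x1000-byte pages => 0x800 pages. */
    #define RAM_BASE   0x20000000UL
    #define PAGE_SZ    0x1000UL
    #define NUM_PAGES  0x800           /* 8 MB / 4 KB */

    static unsigned char mem_map[NUM_PAGES];   /* 0 = free, 1 = in use */

    static void *alloc_page_toy(void)
    {
        for (unsigned long i = 0; i < NUM_PAGES; i++) {
            if (mem_map[i] == 0) {
                mem_map[i] = 1;
                return (void *)(RAM_BASE + i * PAGE_SZ);
            }
        }
        return 0;   /* out of memory */
    }

    static void free_page_toy(void *addr)
    {
        mem_map[((unsigned long)addr - RAM_BASE) / PAGE_SZ] = 0;
    }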
On Linux, the C functions malloc, calloc, and realloc used by user-mode programs are implemented in the C library and manage pages of memory mapped into the process address space using the mmap system call. mmap associates pages of virtual memory with addresses in the process address space. When the process then accesses these addresses, actual RAM is mapped by the kernel to this virtual space. Not every call to malloc maps memory pages, only those for which not enough space was already requested from the system.
In kernel space, a similar process takes place, but the caller can require that the RAM be mapped immediately.
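For example, this is roughly what an allocator does for a large block: ask the kernel directly for anonymous pages (a small self-contained demonstration using standard POSIX calls):

    /* Allocate a page-aligned block directly from the kernel with mmap,
     * the way malloc typically satisfies large requests. */
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 1 << 20;   /* 1 MB */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Pages are mapped lazily: physical RAM is committed on first touch. */
        ((char *)p)[0] = 42;
        munmap(p, len);
        return 0;
    }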

Mapping Reserved High Memory to User Space via remap_pfn_range

Arch=x86_64
I am working through a DMA solution following the process outlined in this question,
Direct Memory Access in Linux
My call to ioremap successfully returns with an address, pt.
In my call to remap_pfn_range I use virt_to_phys(pt) >> PAGE_SHIFT to specify the pfn of the area returned by the ioremap call.
When the userspace application using mmap executes and the call to remap_pfn_range is made, the machine crashes. I assume the mapping is off and I am forcing the system to use memory that is already allocated (the screen glitches before exit), but I'm not clear on where the mismatch is occurring. The system has 4 GB of RAM, and I reserved 2 GB by using the kernel boot option mem=2048M.
I use BUFFER_SIZE=1024u*1024u*1024u and BUFFER_OFFSET=2u*1024u*1024u*1024u.
Putting these into pt = ioremap(BUFFER_SIZE, BUFFER_OFFSET), I believe pt should be a virtual address for the physical memory from the 2 GB boundary up to the 3 GB boundary. Is this assumption accurate?
When I execute my kernel module but change remap_pfn_range to use vma->vm_pgoff >> PAGE_SHIFT as the target pfn, the code executes with no error and I can read and write the memory. However, this is not using the reserved physical memory that I intended.
Since everything works when using vma->vm_pgoff >> PAGE_SHIFT, I believe the culprit lies between my ioremap and the remap_pfn_range call.
Thanks for any suggestions!
The motivation behind the use of this kernel module is the need for large contiguous buffers for DMA from a PCI device. In this application, recompiling the kernel isn't an option so I'm trying to accomplish it with a module + hardware.
My call to ioremap successfully returns with an address, pt. In my call to remap_pfn_range I use virt_to_phys(pt) >> PAGE_SHIFT to specify the pfn of the area returned by the ioremap call.
This is illegal, because ioremap reserves a virtual region in the vmalloc area. virt_to_phys() is valid only for the linearly mapped (direct-mapped) part of kernel memory.
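In other words, since you already know the physical address of the reserved region, pass it to remap_pfn_range directly instead of going through virt_to_phys(pt). A minimal sketch of the mmap handler (my_mmap is a placeholder name; BUFFER_OFFSET and BUFFER_SIZE are the values from the question):

    /* Sketch: map the reserved physical region to user space without
     * virt_to_phys(). BUFFER_OFFSET is the physical start (the 2 GB mark). */
    #include <linux/fs.h>
    #include <linux/mm.h>

    static int my_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > BUFFER_SIZE)
            return -EINVAL;

        return remap_pfn_range(vma, vma->vm_start,
                               BUFFER_OFFSET >> PAGE_SHIFT, /* pfn, not virt */
                               size, vma->vm_page_prot);
    }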
Putting these into pt = ioremap(BUFFER_SIZE, BUFFER_OFFSET), I believe pt should be a virtual address for the physical memory from the 2 GB boundary up to the 3 GB boundary. Is this assumption accurate?
That is not exactly true. For example, on my machine:
cat /proc/iomem
...
00001000-0009ebff : System RAM
...
00100000-1fffffff : System RAM
...
There may be several memory banks, and memory does not necessarily start at address 0x0 of the physical address space.
This might be useful for you: the Dynamic DMA Mapping Guide.

mmap() in linux kernel to access unmapped memory

I am trying to use the mmap() functionality provided by the Linux kernel.
When we call mmap() in user space, we map a virtual memory area of the user-space process onto memory in kernel space.
The mmap() handler is defined in my kernel module, which allocates some memory in pages and maps it during the mmap system call. The contents of this kernel-space memory can be filled in by the module.
My question: after the mapping, the user-space process can access the mapped memory directly, without extra kernel overhead such as a read() system call. But if that memory (allocated and mapped in kernel space) contains pointers to other, unmapped memory allocated inside kernel space, can the user-space process follow those pointers and access the unmapped memory?
No, userspace can't chase pointers in mapped memory that point to unmapped kernel memory.
No, a user-space process cannot access the unmapped memory. The kernel won't allow you to access that memory.
You are able to access only the portion of memory that is mapped via mmap.
I think you can use the remap_pfn_range function explicitly to remap the region.
From the Linux mmap man page:
The effect of changing the size of the underlying file of a mapping on the pages that correspond to added or removed regions of the file is unspecified.
No, you can't.
However, if your purpose is to change your mmapped area on the fly, here are some options:
A. In user space, you can use mremap, which expands (or shrinks) an existing memory mapping.
B. In kernel space, in your driver, you need to implement the nopage() method (the fault() callback in modern kernels) or use remap_pfn_range. But remap_pfn_range has a limitation: Linux only lets you remap reserved pages, so you can't remap a normal address, such as one allocated by get_free_page(); see the sketch below.
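A sketch of option B using the fault() callback (the successor of nopage(); exact signatures vary across kernel versions). my_buf and my_buf_size are placeholders, assumed to be set up elsewhere, e.g. my_buf = vmalloc(my_buf_size):

    /* Sketch: map pages of a vmalloc'ed buffer into user space on demand. */
    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    static char *my_buf;            /* vmalloc'ed backing buffer */
    static size_t my_buf_size;

    static vm_fault_t my_fault(struct vm_fault *vmf)
    {
        unsigned long offset = vmf->pgoff << PAGE_SHIFT;
        struct page *page;

        if (offset >= my_buf_size)
            return VM_FAULT_SIGBUS;

        page = vmalloc_to_page(my_buf + offset);
        get_page(page);             /* take a reference for the mapping */
        vmf->page = page;           /* core kernel inserts this page */
        return 0;
    }

    static const struct vm_operations_struct my_vm_ops = {
        .fault = my_fault,
    };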

Do shared libraries use the same heap as the application?

Say I have an application on Linux that uses shared libraries (.so files). My question is whether the code in those libraries will allocate memory in the same heap as the main application, or whether they use their own heap.
So, for example, if some function in the .so file calls malloc, would it use the same heap manager as the application or another one? Also, what about the global data in those shared libraries? Where does it lie? I know for the application it lies in the bss and data segments, but I don't know where it is for those shared object files.
My question is whether the code in those libraries will allocate memory in the same heap as the main application, or whether they use their own heap.
If the library uses the same malloc/free as the application (e.g. from glibc), then yes, the program and all libraries will use a single heap.
If the library uses mmap directly, it can allocate memory that is not part of the memory used by the program itself.
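For instance, a library could take its memory straight from the kernel, independent of the program's heap (a sketch; lib_alloc is a hypothetical name):

    /* Sketch: a library-private allocator that bypasses the program's
     * malloc heap by requesting anonymous pages directly via mmap. */
    #include <stddef.h>
    #include <sys/mman.h>

    void *lib_alloc(size_t len)     /* hypothetical library function */
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }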
So, for example, if some function in the .so file calls malloc, would it use the same heap manager as the application or another one?
If a function from a .so calls malloc, it is the same malloc as the one called from the program. You can see the symbol-binding log on Linux/glibc (>2.1) with
LD_DEBUG=bindings ./your_program
Yes, several instances of heap managers (with the default configuration) can't co-exist without knowing about each other (the problem is keeping the brk-allocated heap size synchronized between instances). But a configuration is possible in which several instances can co-exist.
Most classic malloc implementations (ptmalloc*, dlmalloc, etc.) can use two methods of getting memory from the system: brk and mmap. brk is the classic heap, which is linear and can grow or shrink. mmap allows getting a lot of memory anywhere, and you can return this memory to the system (free it) in any order.
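You can watch the classic linear heap move with sbrk, which adjusts the program break (a demonstration only; real programs should not mix raw sbrk with malloc):

    /* Demonstration: grow and shrink the classic brk heap. */
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        void *start = sbrk(0);      /* current program break */
        sbrk(4096);                 /* grow the heap by one page */
        void *grown = sbrk(0);
        printf("break moved from %p to %p\n", start, grown);
        sbrk(-4096);                /* shrink it back */
        return 0;
    }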
When malloc is built, the brk method can be disabled. Then malloc will emulate a linear heap using only mmap, or even disable the classic linear heap entirely, so that all allocations are made from discontiguous mmapped fragments.
So, a library can have its own memory manager, e.g. a malloc compiled with brk disabled, or a non-malloc memory manager. Such a manager should use function names other than malloc and free (for example, malloc1 and free1), or should not export these names to the dynamic linker.
Also, what about the global data in those shared libraries? Where does it lie? I know for the application it lies in the bss and data segments, but I don't know where it is for those shared object files.
You should think of both the program and the .so simply as ELF files. Every ELF file has "program headers" (readelf -l elf_file). How data is loaded from the ELF into memory depends on the program header's type. If the type is LOAD, the corresponding part of the file will be privately mmapped (sic!) into memory. Usually there are two LOAD segments: the first for code, with R+X (read+execute) flags, and the second for data, with R+W (read+write) flags. Both the .bss and .data (global data) sections are placed in the LOAD segment that has the write flag enabled.
Both executables and shared libraries have LOAD segments. Some segments have memory_size > file_size, which means the segment will be expanded in memory: the first part is filled with data from the ELF file, and the remaining (memory_size - file_size) part is filled with zeros (for the *bss sections), using mmap of /dev/zero and memset(0).
When the kernel or the dynamic linker loads an ELF file into memory, it does not think about sharing. For example, say you start the same program twice. The first process loads the read-only part of the ELF file with mmap; the second process does the same mmap (if ASLR is active, the second mmap will be at a different virtual address). It is the task of the page cache (VFS subsystem) to keep a single copy of the data in physical memory (with copy-on-write, aka COW); mmap just sets up mappings from the virtual address in each process to the single physical location. If any process changes a memory page, it is copied on write to a unique private physical page.
The loading code is in glibc/elf/dl-load.c (_dl_map_object_from_fd) for ld.so, and in linux-kernel/fs/binfmt_elf.c for the kernel's ELF loader (elf_map, load_elf_binary). Search for PT_LOAD.
So, global data and bss data are always privately mmapped in each process, and they are protected with COW.
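The COW behavior is easy to observe with a private file mapping, the same mechanism used for LOAD segments (a small demonstration; any readable file works):

    /* Demonstration: MAP_PRIVATE gives copy-on-write pages. Writes land
     * in a private copy; the file on disk is never modified. */
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);  /* any readable file */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;
        p[0] = 'X';     /* triggers COW: the kernel copies the page */
        munmap(p, 4096);
        close(fd);
        return 0;
    }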
The heap and stack are allocated at run time: the heap with brk+mmap, and the main thread's stack automatically by the OS kernel in a brk-like process. Additional threads' stacks are allocated with mmap in pthread_create.
Symbol tables are shared across an entire process in Linux. malloc() is the same for every part of the process. So yes, if all parts of a process access the heap via malloc() et al., they will share the same heap.
