So I have just started customizing the FreeBSD kernel, but unfortunately the resources available for FreeBSD development are scarce.
I'm writing a system call which should (optionally) read a file, read blocks of physical memory according to its input, and write the results into another file (generally "filename.results").
My problems are:
Standard C libraries: they seem to be unavailable for kernel module programming, so how should I replace functions such as write and read (and strlen and some others in string.h)?
The malloc function: it seems to accept 3 arguments instead of 1, and I have no idea how to fill in the 2nd argument even after reading the man page (tried FOO but it returns a symlink error).
I would also be interested in any other topics you think are useful for this routine.
In the case of malloc, run "man 9 malloc". The "9" here means the manual section describing kernel functions; the userland malloc is described in section 3.
Well, I've said that I got the answer, so for future readers I'm just leaving it here.
MALLOC: you need to define your own malloc type (or use an existing one) and pass it as the second argument so the kernel can track the allocation; this is a FreeBSD kernel convention (see malloc(9)) used for memory statistics and sanity-check purposes.
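A minimal sketch of what that looks like in practice (the malloc type M_MYSC and its use here are made up for illustration; see malloc(9)):

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/malloc.h>

/* Hypothetical malloc type; the kernel uses it for accounting/statistics */
MALLOC_DEFINE(M_MYSC, "mysyscall", "buffers for my example syscall");

static int
example_alloc(size_t len)
{
    char *buf = malloc(len, M_MYSC, M_WAITOK | M_ZERO); /* 2nd arg is the type */
    /* ... use buf ... */
    free(buf, M_MYSC);
    return (0);
}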
As for the other things: since the standard C libraries are not available in kernel mode, the kernel variants of their functions are mostly found in libkern (look at /sys/libkern), and they become available once you include the right headers (uprintf, strlen and so on). If a function is not there, you have to pull in the subsystem that provides it by including its headers (say, for file interaction you need the I/O facilities located under /sys/(dir)); since you ARE in kernel mode this doesn't create a problem. (Also note that those functions are well implemented, so you won't likely face a kernel crash.)
Obviously, you have to copy the buffer from user memory into kernel memory (copyin) in order to modify it, and copy it back (copyout) when you are done.
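A hedged sketch of that copy dance, assuming the syscall receives a user pointer ubuf and a length len (names invented here):

#include <sys/param.h>
#include <sys/systm.h>      /* copyin(), copyout() */
#include <sys/malloc.h>     /* malloc(9), M_TEMP */

static int
example_copy(void *ubuf, size_t len)
{
    char *kbuf = malloc(len, M_TEMP, M_WAITOK);
    int error;

    error = copyin(ubuf, kbuf, len);          /* user -> kernel */
    if (error == 0) {
        /* ... modify kbuf in kernel space ... */
        error = copyout(kbuf, ubuf, len);     /* kernel -> user */
    }
    free(kbuf, M_TEMP);
    return (error);
}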
One last thing: to have your system call built via the sysproto auto-build you need to include it as well (and add your syscall to the list), and don't forget to add your file to the kernel source configuration file (located under /sys/(dir) again).
I was surprised to discover that the man pages have entries for two conflicting variants of readdir.
In READDIR(2), it specifically states that you do not want to use it:
This is not the function you are interested in. Look at readdir(3) for the POSIX conforming C library interface. This page documents the bare kernel system call interface, which is superseded by getdents(2).
I understand a function may become deprecated when another function comes along and does its job better, but I am not familiar with other cases of a userspace function coming in and replacing a kernel function of the same name. Is there a known reason it was chosen to go this route rather than coming up with a new function name (as the man page mentions getdents did when superseding readdir)?
The programming interface, POSIX, is stable. You don't just go replacing functions in it unnecessarily because you want to implement the backend more efficiently. The Linux syscall readdir never implemented the readdir function, because it has the wrong signature; it was an old, inefficient backend for implementing the readdir function. When a better backend came along, it became obsolete.
You have it completely backwards: it's the library function readdir(3) which predates Linux and its readdir(2) system call, and not the reverse.
Naming the syscall that way was certainly a poor decision, and probably has a story behind it, but it's pretty much irrelevant now, as nobody is using it.
On Unix, directories used to be simple files formatted in a special way, and the system call interface through which they were read was just read(2) [1]. Later systems introduced system calls like getdirentries (4.4BSD) and getdents (SVR3), but they weren't willing or able to standardize on an interface, so we're still stuck with the high-level and broken [2] readdir(3) library function as the only standard interface for reading a directory.
[1] On some systems like BSD you can still cat a directory, at least when using the default filesystem (FFS).
[2] It's broken because it's not signal safe, and it returns NULL for both error and EOF, which means that the only way it can be safely used is by first setting errno to 0, and checking both its return value and errno afterwards. Yuck.
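For reference, the safe usage pattern described in that footnote looks roughly like this (a sketch, with minimal error handling):

#include <dirent.h>
#include <errno.h>
#include <stdio.h>

int main(void) {
    DIR *d = opendir(".");
    if (d == NULL)
        return 1;

    struct dirent *ent;
    errno = 0;                        /* so NULL with errno == 0 means EOF */
    while ((ent = readdir(d)) != NULL) {
        puts(ent->d_name);
        errno = 0;
    }
    if (errno != 0)                   /* NULL with errno set: a real error */
        perror("readdir");
    closedir(d);
    return 0;
}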
I have a couple of assumptions; most likely some of them will be incorrect. Please correct me where they are wrong.
We could categorize the functions in a program written in C as follows:
Functions that are sent to dynamically loaded libraries:
These are sent to the library, which translates them into multiple standard C functions.
The library passes them on to libc where they are translated into multiple system calls.
Libc passes those on to the kernel where they are executed and the returns are sent back to libc.
Libc will collect the returns, group them by C function and use them to create 1 return for each C function. These returns will be sent back to the dynamically loaded library.
This library will collect all returns and use them to create 1 return that is sent back to the original program.
Functions that are either defined in the code or part of statically compiled libraries: Everything is the same as the category above but:
The program already does the translation: into standard C functions where they are known, or into functions calling dynamically loaded libraries otherwise.
The standard C functions are sent to libc, the others to the dynamically loaded libraries (where they are handled as above).
The creation of 1 final return, based on the returns from both types of functions, is done by the program.
Functions that are standard C functions: They will just be sent to libc which will communicate with the kernel in the same way as above and 1 return will be sent to the program
Functions that are system calls: They are NOT sent directly to the kernel but have to pass through libc, although it doesn't do any extra work.
Security checks (permissions, writing to unallocated mem, ...) are always done by the kernel, although libc and libraries above might also check it first.
All POSIX-compliant systems follow these rules
It might not be the same on Linux and on some other POSIX system (like FreeBSD).
On Linux, the ABI defines how a system call is done. Read about Linux kernel interfaces. The system calls are listed in syscalls(2) (but see also /usr/include/asm*/unistd.h ...). Read also vdso(7). The assembler HowTo explains more details, but for 32 bits i686 only.
Most Linux libc implementations are free software, so you can study their source code. IMHO the source code of musl-libc is very readable.
To simplify a tiny bit, most system calls (e.g. write(2)) are small C functions in the libc which:
call the kernel using the SYSENTER machine instruction (and take care of passing the system call number and its arguments with the kernel convention, which is not the usual C ABI). What the kernel considers a system call is only that machine instruction (and the conventions around it).
handle the failure case, by setting errno(3) and returning -1.
(IIRC, on failure, the carry -or perhaps the overflow- flag bit is set when the kernel returns from SYSENTER; but I could be wrong in the details)
handle the success case, by returning a result.
You could invoke system calls without libc, with some assembler code. This is unusual, but has been done (e.g. in BusyBox or in Bones).
So the libc code for write is doing some tiny extra work (passing arguments, handling failure & errno and success cases).
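To make that concrete, here is a sketch that bypasses the write() wrapper by going through the generic syscall(2) helper instead (Linux assumed; SYS_write comes from <sys/syscall.h>):

#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

int main(void) {
    const char msg[] = "hello from a raw write syscall\n";

    /* Same effect as write(1, ...), but without the write() wrapper;
       syscall() still handles the errno/-1 convention for us. */
    long ret = syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    if (ret < 0)
        perror("SYS_write");
    return ret < 0 ? 1 : 0;
}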
A few system calls (probably getpid & clock_gettime) avoid the overhead of the SYSENTER machine instruction (and the user-mode -> kernel-mode switch) thanks to the vDSO.
No, you can't categorize things like that. When you program in C (and this makes little difference in almost all other languages), there are only functions, and whatever their real status is, you call them all in exactly the same way. This is defined by the ABI (how to pass parameters, get returned values, etc.) and enforced by the compiler/linker. Of course some functions are just stubs, for example stubs to shared library functions (stubs may be needed to load the library, dynamically link to the real function, etc.) or system calls (this is more technical and differs from kernel to kernel). But from the viewpoint of your program everything is the same (this is why it is hard to understand the difference between fread and read at the beginning: you call them the same way, they do almost the same job, so what's the difference?).
POSIX doesn't say a single word about kernels... It just lists the C (and formerly Ada) API of a set of functions with minimal semantics (plus some commands, tools, etc.). Implementation of these is totally free.
In the C language, when printing something on the screen, we usually use printf, puts and so on, which are all defined in <stdio.h> or other headers.
Is there any way to print something on screen without using such functions? That is to say, how is printf realised?
Eventually the C function printf will result in a sys_write system call, directly or by going through write (see man 2 write). The actual implementation depends on the compiler and the standard libraries.
Printing to the screen requires access to the framebuffer (hardware), and userspace programs are not allowed direct access to it. So what they do is make a system call, and the kernel performs the required function for them: printf -> write system call -> the kernel writes the data into the framebuffer, and then control is given back to the user program.
Even if you don't want to use printf or puts (they are implemented in a hosted libc), you still have to use the write system call to tell the kernel which device you want to write the buffer to.
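As a minimal illustration (assuming a POSIX system where file descriptor 1 is connected to the terminal):

#include <unistd.h>

int main(void) {
    const char msg[] = "printed without printf or puts\n";

    /* write(2) hands the buffer straight to the kernel; fd 1 is stdout */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;
}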
The standard headers are not necessarily a library containing functions written in C code.
They are functions with a C interface; however, it is very probable that they contain explicit machine code, adapted, in each case, to the target system.
The standard headers provide, in this way, ways of doing special processing that would not be possible to achieve in strict C code.
In the specific case of printf(), the situation is even clearer, because if no header is #include-d, then there is no mechanism, using C syntax alone, that performs an input/output operation.
The ncurses library can help you, but if you want to use a low-level function, use write(); and if you want to do kernel programming, you have to use printk().
I am quite new to the FILE family of functions that the standard C library provides.
I recently stumbled across fopen() and the similar functions after researching how stdout, stdin and stderr work alongside functions like printf().
I was wondering what is needed to use fopen() on an embedded system (which doesn't necessarily have operating system support). After reading more about it, it seems like a cool thing to do on more powerful embedded systems to hook into, say, a UART/SPI interface, so that calling printf() would print data out of the UART. Similarly, you could read data from a UART buffer by calling scanf().
This would also increase portability! (Code written for, say, Linux would be easier to port if printf() were supported.) You could also print debug data to a file if it was running in a production environment, and read from it later.
Can you just use fopen() on a bare-bones embedded system? If so, who/where/when is the "FILE" then created (as far as I know, fopen() does not malloc() space for the file, nor do you specify how much)? Or do you need an operating system with FAT file support? If so, would something like http://ultra-embedded.com/?fat_filelib work? Would using FreeRTOS help at all?
Check the documentation for your toolchain's C library - it should have something to say about re-targeting the library.
For example, if you are using Newlib you must re-implement some or all of the syscall stubs to suit your target. The low-level open() syscall in this case will allow fopen() to work as necessary. At its simplest, you might implement open() to support higher-level stdio access to serial ports, but if you are expecting standard file-system access, then you will still need an underlying file system to map it to.
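A rough sketch of such a stub, routing stdout/stderr to a UART (uart_putc() is a hypothetical driver function; the _write name follows Newlib's stub convention):

#include <errno.h>

extern void uart_putc(char c);      /* hypothetical UART driver call */

/* Newlib-style stub: printf()/fwrite() on stdout (fd 1) and stderr (fd 2)
   end up here once the library is retargeted. */
int _write(int fd, char *buf, int len)
{
    if (fd != 1 && fd != 2) {
        errno = EBADF;
        return -1;
    }
    for (int i = 0; i < len; i++)
        uart_putc(buf[i]);
    return len;
}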
Another example of re-targeting the Keil/ARM standard library can be found here.
Yes, it's often possible to use fopen() and similar routines in code for embedded systems. The way it often works is that the vendor supplies a C compiler and associated libraries targeted for their system, which implement some supported subset of the language in a way that's appropriate for that system (e.g. an implementation of printf() that outputs via a UART, or fopen() that uses RAM to simulate some sort of filesystem).
On the Keil compiler, the stdio library is designed to allow the user to define the __FILE structure in any desired fashion. A function like fprintf will perform a sequence of calls to fputc, which will receive a copy of the pointer passed to fprintf. One may define something like fopen to "create" a __FILE and populate its members via any desired means (if there will never be more than one file open at a time, one could simply fill in the fields of a static instance and return that). The variables __stdin, __stdout, and __stderr may likewise be defined as desired (stdin is defined to point to __stdin, and likewise for stdout and stderr).
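A minimal sketch of that style of retargeting, assuming a toolchain that lets you define struct __FILE yourself (uart_putc() is again a hypothetical driver call):

#include <stdio.h>

struct __FILE { int handle; };      /* user-defined, as minimal as you like */
FILE __stdout;                      /* backs the stdout macro */

extern void uart_putc(char c);      /* hypothetical UART driver call */

/* printf() and friends funnel through fputc on this toolchain */
int fputc(int ch, FILE *f)
{
    (void)f;
    uart_putc((char)ch);
    return ch;
}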
"Can you just use fopen() on a bare-bones embedded system?"
It depends: on the configuration of your embedded system, the types of memory interfaced, which memory you want to implement the file system on, and the file-system library's code size (ROM & RAM requirements).
FILE manipulation functions can be used independently of any OS, but a proper file system must be used, and FAT is not the only file system (JFFS2, YAFFS, or some proprietary file system).
The file system is generally (but not always) implemented on flash memory (NAND flash, NOR flash). A USB drive is also flash (NAND flash). NAND and NOR flash parts may have a parallel, I2C, or SPI interface.
I'm looking for a way to load generated object code directly from memory.
I understand that if I write it to a file, I can call dlopen to dynamically load its symbols and link them. However, this seems a bit of a roundabout way, considering that it starts off in memory, is written to disk, and then is reloaded in memory by dlopen. I'm wondering if there is some way to dynamically link object code that exists in memory. From what I can tell there might be a few different ways to do this:
Trick dlopen into thinking that your memory location is a file, even though it never leaves memory.
Find some other system call which does what I'm looking for (I don't think this exists).
Find some dynamic linking library which can link code directly in memory. Obviously, this one is a bit hard to google for, as "dynamic linking library" turns up information on how to dynamically link libraries, not on libraries which perform the task of dynamically linking.
Abstract some API from a linker and create a new library out of its codebase. (Obviously this is the least desirable option for me.)
So which of these are possible? Feasible? Could you point me to any of the things I hypothesized existed? Is there another way I haven't even thought of?
I needed a solution to this because I have a scriptable system that has no filesystem (using blobs from a database) and needs to load binary plugins to support some scripts. This is the solution I came up with which works on FreeBSD but may not be portable.
#include <sys/mman.h>   /* shm_open, SHM_ANON, mmap */
#include <fcntl.h>      /* O_RDWR */
#include <dlfcn.h>      /* fdlopen (FreeBSD-specific) */
#include <string.h>     /* memcpy */
#include <unistd.h>     /* ftruncate, close */

void *dlblob(const void *blob, size_t len) {
    /* Create an anonymous shared-memory file descriptor */
    int fd = shm_open(SHM_ANON, O_RDWR, 0);
    ftruncate(fd, len);

    /* Map the descriptor and copy the object code into it */
    void *mem = mmap(NULL, len, PROT_WRITE, MAP_SHARED, fd, 0);
    memcpy(mem, blob, len);
    munmap(mem, len);

    /* Open the dynamic library from the SHM file descriptor */
    void *so = fdlopen(fd, RTLD_LAZY);
    close(fd);
    return so;
}
Obviously the code lacks any kind of error checking etc, but this is the core functionality.
ETA: My initial assumption that fdlopen is POSIX was wrong; it appears to be a FreeBSD-ism.
I don't see why you'd be considering dlopen, since that will require a lot more nonportable code to generate the right object format on disk (e.g. ELF) for loading. If you already know how to generate machine code for your architecture, just mmap memory with PROT_READ|PROT_WRITE|PROT_EXEC and put your code there, then assign the address to a function pointer and call it. Very simple.
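A hedged sketch of that approach; it assumes code/len already hold valid machine code for the current CPU, and that the platform permits writable+executable mappings (systems enforcing W^X will refuse this):

#include <sys/mman.h>
#include <string.h>

typedef int (*entry_fn)(void);

/* Copy ready-made machine code into an executable mapping and
   return it as a callable function pointer. */
entry_fn load_code(const void *code, size_t len)
{
    void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    if (mem == MAP_FAILED)
        return NULL;
    memcpy(mem, code, len);
    return (entry_fn)mem;           /* the caller can now invoke it */
}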
There is no standard way to do it other than writing out the file and then loading it again with dlopen().
You may find some alternative method on your current specific platform. It will be up to you to decide whether that is better than using the 'standard and (relatively) portable' approach.
Since generating the object code in the first place is rather platform specific, additional platform-specific techniques may not matter to you. But it is a judgement call - and in any case depends on there being a non-standard technique, which is relatively improbable.
We implemented a way to do this at Google. Unfortunately, upstream glibc failed to see the need, so it was never accepted; the feature request with patches has stalled. It's known as dlopen_with_offset.
The dlopen_with_offset glibc code is available in the glibc google/grte* branches. But nobody should enjoy modifying their own glibc.
You don't need to load the code generated in memory, since it is already in memory!
However, you can, in a non-portable way, generate machine code in memory (provided it is in a memory segment mmap-ed with the PROT_EXEC flag).
(in that case, no "linking" or relocation step is required, since you generate machine code with definitive absolute or relative addresses, in particular to call external functions)
Some libraries exist which do that: on GNU/Linux under x86 or x86-64, I know of GNU Lightning (which quickly generates machine code that runs slowly), DotGNU LibJIT (which generates medium-quality code), and LLVM and GCCJIT (which can generate quite optimized code in memory, but take time to emit it). LuaJIT has a similar facility too. Since 2015, GCC 5 has the libgccjit library.
And of course, you can still generate C code in a file, fork a compiler to compile it into a shared object, and dlopen that shared object file. I'm doing that in GCC MELT, a domain-specific language to extend GCC. It works quite well in practice.
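A minimal sketch of that route; the scratch paths and compiler flags here are illustrative assumptions, not anything MELT-specific:

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* Write generated C source to a scratch file, compile it into a shared
   object, and load it. */
void *compile_and_load(const char *c_source)
{
    FILE *f = fopen("/tmp/gen.c", "w");
    if (f == NULL)
        return NULL;
    fputs(c_source, f);
    fclose(f);

    if (system("cc -shared -fPIC -O1 /tmp/gen.c -o /tmp/gen.so") != 0)
        return NULL;
    return dlopen("/tmp/gen.so", RTLD_NOW);
}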
addenda
If the performance of writing the generated C file is a concern (it should not be, since compiling a C file is much slower than writing it), consider using a tmpfs file system for that (perhaps /tmp/, which is often a tmpfs filesystem on Linux).