I am trying to include math.h in my Linux kernel module. If I use,
#include '/usr/include/math.h'
It gives me these errors:
error: features.h: No such file or directory
error: bits/huge_val.h: No such file or directory
error: bits/mathdef.h: No such file or directory
error: bits/mathcalls.h: No such file or directory
Why is this?
You cannot use the C library in a kernel module; this is even more true for the math library part.
You can't include a userspace C header in kernel space. Also, are you sure you want to be doing this? This thread may help: http://kerneltrap.org/node/16570. You can do math functions inside the kernel; just search around on http://lxr.linux.no/ for the function you need.
Standard libraries are not available in the kernel. This includes libc, libm, etc. Although some of the functions in those libraries are implemented in kernel space, some are not. Without knowing what you're trying to call, it's impossible to say for sure whether or not you should be doing what you're trying to do in kernel space.
I should further note that the kernel does NOT have access to the FPU. This is to save time when switching tasks (since saving the FPU registers would add unnecessary overhead when performing context switches). You can get access to the FPU from kernel space if you really want it, but you need to be really careful not to trash the user space's FPU registers when doing so.
Edit: This summarizes the caveat about the FPU much better than I did.
Floating point operations are not supported in the kernel. This is because when switching from kernel context to user context, registers must be saved. If the kernel made use of floating point, the floating point registers would have to be saved as well, which would hurt performance on every context switch. Because floating point is very rarely needed, especially in the kernel, it is not supported.
If you really have to:
maybe you could compile your own kernel with floating point support
you could block context switch within your floating point operations
the best would be to use fixed-point arithmetic (see the sketch below).
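To illustrate the fixed-point route, here is a minimal sketch in plain C (the Q16.16 format and the fix16_* names are my own choices for illustration; kernel code would use the integer types from <linux/types.h> rather than <stdint.h>, and there are no overflow or divide-by-zero checks here):

/* Minimal 16.16 fixed-point sketch: 16 integer bits, 16 fractional bits.
 * Everything is ordinary integer arithmetic, so nothing here touches the
 * FPU. Names and format are illustrative only.
 */
#include <stdint.h>

typedef int32_t fix16;                    /* Q16.16 value */

#define FIX16_ONE (1 << 16)

static inline fix16 fix16_from_int(int x) { return (fix16)x << 16; }
static inline int   fix16_to_int(fix16 x) { return x >> 16; }

static inline fix16 fix16_mul(fix16 a, fix16 b)
{
    /* Widen to 64 bits so the intermediate product does not overflow. */
    return (fix16)(((int64_t)a * b) >> 16);
}

static inline fix16 fix16_div(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a << 16) / b);
}

With helpers like these, formulas that would otherwise need floats stay in plain integer arithmetic, which the kernel is perfectly happy with.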
AFAIK kernel space is separated from user space, and the source code should be separated too. /usr/include is for general (user-space) programming.
This suggests that doing floating point math in the kernel is not as simple as in user-space code. Another instance suggesting that this is hard.
Still looking for a more definitive answer.
Well, you cannot; you can rewrite the functions you need in your module. It's dirty, but it should work...
Thanks a lot for your comments
To use math functions
Is it possible to make a plain C application and pass variables to it from the kernel source file? The C application would then compute the values and send the information back.
Kernel source file (kernel space) ---> C application (user space)
Kernel source file (kernel space) <--- C application (user space)
That way we avoid including the header file in the kernel source code: in case of any event, the kernel passes the values to a C application (user space).
Details:
I am trying to modify my HID joystick events (absolute x, y) so it moves only to the improved location, which will be generated by my application with some math functions like pow, tan, etc.
So I used hid-input.c to get the raw events and modify them; they are then passed to the input subsystem through the HID kernel module.
Looking for your comments
Regards.
You cannot (often, without lots of kernel know-how to lock and preserve these registers while not impacting other critical sections) use floating point registers in the kernel, and besides it is of course inappropriate to do "processing" in the kernel. Many others have mentioned this. Performance will be terrible. Thus, math.h is not provided for kernel modules. We accept this and move on...
However, as I am also a victim of crazy requirements and completely insane designs forced on us by others, this is a legitimate question. After reducing the usage of the math.h API to minimize the performance impact, you can use floating point emulation (soft-float) via correct compiler settings to implement your required functions without using floating point registers. Kernel code should already compile with these soft-float settings.
In order to implement math.h functionality, you can look at glibc or uClibc and perhaps others. Both of these libraries have generic "C" implementations of libm which implement math.h without the use of special intrinsics or platform specific types and should therefore compile just fine in the kernel.
uClibc: The above link takes you directly to the libm section of uClibc.
glibc: After "git"-ing glibc, you'll find what you're looking for in glibc/sysdeps/ieee754/flt-32.
glibc may be more difficult to understand because it is more sophisticated and has more inter-dependencies within itself, but uClibc only provides (at the moment) C89 math.h. If you want single precision (read: faster) or complex math functionality as in C99+, you'll have to look at glibc.
Maybe try using double quotes (") instead of single quotes?
In experts' view, it's NOT a good approach to communicate data between kernel space and user space. Either work fully in kernel space OR only in user space.
But one solution can be to use the read() and write() commands on a kernel module to send information between user space and kernel space, as sketched below.
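For what it's worth, here is a rough sketch of that idea using a misc character device (the device name "mathhelper", the single-int protocol, and the function names are all made up for illustration; error handling is minimal):

/* Sketch of a misc char device ("/dev/mathhelper") that a user-space helper
 * can read a value from and write a result back to. Names and protocol are
 * illustrative only.
 */
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static int pending_value;     /* value the kernel wants computed */
static int computed_result;   /* result written back by user space */

static ssize_t helper_read(struct file *f, char __user *buf,
                           size_t len, loff_t *off)
{
    if (len < sizeof(pending_value))
        return -EINVAL;
    if (copy_to_user(buf, &pending_value, sizeof(pending_value)))
        return -EFAULT;
    return sizeof(pending_value);
}

static ssize_t helper_write(struct file *f, const char __user *buf,
                            size_t len, loff_t *off)
{
    if (len < sizeof(computed_result))
        return -EINVAL;
    if (copy_from_user(&computed_result, buf, sizeof(computed_result)))
        return -EFAULT;
    return sizeof(computed_result);
}

static const struct file_operations helper_fops = {
    .owner = THIS_MODULE,
    .read  = helper_read,
    .write = helper_write,
};

static struct miscdevice helper_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "mathhelper",
    .fops  = &helper_fops,
};

static int __init helper_init(void) { return misc_register(&helper_dev); }
static void __exit helper_exit(void) { misc_deregister(&helper_dev); }

module_init(helper_init);
module_exit(helper_exit);
MODULE_LICENSE("GPL");

A user-space helper would open /dev/mathhelper, read() the value, do the floating-point math with the full math.h at its disposal, and write() the result back.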
Related
Imagine a situation where you can't or don't want to use any of the libraries provided by the compiler as "standard", nor any external library. You can't use even the compiler extensions (such as gcc extensions).
What is the remaining part you get if you strip C language of all the things a lot of people use as a matter of course?
In such a way, probably a list of every callable function supported by any big C compiler (not only ANSI C) out of the box would be satisfying as an answer, as it'd at least approximately show the use-case of the language.
First I thought about sizeof() and printf() (those were already clarified in the comments - operator + stdio), so... what remains? In-line assembly seems like an extension too, so that pretty much strips even the option to use assembly with C if I'm right.
It's probably easier to explain in terms of code. Imagine a program compiled with only e.g. gcc main.c (output flag permitted) that has no #include, nor extern.
int main() {
    // replace_me
    return 0;
}
What can I call to actually do something other than "boring" type math and casting from type to type?
Note that switch, goto, if, loops and other constructs that do nothing and only allow repeating a piece of code aren't the thing I'm looking for (if it isn't obvious).
(Hopefully the edit clarified wtf I'm actually asking, but Matteo's answer pretty much did it.)
If you remove all libraries essentially you have something similar to a freestanding implementation of C (which still has to provide some libraries - say, string.h, but that's nothing you couldn't easily implement yourself in portable C), and that's what normally you start with when programming microcontrollers and other computers that don't have a ready-made operating system - and what operating system writers in general use when they compile their operating systems.
There you typically have two ways of doing stuff besides "raw" computation:
assembly blocks (where you can do literally anything the underlying machine can do);
memory mapped IO (you set a volatile pointer to some hardware-dependent location and read/write from it; that affects hardware stuff), as in the sketch below.
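For example, on a bare-metal target, poking a peripheral register looks roughly like this (the address and the bit position are invented for the sketch; the real ones come from the chip's reference manual):

#include <stdint.h>

/* Hypothetical peripheral register address - it comes from a datasheet,
 * not from the C language itself. */
#define LED_CTRL_REG  ((volatile uint32_t *)0x40021000u)

void led_on(void)
{
    *LED_CTRL_REG |= (1u << 3);   /* set a made-up "LED enable" bit */
}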
That's really all you need to build anything - and after all, it all boils down to that stuff anyway: the C library of a regular hosted implementation is normally written in C itself, with some assembly used either for speed or to communicate with the operating system (typically the syscalls are invoked through some kind of interrupt).
Again, it's nothing you couldn't implement yourself. But the point of having a standard library is both to avoid continuously reinventing the wheel and to have a set of portable functions that spare you from having to rewrite everything knowing the details of each target platform.
And mainstream operating systems, in turn, are generally written in a mix of C and assembly as well.
C has no "built-in" functions as such. A compiler implementation may include "intrinsic" functions that are implemented directly by the compiler without provision of an external library, although a prototype declaration is still required for intrinsics, so you would still normally include a header file for such declarations.
C is a systems-level language with a minimal run-time and start-up requirement. Because it can directly access memory and memory mapped I/O there is very little that it cannot do (and what it cannot do is what you use assembly, in-line assembly or intrinsics for). For example, much of the library code you are wondering what you can do without is written in C. When running in an OS environment however (using C as an application-level rather than system-level language), you cannot practically use C in that manner - the OS has control over such things as I/O and memory-management and in modern systems will normally prevent unmediated access to such resources. Of course that OS itself is likely to be largely written in C (and/or C++).
In a standalone or bare-metal environment with no OS, C is often used very early in the bootstrap process, initialising hardware and establishing an application execution environment. In fact on ARM Cortex-M processors it is possible to boot directly into C code from reset, since the hardware loads an initial stack-pointer and start address from the vector table on start-up; this is enough to run C code that does not rely on library or static data initialisation - such initialisation can however be written in C before calling main().
Note that sizeof is not a function, it is an operator.
I don't think you really understand the situation.
You don't need a header to call a function in C. You can call with unchecked parameters - a bad idea and an obsolete feature, but still supported. And if a compiler links a library by default instead of only when you explicitly tell it to, that's only a little switch within the compiler to "link libc". Notoriously, Unix compilers need to be told to link the math library; it wasn't linked by default because some very early programs didn't use floating point.
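For instance, here is a sketch of calling a libm function with no header at all - you supply the declaration yourself and tell the linker about the library (build with gcc main.c -lm):

/* No #include anywhere: we provide the declaration ourselves and link
 * against libm explicitly with -lm. */
double sqrt(double x);   /* this declaration normally comes from math.h */

int main(void)
{
    return (int)sqrt(16.0);   /* exit status 4 */
}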
To be fair, some standard library functions like memcpy tend to be special-cased these days as they lend themselves to inlining and optimisation.
The standard library is documented and is usually available, though in effect deprecated by Microsoft for security reasons. You can write pretty much any function quite easily with only stdlib functions; what you can't do is fancy IO.
For example, I have a function func():
int func (int a, int b) {return a + b;}
Now I want to write it to a file, so that I can use the system call mmap to load it with PROT_EXEC and call it from another program. What should I do?
If you know the signature you need, and you have a static library or know the location of a shared library at compile time, you probably just want to include the header and link against that library. If you want to invoke a function dynamically, you probably want dlopen / dlsym (UNIX) or LoadLibrary / GetProcAddress (Windows) for loading the library dynamically and retrieving the address of the function by name.
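As a sketch of the dynamic route on UNIX (assuming func() was built into a shared object, say with gcc -shared -fPIC func.c -o libfunc.so; the library name is made up here, and the loader is linked with -ldl):

/* Sketch: load libfunc.so at runtime and look up func() by name.
 * "libfunc.so" is a hypothetical library built from func.c. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("./libfunc.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    int (*func)(int, int) = (int (*)(int, int))dlsym(handle, "func");
    if (!func) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("func(2, 3) = %d\n", func(2, 3));
    dlclose(handle);
    return 0;
}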
Note that the cases where you actually need to load a library dynamically (at least explicitly) are pretty rare. This is often used for modular architectures (e.g. "plugins" or "extensions") where individual pieces of the application are distributed separately (which can be achieved more securely using IPC rather than dynamic loading... see my note below). Or for cases where your application is not allowed to include dependencies statically and needs to conditionally supply behavior based on the existence of certain library dependencies in the environment in which it happens to be executing. In most cases, though, you'll simply want to include a header that declares the symbols you need and compile for each target platform (possibly using #if...#else macros if there are symbols that vary across OSes or OS versions).
From a stability, security, and code complexity standpoint, I personally recommend that you avoid dynamic library loading. For core system functionality, it's reasonable to link against a dynamic library, but you'll want to do it in a way where the burden of dynamic loading is entirely on your toolchain (i.e. you shouldn't need to call dlopen or LoadLibrary explicitly). For other functionality, it is almost always better to statically link (assuming you distribute updates when there are security fixes for your dependencies), since this will avoid you getting broken by incompatible version updates and also prevent your users from experiencing dependency hell (you require version A but some other application requires version B). Modular architectures are often better (and more securely) achieved through inter-process communication (IPC): dynamically loaded libraries live in the process of the program that loads them (thereby giving them access to the entire process's virtual memory space), whereas with inter-process communication each component would be a separate process, and individual components would only have access to information that was given to them explicitly by the calling process. That makes it more difficult for a malicious component to steal data from the caller or other components, or to produce instability.
The sanest thing if you want this to actually be used in the real world is probably to just compile the source as part of your program on each platform, like a regular function.
Next best is probably a separate process that you talk to rather than merge with.
Semi-sane (but still not a great choice, see our discussion in the other answer) would be making the shared library, like Michael Aaron Safyan said.
But if you want to know how it works just because - say, you want to write your own dynamic linker, or are doing some kind of runtime code generation like a JIT compiler, or if you just wanna know - you can make a raw code file.
To use it, what we'd have to do is similar to what the linker does - load the code at a particular address that it is made to work on and run it. There is position independent code that can run at any address, too.
Let's first get our function compiled and linked, then output into a raw image for a certain address. Assume the function is func in the file func.c and we're using gcc on Linux. (A Windows compiler would have similar options - gcc on Windows is exactly the same, I believe, but something like Digital Mars's C compiler does it differently with the linker command being /BINARY for instance)
Anyway, here's what I ran:
gcc -c func.c # makes func.o
ld func.o --oformat=binary -e func -o func.binary
This generates a file called func.binary. You can disassemble it most easily with ndisasm -b 64 func.binary (or -b 32 if you compiled the C in 32 bit mode) to confirm it looks right - I see an add instruction there, so looks good to me.
If you loaded that and mmaped then called it... it should work.
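A loader for that raw image might look roughly like this (a sketch under the assumptions above: func.binary contains just that one self-contained function, so there are no relocations to worry about):

/* Hypothetical loader sketch: maps func.binary and calls it as
 * int func(int, int). Error handling is kept minimal on purpose. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("func.binary", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the raw code readable and executable. */
    void *code = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC,
                      MAP_PRIVATE, fd, 0);
    if (code == MAP_FAILED) { perror("mmap"); return 1; }

    int (*func)(int, int) = (int (*)(int, int))code;
    printf("%d\n", func(2, 3));   /* should print 5 */

    munmap(code, st.st_size);
    close(fd);
    return 0;
}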
Problems will be quick to come up though:
If there's more than one function in that file, they'll all be squished together.
The addresses they try to use to call each other may be totally wrong.
Global variables and other static data will be messed up.
And there's more. The operating system uses more complex file formats for executables and libraries for a reason!
To go to the next step, you could consider writing an ELF or PE loader which reads that metadata off a standard file. Of course, once you get into much of this, you'll be doing exactly what the OS provides with dlopen and LoadLibrary.... so unless the goal is to just learn about the guts, just call those functions and call it done!
I have performance critical code written for multiple CPUs. I detect CPU at run-time and based on that I use appropriate function for the detected CPU. So, now I have to use function pointers and call functions using these function pointers:
void do_something_neon(void);
void do_something_armv6(void);
void (*do_something)(void);
if (cpu == NEON) {
    do_something = do_something_neon;
} else {
    do_something = do_something_armv6;
}
// Use function pointer:
do_something();
...
Not that it matters, but I'll mention that I have optimized functions for different CPUs: armv6 and armv7 with NEON support. The problem is that by using function pointers in many places the code becomes slower, and I'd like to avoid that problem.
Basically, at load time the linker resolves relocs and patches code with function addresses. Is there a way to control that behavior better?
Personally, I'd propose two different ways to avoid function pointers: create two separate .so files (or .dlls) for the CPU-dependent functions, place them in different folders, and based on the detected CPU add one of these folders to the search path (or LD_LIBRARY_PATH). Then load the main code, and the dynamic linker will pick up the required dll from the search path. The other way is to compile two separate copies of the library :)
The drawback of the first method is that it forces me to have at least 3 shared objects (dll's): two for the cpu dependent functions and one for the main code that uses them. I need 3 because I have to be able to do CPU detection before loading code that uses these cpu dependent functions. The good part about the first method is that the app won't need to load multiple copies of the same code for multiple CPUs, it will load only the copy that will be used. The drawback of the second method is quite obvious, no need to talk about it.
I'd like to know if there is a way to do that without using shared objects and manually loading them at runtime. One of the ways would be some hackery that involves patching code at run-time, but it's probably too complicated to get done properly. Is there a better way to control relocations at load time? Maybe place CPU-dependent functions in different sections and then somehow specify which section has priority? I think the Mac's Mach-O format has something like that.
ELF-only (for arm target) solution is enough for me, I don't really care for PE (dll's).
thanks
You may want to lookup the GNU dynamic linker extension STT_GNU_IFUNC. From Drepper's blog when it was added:
Therefore I’ve designed an ELF extension which allows to make the decision about which implementation to use once per process run. It is implemented using a new ELF symbol type (STT_GNU_IFUNC). Whenever a symbol lookup resolves to a symbol with this type the dynamic linker does not immediately return the found value. Instead it is interpreting the value as a function pointer to a function that takes no argument and returns the real function pointer to use. The code called can be under control of the implementer and can choose, based on whatever information the implementer wants to use, which of the two or more implementations to use.
Source: http://udrepper.livejournal.com/20948.html
Nonetheless, as others have said, I think you're mistaken about the performance impact of indirect calls. All code in shared libraries will be called via a (hidden) function pointer in the GOT and a PLT entry that loads/calls that function pointer.
For the best performance you need to minimize the number of indirect calls (through pointers) per second and allow the compiler to optimize your code better (DLLs hamper this because there must be a clear boundary between a DLL and the main executable and there's no optimization across this boundary).
I'd suggest doing these:
moving as much as possible of the main executable's code that frequently calls DLL functions into the DLL. That'll minimize the number of indirect calls per second and allow for better optimization at compile time too.
moving almost all your code into separate CPU-specific DLLs and leaving to main() only the job of loading the proper DLL OR making CPU-specific executables w/o DLLs.
Here's the exact answer that I was looking for.
GCC's __attribute__((ifunc("resolver")))
It requires fairly recent binutils.
There's a good article that describes this extension: Gnu support for CPU dispatching - sort of...
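Applied to the do_something() example from the question, it looks roughly like this (cpu_has_neon() stands in for whatever CPU-detection routine you already have; note that a resolver runs very early, so it should not depend on complex library state):

/* Sketch of GCC's ifunc attribute applied to the do_something() example.
 * cpu_has_neon() is a placeholder for your existing CPU detection code. */
void do_something_neon(void);
void do_something_armv6(void);
int  cpu_has_neon(void);

/* The resolver runs once, when the dynamic linker first binds the symbol,
 * and returns the address of the implementation to use from then on. */
static void (*resolve_do_something(void))(void)
{
    return cpu_has_neon() ? do_something_neon : do_something_armv6;
}

void do_something(void) __attribute__((ifunc("resolve_do_something")));

Each call to do_something() still goes through the usual PLT machinery, but the choice of implementation is made once per process instead of on every call through a hand-rolled function pointer.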
Lazy loading ELF symbols from shared libraries is described in section 1.5.5 of Ulrich Drepper's DSO How To (updated 2011-12-10). For ARM it is described in section 3.1.3 of ELF for ARM.
EDIT: As for the STT_GNU_IFUNC extension mentioned by R.: I forgot that was an extension. GNU Binutils supports it for ARM, apparently since March 2011, according to the changelog.
If you want to call functions without the indirection of the PLT, I suggest function pointers or per-arch shared libraries inside which function calls don't go through PLTs (beware: calling an exported function is through the PLT).
I wouldn't patch the code at runtime. I mean, you can. You can add a build step: after compilation disassemble your binaries, find all offsets of calls to functions that have multi-arch alternatives, build table of patch locations, link that into your code. In main, remap the text segment writeable, patch the offsets according to the table you prepared, map it back to read-only, flush the instruction cache, and proceed. I'm sure it will work. How much performance do you expect to gain by this approach? I think loading different shared libraries at runtime is easier. And function pointers are easier still.
I'm currently learning about operating systems and the use of traps to facilitate system calls within the Linux kernel. I've located the table of the traps in traps.c and the implementation of many of the traps within entry.S.
However, I'm instructed to find an implementation of two system calls in the Linux kernel which utilize traps to implement a system call. Although I can find the definition of the traps themselves, I'm not sure what a "call" to one of these traps within the kernel would look like. Therefore, I'm struggling to find an example of this behavior.
Before anyone asks, yes, this is homework.
As a note, I'm using Github to browse the kernel source, since kernel.org is down:
https://github.com/torvalds/linux/
For the x86 architecture the SYSCALL_VECTOR (0x80) interrupt is used only for 32-bit kernels. You can see the interrupt vector layout in arch/x86/include/asm/irq_vectors.h. The trap_init() function from traps.c is the one that sets the trap handler defined in entry_32.S:
set_system_trap_gate(SYSCALL_VECTOR, &system_call);
For 64-bit kernels, the newer SYSENTER (Intel) or SYSCALL (AMD) instructions are used for performance reasons. The syscall_init() function from arch/x86/kernel/cpu/common.c sets up the "handler" defined in entry_64.S and bearing the same name (system_call).
For the user-space perspective you might want to take a look at this page (a bit outdated for the function/file names).
I'm instructed to find an implementation of two system calls in the Linux kernel which utilize traps to implement a system call
Every system call utilizes a trap (interrupt 0x80 if I recall correctly) so the "kernel" bit will be turned on in PSW, and privileged operations will be available to the processor.
As you mentioned, the system calls are listed in entry.S under sys_call_table:, and they all start with the "sys" prefix.
You can find the system call function headers in include/linux/syscalls.h:
http://lxr.linux.no/#linux+v3.0.4/include/linux/syscalls.h
Use lxr (as the comment above has already mentioned) in general in order to browse the source code.
Anyhow, the functions are implemented using SYSCALL_DEFINE1 or other versions of that macro; see
http://lxr.linux.no/#linux+v3.0.4/kernel/sys.c
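To show the shape of the pattern, here is an illustrative sketch of a SYSCALL_DEFINE1 definition (a made-up syscall, not a verbatim excerpt from the kernel sources):

#include <linux/syscalls.h>
#include <linux/errno.h>

/* Illustrative only: "mycall" is a made-up syscall name.
 * SYSCALL_DEFINE1 expands to (roughly) long sys_mycall(int arg). */
SYSCALL_DEFINE1(mycall, int, arg)
{
    if (arg < 0)
        return -EINVAL;
    /* ... do the real work in kernel space ... */
    return 0;
}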
If you're looking for an actual system call, not an implementation of a system call, maybe you want to check some C libraries. Why would a kernel include a system call? (I'm not talking about a system call implementation; I'm talking about an actual chdir call, for example. There is a chdir system call, which is a request to change the directory, and there is a chdir system call implementation, which actually changes it and must be somewhere in the kernel.) OK, maybe some kernels do include some syscalls too, but that's another story :)
Anyway, if I get your question right, you're not looking for an implementation but an actual call. GNU libc is too complicated for me, but you can try browsing the dietlibc sources. Some examples:
chdir.S
syscalls.h
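For comparison, here is the same idea in C, issuing the chdir system call through glibc's generic syscall(2) wrapper instead of hand-written assembly (normally you would of course just call chdir()):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* Ask the kernel to change directory by raising the chdir syscall
     * directly, bypassing the libc chdir() wrapper. */
    long ret = syscall(SYS_chdir, "/tmp");
    if (ret != 0) {
        perror("SYS_chdir");
        return 1;
    }
    printf("directory changed via SYS_chdir\n");
    return 0;
}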
I need to build an OS, a very small and basic one, with really minimal functionality, coded in C.
Probably a CUI OS which does some memory management and has at least a text editor and a calculator; it's just going to be an experiment in writing code that has full and direct control over your hardware.
Still, I'll be requiring an interface, which will need input/output functions like printf(&args) and scanf(&args). Now my basic question is: should I use the existing headers or code them from scratch, and why?
I'd be more than thankful to you guys for any help.
First, you can't link against anything from libc ... you're going to have to code everything from scratch.
Now having worked on a micro-kernel myself, I would not use the actual stdio headers that come with libc since they are going to be cluttered with a lot of extra information that will be either irrelevant for your OS, or will create compiler errors due to missing definitions, etc. What I would do though is keep the function signatures for these standard functions the same ... so in the end you would have a file called stdio.h for your OS, but it would be a very stripped down header file with the basic minimum requirements for your needs, and only having the standard I/O functions you need, with the correct standard signatures.
Keep in mind on the back-end, i.e., in your stdio.c file, you're going to have to point these functions to a custom console driver or some other type of character driver for your display. Either that, or you could just use them as wrappers for some other kernel-level display printing routine. You are also going to want to make sure that even though you may use a #include <stdio.h> directive in your other OS code modules to access these printing functions, you do not link against libc. This can be done using gcc -ffreestanding.
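A minimal sketch of what that back-end might look like (console_putc() is a hypothetical routine exported by your console or UART driver; your stripped-down stdio.h would carry just these prototypes):

/* stdio.c for the hobby OS: standard signatures, minimal back-end.
 * Build everything with gcc -ffreestanding so libc is never pulled in. */

void console_putc(char c);   /* provided elsewhere by your display driver */

int putchar(int c)
{
    console_putc((char)c);
    return c;
}

int puts(const char *s)
{
    while (*s)
        putchar(*s++);
    putchar('\n');
    return 0;
}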
Just retarget newlib.
printf, scanf, etc. rely on implementation-specific functions to get or print a single char. You can then make your stdin and stdout UART 1, for example.
The kernel itself would not require the printf and scanf functions unless you want to stay in kernel mode and run the apps you have planned there. But for basic printf and scanf features, you can write your own printf and scanf functions, which would provide basic support for printing and taking input. I do not have much experience with this, but you can try making a console buffer where the keyboard driver puts the ASCII characters it has read (after conversion from scan codes), and then make printf and scanf work on it. I have one basic implementation where I wrote a gets instead of scanf and kept things simple. To get integer values you can write an atoi function to convert the string to a number.
To port in other libraries, you need to provide the components those libraries depend on. You have to decide whether you can code that support into the kernel so that the libraries can be ported. If that turns out to be harder, then coding some basic input/output functions yourself won't be bad at this stage.
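A minimal sketch of the atoi idea mentioned above (no overflow handling, optional leading minus sign only):

/* Convert the string collected by your gets()-style routine into an int. */
int my_atoi(const char *s)
{
    int sign = 1, value = 0;

    if (*s == '-') {
        sign = -1;
        s++;
    }
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (*s - '0');
        s++;
    }
    return sign * value;
}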