Getting access to Raspberry Pi registers in C programming

I am trying to get access to the register of my Raspberry Pi.
To be a bit more specific, http://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf has some Hardware Timers on page 172-173.
I want to use them because I have to write two functions HW_GetTimer() and HW_ClearTimer().
I can't find a good way to communicate with those registers. Is this possible? Is there an existing C function that I don't know about?

First of all, a word of warning: These registers are likely used by the Operating System, so if you fiddle with them, chances are that you break something...
That said, there are two options:
The proper one: write a kernel driver, and you'll have plenty of functions to access the hardware in a sane and controlled way. Or chances are that there is already a driver that does exactly what you are trying to do; if that's the case, just find it and use the interface it exposes. Reading the kernel source is fun!
The easy one: from user-mode land, open /dev/mem and mmap() the addresses you want to access into your process memory. Then you can read/write (use volatile pointers, please!) as you will. Note that you cannot read()/write() from/to /dev/mem, only mmap().
Obviously, for the user-mode thing you have to have the proper permissions or be root.
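For illustration, here is a minimal sketch of that user-mode approach, assuming the BCM2835 peripheral base of 0x20000000 (the original Pi; later models use a different base) and the System Timer block at offset 0x3000, whose free-running 1 MHz counter low word (CLO) sits at register offset 0x04 per the datasheet pages you linked:

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define PERIPH_BASE  0x20000000UL   /* BCM2835 only; adjust per model */
    #define TIMER_OFFSET 0x3000UL       /* System Timer block */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Map one page covering the timer registers (must run as root). */
        volatile uint32_t *timer = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                        MAP_SHARED, fd,
                                        PERIPH_BASE + TIMER_OFFSET);
        if (timer == MAP_FAILED) { perror("mmap"); return 1; }

        /* CLO (counter low word) is at offset 0x04, i.e. word index 1. */
        printf("timer CLO = %u\n", timer[1]);

        munmap((void *)timer, 4096);
        close(fd);
        return 0;
    }

Your HW_GetTimer() could then simply return timer[1]; something like HW_ClearTimer() would write to the CS register at offset 0x00 in the same mapped page (check the datasheet for its write-1-to-clear semantics).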

Guess: You are using Linux.
If you are trying to do this in conjunction with Linux, there is usually already a driver (yes, even for timers!); the hardware timers are used internally for scheduling, tasklets and other things. From userspace you should instead use poll or epoll with no file descriptors and just rely on the timeout, as sketched below. That will get you as close as the scheduler's granularity allows.
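A minimal sketch of that poll-with-timeout idea (no file descriptors at all, just the timeout):

    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        /* Zero descriptors; poll() just sleeps until the timeout fires. */
        int ret = poll(NULL, 0, 250);   /* 250 ms */
        if (ret == 0)
            puts("250 ms elapsed (subject to scheduler granularity)");
        return 0;
    }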
Another way would be to check in the kernel code whether the timer is already used; if not, you could simply export it via a kernel module, though that requires at least a basic understanding of the CPU and of how the kernel works, so that you can implement it without security implications or risk of a crash (or both).
I omit the bare metal way here...

Related

Using sock_create, accept, bind etc in kernel

I'm trying to implement an echo TCP server as a loadable kernel module.
Should I use sock_create, or sock_create_kern?
Should I use accept, or kernel_accept?
I mean it does make sense that I should use kernel_accept for example; but I don't know why. Can't I use normal sockets in the kernel?
The problem is, you are trying to shoehorn a user-space application into the kernel.
Sockets (and files and so on) are things the kernel provides to userspace applications via the kernel-userspace API/ABI. Some, but not all, also have an in-kernel callable, for cases when another kernel thingy wishes to use something provided to userspace.
Let's look at the Linux kernel implementation of the socket() or accept() syscalls, in net/socket.c in the kernel sources; look for SYSCALL_DEFINE3(socket,, SYSCALL_DEFINE3(accept,, SYSCALL_DEFINE4(recv,, and so on (the trailing comma is part of the pattern to search for).
(I recommend you use e.g. Elixir Cross Referencer to find specific identifiers in the Linux kernel sources, then look up the actual code in one of the official kernel Git trees online; that's what I do, anyway.)
Note how pointer arguments have a __user qualifier: this means the data pointed to must reside in user space, and that the functions will eventually use copy_from_user()/copy_to_user() to retrieve or set the data. Furthermore, the operations access the file descriptor table, which is part of the process context: something that normally only exists for userspace processes.
Essentially, this means your kernel module must create a userspace "process" (enough of one to satisfy the requirements of crossing the userspace-kernel boundary when using kernel interfaces) to "hold" the memory and file descriptors, at minimum. It is a lot of work, and in the end, it won't be any more performant than a userspace application would be. (Linux kernel developers have worked on this for literally decades. There are some proprietary operating systems where doing stuff in "kernel space" may be faster, but that is not so in Linux. The cost of doing things in userspace is some context switches, and possibly some memory copies for the transferred data.)
In particular, the TCP/IP and UDP/IP interfaces (see e.g. net/ipv4/udp.c for UDP/IPv4) do not seem to have any interface for kernel-side buffers (other than directly accessing the rx/tx socket buffers, which are in kernel memory).
You have probably heard of the TUX web server, a subsystem patch to the Linux kernel by Ingo Molnár. Even that is not a "kernel module server", but more like a subsystem that a userspace process can use to implement a server that runs mostly in kernel space.
The idea of a kernel module that provides a TCP/IP and/or UDP/IP server, is simply like trying to use a hammer to drive in screws. It will work, after a fashion, but the results won't be pretty.
However, for the particular case of an echo server, it just might be possible to bolt it on top of IPv4 (see net/ipv4/) and/or IPv6 (see net/ipv6/) similar to ICMP packets (net/ipv4/icmp.c, net/ipv6/icmp.c). I would consider this route if and only if you intend to specialize in kernel-side networking stuff, as otherwise everything you'd learn doing this is very specialized and not that useful in practice.
If you need to implement something kernel-side for an exercise or something, I'd recommend steering away from "application"-type ideas (services or similar).
Instead, I would warmly recommend developing a character device driver, possibly implementing some kind of inter-process communications layer, preferably bus-style (i.e., one sender, any number of recipients). Something like that has a number of actual real-world use cases (both hardware drivers, as well as stranger things like kdbus-type stuff), so anything you'd learn doing that would be real-world applicable.
(In fact, an echo character device -- which simply outputs whatever is written to it -- is an excellent first target. Although LDD3 is for Linux kernel 2.6.10, it should be an excellent read for anyone diving into Linux kernel development. If you use a more recent kernel, just remember that the example code might not compile as-is, and you might have to do some research wrt. Linux kernel Git repos and/or a kernel source cross referencer like Elixir above.)
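To make that concrete, here is a minimal sketch of such an echo character device as a loadable module, using the misc-device framework of recent kernels (LDD3-era code registers the device differently). The device name "echo0" is made up for this example, and real code would need locking around the shared buffer; note the __user pointers and copy_from_user()/copy_to_user() discussed above:

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    #define ECHO_BUF_SIZE 4096

    static char echo_buf[ECHO_BUF_SIZE];
    static size_t echo_len;

    static ssize_t echo_write(struct file *f, const char __user *ubuf,
                              size_t count, loff_t *ppos)
    {
        if (count > ECHO_BUF_SIZE)
            count = ECHO_BUF_SIZE;
        /* __user pointer: must copy, never dereference directly. */
        if (copy_from_user(echo_buf, ubuf, count))
            return -EFAULT;
        echo_len = count;
        return count;
    }

    static ssize_t echo_read(struct file *f, char __user *ubuf,
                             size_t count, loff_t *ppos)
    {
        if (*ppos >= echo_len)
            return 0;                          /* EOF */
        if (count > echo_len - *ppos)
            count = echo_len - *ppos;
        if (copy_to_user(ubuf, echo_buf + *ppos, count))
            return -EFAULT;
        *ppos += count;
        return count;
    }

    static const struct file_operations echo_fops = {
        .owner = THIS_MODULE,
        .read  = echo_read,
        .write = echo_write,
    };

    static struct miscdevice echo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "echo0",                      /* appears as /dev/echo0 */
        .fops  = &echo_fops,
    };

    module_misc_device(echo_dev);
    MODULE_LICENSE("GPL");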
In short, sockets are just a mechanism that enables two processes to talk, locally or remotely.
If you want to send data from the kernel to userspace, you have to use kernel sockets: sock_create_kern() and its family of functions.
What would be the benefit of a TCP echo server as a kernel module?
It makes sense only if your TCP server provides data which is otherwise not accessible from userspace, e.g. reading some post-mortem NVRAM that you can't read normally and sending it to rsyslog via a socket. A minimal sketch of the kernel-socket setup is below.
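This sketch assumes a recent kernel (where sock_create_kern() takes a struct net argument); port 7777 is made up, and the accept/receive loop is left out:

    #include <linux/module.h>
    #include <linux/net.h>
    #include <linux/in.h>
    #include <net/net_namespace.h>
    #include <net/sock.h>

    static struct socket *srv;

    static int __init echosrv_init(void)
    {
        struct sockaddr_in addr = {
            .sin_family      = AF_INET,
            .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
            .sin_port        = htons(7777),
        };
        int err;

        /* Kernel-side socket: no file descriptor, no __user buffers. */
        err = sock_create_kern(&init_net, AF_INET, SOCK_STREAM,
                               IPPROTO_TCP, &srv);
        if (err)
            return err;

        err = kernel_bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        if (!err)
            err = kernel_listen(srv, 8);
        if (err)
            sock_release(srv);
        /* A real server would now kernel_accept() in a kthread and use
         * kernel_recvmsg()/kernel_sendmsg() to echo data back. */
        return err;
    }

    static void __exit echosrv_exit(void)
    {
        sock_release(srv);
    }

    module_init(echosrv_init);
    module_exit(echosrv_exit);
    MODULE_LICENSE("GPL");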

What's the relationship between VDSO(7) and SYSCALL(2)?

From this post, I learned
syscall is the default way of entering kernel mode on x86-64.
In practice, recent kernels are implementing a VDSO
Then I looked up the manual, at http://man7.org/linux/man-pages/man2/syscall.2.html:
The first table lists the instruction used to transition to kernel mode (which might not be the fastest or best way to transition to the kernel, so you might have to refer to vdso(7)), the register used to indicate the system call number, the register used to return the system call result, and the register used to signal an error.
But I lack some essential knowledge to understand the statements.
Is it true that vdso(7) is the implementation of syscall(2), or does syscall(2) invoke vdso(7) to complete the system call?
If not, what's the relationship between vdso(7) and syscall(2)?
vdso(7) is not the implementation of syscall(2).
Without the vDSO, a system call made by a user-space application always enters the kernel, which incurs a context switch. With the vDSO, a few specific calls can complete entirely in user space, without that switch: the kernel automatically maps the vDSO into the address space of every user-space application.
Read more carefully the man pages syscalls(2), vdso(7) and the wikipages on system calls and VDSO. Read also the operating system wikipage and Operating Systems: Three Easy Pieces (freely downloadable).
System calls are fundamental: they are the only way a user-space application can interact with the operating system kernel and use the services provided by it. So every program uses some system calls (unless it crashes and is terminated by some signal(7)). A system call requires a user-to-kernel transition (e.g. through a SYSCALL or SYSENTER machine instruction on x86), which is somewhat "costly" (it could take a microsecond).
VDSO is only a clever optimization (to avoid the cost of a genuine system call, for very few functions like clock_gettime(2) which also still exist as genuine system calls), a bit like some shared library magically provided by the kernel without any real file. Some programs (e.g. statically linked ones, or those not using libc like BONES or probably busybox) don't use it.
You can avoid VDSO (or not use it), and earlier kernels did not have it. But you cannot avoid doing system calls, and programs usually do a lot of them.
Play also with strace(1) to understand the (many) system calls done by an application or a running process.
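To see the difference yourself, compare the libc/vDSO path with a forced genuine system call; a small sketch (run it under strace(1): the first call typically does not appear in the trace, the second one does):

    #include <stdio.h>
    #include <time.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct timespec ts;

        /* Usually satisfied from the vDSO: no kernel entry at all. */
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("vDSO path:    %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);

        /* Forces a real system call, bypassing the vDSO. */
        syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);
        printf("syscall path: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);

        return 0;
    }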

Dependency of Run time library on operating system

I was going through this tutorial about how to write a minimalist kernel. Partway through, I read this:
The Run-Time Library
A major part of writing code for your OS is rewriting the run-time library, also known as libc. This is because
the RTL is the most OS-dependent part of the compiler package: the C
RTL provides enough functionality to allow you to write portable
programs, but the inner workings of the RTL are dependent on the OS in
use. In fact, compiler vendors often use different RTLs for the same
OS: Microsoft Visual C++ provides different libraries for the various
combinations of debug/multi-threaded/DLL, and the older MS-DOS
compilers offered run-time libraries for up to 6 different memory
models.
I am kind of confused by this part. Suppose I write my kernel in C and, against the advice, use the built-in printf() function to print something. Eventually my code will be translated to machine code, and when it is executed the processor will run it directly. So why does the author say:
inner workings of the RTL are dependent on the OS in use?
There are two separate issues:
What will printf() do when run inside of your kernel? Most likely it will crash or do nothing, since the RTL of the C compiler you use to develop your kernel is probably assuming some runtime environment with console, operating system, etc. Even if you're using a freestanding implementation of C/C++, the runtime will likely take over serial ports or whatnot to perform the output. You don't want that, probably, since your kernel's drivers will control the I/O. So you need to reimplement the underlying file I/O from the RTL.
What will printf() do when run in a user process that runs on top of your kernel? If the kernel protects access to hardware resources, it can't do anything. The underlying file I/O code from the RTL has to be aware of how to communicate with the kernel to open whatever passes for standard input/output "files" and to perform data exchange.
You need to be aware of whether you're using a free-standing or hosted implementation of the C/C++ compiler + RTL, and all of the implications. For kernel development, you'll be using a free-standing implementation. For userspace development, you'll want a hosted implementation, perhaps a cross-compiler, but the runtime library must be written as for a hosted implementation. Note that in both cases you can use the same compiler, you just need to point it to appropriate header files and libraries. On Linux, for example, kernel and userspace development can be done using the very same gcc compiler, with different headers and libraries.
The processor has no clue what a console is, or what a kernel is. Some code has to actually access the hardware. When you take printf() from a hosted C/C++ implementation, that implementation, somewhere deep in its guts, will invoke a system call for the particular platform it was meant to run on. That system call is meant to write to some abstraction that wraps the "console". On the other side of this system call is kernel code that will push this data to some hardware. It may not even be hardware directly, it may well be userspace of another process.
For example, whenever you run things in a GUI-based terminal on a Unix machine (KDE's Konsole, X11 xterm, OS X Terminal, etc.), the userland process invoking printf() has very, very far to go before anything hits hardware. Namely (even this is simplified!):
printf() writes data to a buffer
The buffer is flushed to (written to) a file handle. The write() library function is called.
The write() library function invokes a system call that transfers the control over to the kernel.
The kernel code copies the data from the userspace pages, since those can vanish at any time, to a kernel-side non-paged buffer.
The kernel code invokes the write handler for the given file handle - a file handle, in many kernels, is implemented as a class with virtual methods.
The file handle happens to be a pseudo-terminal (pty) slave. The write method passes the data to the pty master.
The pty master fills up the read buffer of given pseudo-terminal, and wakes up the process waiting on the related file handle.
The process implementing the GUI terminal wakes up and read()s the file handle. This goes through the library to a syscall.
The kernel invokes the read handler for the pty master file handle.
The read handler copies its buffered data to the userspace.
The syscall returns.
The terminal process takes the data, parses it for control codes, and updates its internal data structure representing the emulated screen. It also queues an update event in the event queue. Control returns to the event loop of the GUI library/framework. This is done through an event since those events are usually coalesced. If there's a lot of data available, it will be all processed to update the screen data structure before anything gets repainted.
The event dispatcher dispatches the update/repaint event to the "screen" widget/window.
The event handler code in the widget/window uses the internal data structure to "paint" somewhere. Usually it'd be on a bitmap backing store.
The GUI library/framework code signals the operating system's graphics driver that new data is available on the backing store.
Again, through a syscall, the control is passed over to the kernel. The graphics driver running in the kernel will do the necessary magic on the graphics hardware to pass the backing bitmap to the screen. It may be an explicit memory copy, or a simple queuing of a texture copy with the graphics hardware.
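Steps 1-3 are easy to observe for yourself; a small sketch (run it under strace: the single write() only appears when the buffer is flushed, not when printf() is called):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("hello");   /* step 1: data only lands in the stdio buffer */
        sleep(2);          /* nothing visible yet: no newline, no flush    */
        fflush(stdout);    /* steps 2-3: buffer flushed, write() syscall   */
        return 0;
    }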
printf() is a high-level function that can be independent of the OS. It is, however, just part of the puzzle; it has dependencies itself. It needs to be able to write to stdout, which results in low-level, OS-dependent system calls, like open() to open the stdout stream and write() to send the printf output there. Different OSes have different system calls, so there's always an adaptation layer; there will be one in yours.
So sure, you can make printf() work in your kernel. Actually seeing the output of the calls to printf() is going to be the real problem to solve; there is nothing like a terminal window in kernel mode.

how do you do system call interrupts in C?

I learned from download.savannah.gnu.org/.../ProgrammingGroundUp-1-0-booksize.pdf
that programs interrupt the kernel, and that is how things are done. What I want to know is how you do that in C (if it's possible).
There is no platform-independent way (obviously)! On x86 platforms, system calls are typically implemented by placing the system-call number in the eax register and triggering int 80h in assembler, which causes a switch to kernel mode. The kernel then executes the relevant code based on what it sees in eax.
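For example, here is a sketch of a raw write(2) via int 80h using GCC inline assembly; this is x86-32 Linux only (syscall number 4 is write there) and will not work as-is on x86-64:

    int main(void)
    {
        static const char msg[] = "hello via int 0x80\n";
        long ret;

        __asm__ volatile (
            "int $0x80"
            : "=a" (ret)               /* result comes back in eax   */
            : "a" (4),                 /* eax = __NR_write on x86-32 */
              "b" (1),                 /* ebx = fd 1 (stdout)        */
              "c" (msg),               /* ecx = buffer               */
              "d" (sizeof msg - 1)     /* edx = byte count           */
            : "memory"
        );

        return ret < 0;
    }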
User processes usually request kernel services by calling system call wrapper functions from Standard C Library. You can do it manually with syscall(2).
The user program's interaction with the kernel is going to be very platform-specific, so it usually happens behind the scenes in the various library routines. So one just calls printf, write, select, or other library routines, which allow the programmer to write code without worrying about the details of the kernel interface, device drivers, and so forth.
And the way it usually works is that when one of those library routines needs the kernel to do something on its behalf, it performs a low-level system call that yields its control of the CPU to the kernel. It's the user program, not the kernel, that is the one being interrupted.
If you're using glibc (which you probably are if you are using gcc and Linux) then there is a syscall function in unistd.h that you can use. It has different implementations for different architectures and operating systems, but the implementation is done in assembly (possibly inline assembly). syscall has a man page, so:
man syscall
will give you some info.
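A short sketch of using it, invoking getpid(2) directly through the wrapper:

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same effect as getpid(), but going through syscall() directly. */
        long pid = syscall(SYS_getpid);
        printf("pid via syscall(): %ld\n", pid);
        return 0;
    }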
If you are just curious about how all of this works, then you should know that this has changed in Linux on x86 in recent years. Originally, interrupt 0x80 was used by Linux as the normal system call entry point on x86. This worked well enough, but as processors gained more advanced pipelining (starting an instruction before previous instructions have completed), interrupts slowed down relative to regular code, which has sped up (some tests suggest interrupts have slowed down in absolute terms as well). The reason for this is that even when the int instruction is used to trigger an interrupt, it works mostly the same as hardware-triggered interrupts, which occur unpredictably and therefore don't play nice with the pipelining of instructions (pipelining works better when code paths are predictable).
To help with this, newer x86 processors have instructions specifically intended for making system calls, but Intel and AMD use different instructions for this (sysenter and syscall, respectively). Additionally, the Intel sysenter instruction clobbers a general-purpose register that Linux has used on x86_32 to pass a parameter to the kernel. This means that programs have to know which of 3 possible system call mechanisms to use, as well as possibly different ways of passing arguments to the kernel. To get around all of this, newer kernels map a special page of memory into programs (this page is called vsyscall, and if you cat /proc/self/maps you will see an entry for it) that contains code for the system call mechanism that the kernel has determined should be used on the system, and newer versions of glibc can implement their system call entry using the code in this page.
The point of all of this is that this isn't as simple as it used to be, but if you are just playing around on an x86_32 then you should be able to use the int 80h instruction because that will be supported on systems that can use one of the other mechanisms for backwards compatibility.
In C, you don't really do it directly, but you'll end up doing this indirectly any time you use library functions that end up invoking system calls. File access, network access, etc, are typical examples of this.
Those functions will all end up "trapping" to the kernel, which will handle the request.

input and output without a library in C

I'm writing a small kernel for my programs in C.
This is not (at the moment) an OS kernel, it's merely a way for me to keep track of input and output in programs without relying on an external source (e.g. stdio.h). You might ask me why I'd ever want to do this; it's just so I know how this works, and so that I have more and more (the end goal being total) control of program flow.
I was wondering if anyone knows some tutorials on input and output in C (with inline asm?) without relying on any other code.
There is a lot of room between the bare metal and stdio. You have said you aren't writing an OS kernel, but not whether or not you are running under an OS.
Running directly on hardware without an OS, you will still want to encapsulate all of your I/O operations in a module, even if you don't formally define a device driver interface and framework for all of your I/O modules to follow. This is hugely architecture dependent, and makes you responsible for knowing all of the details of interaction with every I/O device you might ever use. For some devices, this can quickly become a huge development effort. That isn't a problem for embedded systems, but running on commercial hardware this way is neither easy nor recommended.
Running within an OS, you probably don't get (and shouldn't want to get) access to the actual hardware registers and interrupts. If you are developing a custom I/O device, the best practice is to make it conform to existing standards so that you need as little low level custom software for it as possible. This is why you see a lot of custom user interface gadgets connecting via USB and identifying themselves as HIDs (Human Interface Devices). As a HID, the existing USB drivers take care of the physical layer, and the OS-supplied HID driver takes care of the logical interface, providing a very simple high level access API to the application.
One of the operating system's key roles is to provide a consistent I/O API across all devices. Generally, that takes the form of open(), close(), read(), write(), and ioctl() functions (the names vary, but some form of at least the first four will always exist). The OS layer is quite raw, however. Typically, an OS call is forwarded without much processing to a device driver, which then forwards the data on to the device. Usually, the OS low level calls block the caller until they complete, and often they have restrictions on the sizes of the buffers that make sense. For instance, raw access to a disk device is usually required to be for an integral number of disk blocks at a time.
And don't forget about things like file systems and network protocols... all of which are made much more reliable and compatible by encapsulation within an operating system.
Even if it is acceptable to call read() and write() for single characters, that is usually not the best performance possible. Operating system calls are relatively expensive, and if you can read multiple characters in a single call, your performance can go way up.
That is the origin of the stdio library for C, and various other buffering libraries in other environments. The stdio library provides a buffering layer that isolates the C code from the block size of the underlying hardware. Even on an entirely home-grown operating system where you have full control over all the devices, something like C stdio will still be valuable.
Writing your own stdio replacement is a highly valuable exercise, even if you don't use it in production code, and is one I would recommend to anyone wanting to learn about what really goes on between printf() and scanf() and the terminal or files.
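As a taste of what such an exercise involves, here is a sketch of the core buffering idea: batch single characters into one write(2) per flush (the names my_putchar/my_flush are made up for this example):

    #include <unistd.h>

    static char out_buf[1024];
    static size_t out_len;

    static void my_flush(void)
    {
        if (out_len > 0) {
            (void)write(STDOUT_FILENO, out_buf, out_len);  /* one syscall */
            out_len = 0;
        }
    }

    static void my_putchar(char c)
    {
        out_buf[out_len++] = c;
        if (out_len == sizeof out_buf || c == '\n')  /* line-buffered policy */
            my_flush();
    }

    int main(void)
    {
        for (const char *s = "buffered hello\n"; *s; s++)
            my_putchar(*s);
        my_flush();
        return 0;
    }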
One valuable resource is the book The Standard C Library by P.J. Plauger. In it, the author presents an implementation of the complete C runtime library specified in the ANSI standard. His discussion of the specific implementation choices he made is valuable and apropos to the context of this question, and the discussions of why some of the standard library features were specified is interesting as well.
This sort of thing is very architecture-specific. To put it simply, your I/O devices will raise hardware interrupts to the CPU. The CPU will call the code associated with the interrupt, which will deal with it appropriately: for an input device it will fetch the data that is available from the device; for an output device, the interrupt usually means that the device is ready to accept the next piece of data.
The old 8088/8086 CPU architecture is a nice simple place to start to get your head around this. Typically, the BIOS would be where the hardware interrupts would have been handled, but it was always possible to write your own. ;)
